AI-DRIVEN TRANSFORMER-BASED FRAMEWORK FOR IDENTIFICATION AND TRACKING OF CHELONIAN SPECIES USING SATELLITE IMAGERY AND DRONE FOOTAGE
Abstract
Chelonian species, including turtles, tortoises, and terrapins, face increasing threats from habitat degradation, climate change, and illegal poaching. Accurate identification and tracking are crucial for effective conservation. This study proposes a novel AI-driven framework that uses Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs) to automate the detection and monitoring of chelonian species from satellite imagery and drone footage. The framework employs YOLO (You Only Look Once) for real-time object detection and a Swin Transformer for enhanced feature extraction across large-scale imagery. By combining spatiotemporal analysis with machine learning, the system distinguishes between chelonian species, tracks their movements, and monitors population dynamics. Our approach integrates a hybrid classification model that couples CNN feature extraction with Long Short-Term Memory (LSTM) networks to analyze sequential movement patterns, enabling precise tracking over time. The proposed system is evaluated on diverse datasets, including open-source satellite archives and drone-captured videos, achieving over 95% accuracy in species identification and trajectory prediction. This AI-driven methodology significantly reduces manual effort, improves monitoring accuracy, and provides real-time insights for conservationists. The results demonstrate the effectiveness of advanced AI algorithms in wildlife conservation, offering a scalable solution for long-term chelonian species preservation.
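To make the hybrid CNN + LSTM component concrete, the sketch below shows one plausible PyTorch realization: a CNN backbone extracts per-frame features from a sequence of detection crops, and an LSTM aggregates them over time into a species/movement classification. This is an illustrative reconstruction rather than the authors' implementation; the ResNet-18 backbone, hidden size, class count, and all names are assumptions.

import torch
import torch.nn as nn
from torchvision.models import resnet18

class CNNLSTMTracker(nn.Module):
    """Hypothetical hybrid model: per-frame CNN features + temporal LSTM."""

    def __init__(self, num_species: int = 5, hidden_dim: int = 256):
        super().__init__()
        backbone = resnet18(weights=None)   # per-frame feature extractor (assumed choice)
        backbone.fc = nn.Identity()         # keep the 512-d pooled features
        self.backbone = backbone
        self.lstm = nn.LSTM(input_size=512, hidden_size=hidden_dim,
                            num_layers=1, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_species)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, time, 3, H, W), e.g. crops of one tracked animal per frame
        b, t, c, h, w = clips.shape
        feats = self.backbone(clips.reshape(b * t, c, h, w))  # (b*t, 512)
        feats = feats.reshape(b, t, -1)                       # (b, t, 512)
        seq_out, _ = self.lstm(feats)                         # (b, t, hidden_dim)
        return self.classifier(seq_out[:, -1])                # last-step logits

if __name__ == "__main__":
    model = CNNLSTMTracker()
    dummy = torch.randn(2, 8, 3, 224, 224)  # 2 clips of 8 frames each
    print(model(dummy).shape)               # torch.Size([2, 5])

In the full pipeline the abstract describes, the per-frame crops fed to such a model would come from the YOLO detector, with Swin Transformer features available as a richer alternative to the plain CNN backbone assumed here.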
License
Copyright (c) 2025 Chelonian Research Foundation

This work is licensed under a Creative Commons Attribution 4.0 International License.