A new lightweight forest fire object detection algorithm based on YOLOv5s was proposed to address the low accuracy, poor flexibility, and strict software and hardware constraints of previous UAV-embedded equipment for forest fire inspection. The proposed algorithm replaced the YOLOv5s backbone with the lightweight network ShuffleNetV2, employing channel recombination (channel shuffle) to speed up image feature extraction in the backbone while maintaining both high accuracy and fast detection speed.
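The channel recombination at the core of ShuffleNetV2 can be sketched in a few lines of PyTorch; the group count and tensor shapes below are illustrative assumptions rather than settings from the paper.

```python
import torch


def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    """Recombine channels across groups so information can flow between
    the grouped (pointwise) convolutions of a ShuffleNetV2 block."""
    n, c, h, w = x.shape
    assert c % groups == 0, "channel count must be divisible by groups"
    # (N, C, H, W) -> (N, g, C//g, H, W) -> swap group/channel axes -> flatten back
    x = x.view(n, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(n, c, h, w)


if __name__ == "__main__":
    # Toy example: 8 channels in 2 groups become interleaved after the shuffle.
    feat = torch.arange(8, dtype=torch.float32).view(1, 8, 1, 1)
    print(channel_shuffle(feat, groups=2).flatten().tolist())
    # [0.0, 4.0, 1.0, 5.0, 2.0, 6.0, 3.0, 7.0]
```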
Then, a coordinate attention (CA) module, a positional attention mechanism designed specifically for lightweight networks, was added at the connection between the Backbone and the Neck; it aggregates positional information from different image locations into the channel dimension, thus strengthening attention on the detected object.
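A compact PyTorch sketch of a coordinate attention block is given below, following the usual CA structure (direction-aware pooling, a shared 1x1 reduction, and per-direction attention maps); the reduction ratio, activation, and layer names are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn


class CoordinateAttention(nn.Module):
    """Coordinate attention: pool features along H and W separately so the
    resulting channel weights retain positional information in both axes."""

    def __init__(self, channels: int, reduction: int = 32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.Hardswish()
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        # Direction-aware pooling: one descriptor per row and one per column.
        x_h = x.mean(dim=3, keepdim=True)                      # (N, C, H, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)  # (N, C, W, 1)
        y = self.act(self.bn1(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                      # (N, C, H, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # (N, C, 1, W)
        return x * a_h * a_w  # broadcast to (N, C, H, W)
```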
Finally, the CIoU loss function was used in the prediction part to better optimize the width-to-height ratio of the predicted bounding boxes and to accelerate model convergence.
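For reference, a minimal CIoU loss computation on corner-format (x1, y1, x2, y2) boxes is sketched below; the function name and box format are assumptions, while the IoU, center-distance, and aspect-ratio terms follow the standard CIoU definition.

```python
import math
import torch


def ciou_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    """Mean CIoU loss for boxes given as (x1, y1, x2, y2) with shape (N, 4)."""
    # IoU term.
    ix1, iy1 = torch.max(pred[:, 0], target[:, 0]), torch.max(pred[:, 1], target[:, 1])
    ix2, iy2 = torch.min(pred[:, 2], target[:, 2]), torch.min(pred[:, 3], target[:, 3])
    inter = (ix2 - ix1).clamp(min=0) * (iy2 - iy1).clamp(min=0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)

    # Normalized squared distance between box centers (c is the enclosing-box diagonal).
    cw = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
    ch = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])
    c2 = cw ** 2 + ch ** 2 + eps
    rho2 = ((pred[:, 0] + pred[:, 2] - target[:, 0] - target[:, 2]) ** 2
            + (pred[:, 1] + pred[:, 3] - target[:, 1] - target[:, 3]) ** 2) / 4

    # Aspect-ratio consistency term.
    wp, hp = pred[:, 2] - pred[:, 0], pred[:, 3] - pred[:, 1]
    wt, ht = target[:, 2] - target[:, 0], target[:, 3] - target[:, 1]
    v = (4 / math.pi ** 2) * (torch.atan(wt / (ht + eps)) - torch.atan(wp / (hp + eps))) ** 2
    alpha = v / (1 - iou + v + eps)

    return (1 - iou + rho2 / c2 + alpha * v).mean()
```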
The results of the algorithm deployed on a Jetson Xavier NX show that, compared with the Faster R-CNN, SSD, YOLOv4, and YOLOv5s comparison methods, the improved network reduced the model size by up to 98% while increasing precision to 92.6%, accuracy to 95.3%, and the frame rate to 132 frames/s. It can effectively achieve real-time prevention and detection of forest fires in daylight, darkness, or good visibility, exhibiting good accuracy and robustness.
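The reported frame rate can be checked on the target device with a simple timing loop such as the one below; the input resolution, warm-up count, and batch size are illustrative assumptions rather than the paper's benchmark settings.

```python
import time
import torch


def measure_fps(model: torch.nn.Module, img_size: int = 640, runs: int = 200) -> float:
    """Average end-to-end inference FPS for a detector on the current device."""
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = model.to(device).eval()
    dummy = torch.randn(1, 3, img_size, img_size, device=device)

    with torch.no_grad():
        for _ in range(20):              # warm-up iterations
            model(dummy)
        if device.type == "cuda":
            torch.cuda.synchronize()     # flush queued GPU work before timing
        start = time.perf_counter()
        for _ in range(runs):
            model(dummy)
        if device.type == "cuda":
            torch.cuda.synchronize()
    return runs / (time.perf_counter() - start)
```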