Research on small target fire detection model based on improved YOLOv5

Fangpu LI, Xue RUI, Zijun LI, Weiguo SONG

Journal of Tsinghua University (Science and Technology), 2025, 65(4): 655-663.

DOI: 10.16511/j.cnki.qhdxxb.2025.27.004
Fire in Forests


Abstract

Objective: Fires are highly destructive disaster events, and fire monitoring is one of the most effective measures for reducing the casualties and economic losses they cause. Compared with traditional fire-monitoring methods, target detection has shown advantages in both cost and performance, and many researchers have proposed new algorithms to improve its efficiency, yielding numerous algorithms suited to fire-monitoring applications. However, these algorithms typically lack the capacity to detect small targets, which is the main characteristic of flame targets in incipient fires. To enhance small-target detection for fire monitoring, this paper improves the YOLOv5 algorithm and trains a model on datasets collected for this purpose.

Methods: First, a fire image dataset covering small-target scene conditions is prepared for model training and performance testing; for performance testing, the validation set is divided into eight mutually exclusive sub-datasets by environmental condition. Second, three improvements are introduced into the YOLOv5 algorithm: a) the multiscale detection layers are expanded to increase detection resolution; b) the multiscale feature-extraction capability is enhanced by embedding the Swin transformer module, which also reduces the computational cost of deployment; and c) the postprocessing function is optimized by replacing the original non-maximum suppression (NMS) with soft-NMS, retaining more potentially valid adjacent targets. The resulting improved model is named YOLOv5s-SSS (Swin transformer with soft-NMS for small targets). To verify each improvement and its contribution to the final model, the new model is evaluated in four sets of ablation experiments.
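As context for improvement b, the core operation of the Swin transformer is self-attention computed inside local windows, with a cyclic shift applied in alternating blocks so that neighboring windows exchange information. A minimal NumPy sketch of these two operations (illustrative only; function names and shapes are assumptions, not the paper's actual module):

```python
import numpy as np

def window_partition(x, window_size):
    """Split an (H, W, C) feature map into non-overlapping
    (window_size, window_size, C) windows; H and W are assumed
    to be divisible by window_size."""
    H, W, C = x.shape
    x = x.reshape(H // window_size, window_size,
                  W // window_size, window_size, C)
    # Reorder to (row_block, col_block, i, j, C), then flatten blocks.
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, window_size, window_size, C)

def cyclic_shift(x, shift):
    """Cyclic shift applied before window attention in alternating
    Swin blocks, so consecutive blocks see overlapping windows."""
    return np.roll(x, shift=(-shift, -shift), axis=(0, 1))
```

Self-attention is then computed independently within each window, which keeps the cost roughly linear in image size rather than quadratic, consistent with the reduced deployment cost noted above.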
After parameter optimization, a set of fire images is input into the models from the ablation experiments to compare and verify their outputs.

Results: The ablation results indicate that all the improvements introduced into the algorithm are valid. Furthermore, the average accuracy of the improved model is 16.3% higher than that of the original algorithm on flame image targets under challenging scene conditions and 5.9% higher on normal-sized image targets. The verification results show that, compared with the original model, the improved model markedly tightens the localization of fire targets, reduces missed detections of small and densely distributed fire targets, and clearly separates densely packed or overlapping fire targets.

Conclusions: The dataset prepared in this paper can effectively support the training and testing of the improved fire detection model. The proposed improvements are shown to work effectively, backed by reliable performance tests, providing a new improvement scheme for fire image detection technology. The work can also serve as a reference for improving efficiency in applications such as the accurate positioning of fire points in incipient forest fires and the remote-sensing monitoring of large-scale fires. However, the overall accuracy of the improved model is relatively low, possibly because the images in the validation set were deliberately limited to small targets to assess the model's improvement. In future work, further improvements should be introduced to enhance detection under various scenarios, such as low-light conditions, so that the model is adequate for industrial applications.
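As background for improvement c: where hard NMS deletes any box whose overlap with a higher-scoring box exceeds a threshold, soft-NMS merely decays its score, so closely spaced flames are not suppressed as duplicates. A minimal sketch with Gaussian decay (parameter values and names are illustrative, not the paper's settings):

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an (N, 4) array, boxes as x1, y1, x2, y2."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter + 1e-9)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian soft-NMS: decay overlapping scores instead of deleting boxes."""
    scores = scores.astype(float).copy()
    keep, idxs = [], list(range(len(scores)))
    while idxs:
        best = max(idxs, key=lambda i: scores[i])  # highest remaining score
        keep.append(best)
        idxs.remove(best)
        if not idxs:
            break
        ious = iou(boxes[best], boxes[idxs])
        # Gaussian decay in place of hard suppression.
        scores[idxs] *= np.exp(-(ious ** 2) / sigma)
        idxs = [i for i in idxs if scores[i] > score_thresh]
    return keep
```

With two heavily overlapping flame boxes (IoU about 0.68), hard NMS at a 0.5 threshold would discard the lower-scoring one, while the sketch above keeps it with a decayed score, matching the behavior described in the abstract for densely distributed fire targets.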

Key words

deep learning / image recognition / fire monitoring / YOLOv5 / small target object detection

Cite this article

Download Citations
Fangpu LI, Xue RUI, Zijun LI, et al. Research on small target fire detection model based on improved YOLOv5[J]. Journal of Tsinghua University (Science and Technology), 2025, 65(4): 655-663. https://doi.org/10.16511/j.cnki.qhdxxb.2025.27.004


RIGHTS & PERMISSIONS

All rights reserved. Unauthorized reproduction is prohibited.