Fire detection automation in search drones using a modified DeepLabv3+ approach
Abstract
Drones have become a key component in modern search and rescue applications such as wildfire detection. Accurate detection of fire in forests is a crucial factor worldwide in reducing environmental damage and preserving wildlife. Current fire detection systems combine the merits of expert-learning systems and lightweight deep learning architectures. The key idea is to introduce color-based rules to identify potential fire pixels and create an associated mask that feeds a lightweight convolutional neural network (CNN) for image segmentation. However, expert-learning systems are not robust and suffer from cognitive biases that induce a high number of false positives. In addition, CNN-based architectures cannot capture long-range dependencies, which reduces segmentation fidelity. To overcome these gaps, this paper proposes a lightweight deep learning (DL) architecture for fire segmentation. The approach is inspired by the DeepLabv3+ architecture for image segmentation. The novelty lies in the incorporation of vision transformers, which substantially reduces model complexity and avoids the use of color-based rules. Experiments are conducted on open-access fire datasets. The results demonstrate competitive performance and highlight the model's potential for use in drone applications.
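To make the described design more concrete, the following is a minimal PyTorch sketch, not the authors' implementation, of a DeepLabv3+-style segmentation model in which a small vision-transformer encoder provides the long-range context and a shallow convolutional skip branch supplies low-level detail. All layer names, sizes, and depths here are illustrative assumptions; the paper's actual architecture and hyperparameters are not reproduced.

```python
# Illustrative sketch only: a ViT encoder feeding a DeepLabv3+-style decoder
# for binary fire segmentation. Sizes are assumptions chosen for brevity.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PatchEmbed(nn.Module):
    """Split the image into 16x16 patches and project them to embeddings."""
    def __init__(self, in_ch=3, embed_dim=192, patch=16):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, embed_dim, kernel_size=patch, stride=patch)

    def forward(self, x):
        x = self.proj(x)                                   # (B, C, H/16, W/16)
        b, c, h, w = x.shape
        return x.flatten(2).transpose(1, 2), (h, w)        # (B, N, C), grid size


class ViTEncoder(nn.Module):
    """Lightweight transformer encoder capturing long-range dependencies."""
    def __init__(self, embed_dim=192, depth=4, heads=3):
        super().__init__()
        self.patch_embed = PatchEmbed(embed_dim=embed_dim)
        layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=heads, dim_feedforward=embed_dim * 4,
            batch_first=True, norm_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):
        tokens, (h, w) = self.patch_embed(x)
        tokens = self.blocks(tokens)                       # global self-attention
        b, n, c = tokens.shape
        return tokens.transpose(1, 2).reshape(b, c, h, w)  # back to a feature map


class FireSegmenter(nn.Module):
    """DeepLabv3+-style head: low-level skip features + upsampled ViT features."""
    def __init__(self, embed_dim=192, num_classes=2):
        super().__init__()
        self.encoder = ViTEncoder(embed_dim=embed_dim)
        self.low_level = nn.Sequential(                    # shallow skip branch
            nn.Conv2d(3, 48, 3, stride=4, padding=1),
            nn.BatchNorm2d(48), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.Conv2d(embed_dim + 48, 128, 3, padding=1),
            nn.BatchNorm2d(128), nn.ReLU(),
            nn.Conv2d(128, num_classes, 1))

    def forward(self, x):
        low = self.low_level(x)                            # stride-4 features
        deep = self.encoder(x)                             # stride-16 features
        deep = F.interpolate(deep, size=low.shape[-2:],
                             mode="bilinear", align_corners=False)
        out = self.decoder(torch.cat([deep, low], dim=1))
        return F.interpolate(out, size=x.shape[-2:],
                             mode="bilinear", align_corners=False)


if __name__ == "__main__":
    model = FireSegmenter()
    logits = model(torch.randn(1, 3, 224, 224))
    print(logits.shape)                                    # torch.Size([1, 2, 224, 224])
```

The sketch keeps the parameter count small (a 4-layer, 192-dimensional transformer) in the spirit of a drone-deployable model, and it requires no color-based pre-masking: the per-pixel fire/no-fire logits are predicted directly from the raw image.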