Aerial Imagery dataset for fire detection: classification and segmentation (Unmanned Aerial Vehicle (UAV))

Overview

Aerial Imagery dataset for fire detection: classification and segmentation using Unmanned Aerial Vehicle (UAV)

Title

FLAME (Fire Luminosity Airborne-based Machine learning Evaluation) Dataset

Paper

You can find the published article related to this code at Elsevier (Computer Networks), or
you can find the preprint on the arXiv website.

Dataset

  • The dataset is uploaded on IEEE DataPort. You can find the dataset here at IEEE DataPort or via its DOI. An IEEE account is free, so you can create an account and access the dataset files without any payment or subscription.

  • The table below shows all the data available in the dataset.

  • This project uses items 7, 8, 9, and 10 from the dataset. Items 7 and 8 are used for the "Fire_vs_NoFire" image classification, and items 9 and 10 are used for the fire segmentation.

  • If you clone this repository to your local drive, please download item 7 from the dataset and unzip it into the directory /frames/Training/... for the training phase of the "Fire_vs_NoFire" image classification. The directory looks like this:

Repository/frames/Training
                    ├── Fire/*.jpg
                    ├── No_Fire/*.jpg
  • For testing your trained model, please use item 8 and unzip it into the directory /frames/Test/... . The directory looks like this:
Repository/frames/Test
                    ├── Fire/*.jpg
                    ├── No_Fire/*.jpg
  • Items 9 and 10 should be unzipped into the directories frames/Segmentation/Data/Images/... and frames/Segmentation/Data/Masks/..., respectively. The directory looks like this:
Repository/frames/Segmentation/Data
                                ├── Images/*.jpg
                                ├── Masks/*.png
  • Please remove any other README files from those directories and make sure that only images remain. A small script for checking this layout is sketched below.
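
The following is a minimal, optional sketch (not part of the repository's code) for checking that the directories above contain only image files; the paths simply mirror the directory trees shown above.

# Minimal sketch for verifying the expected data layout after unzipping
# items 7-10. Paths follow the directory trees shown above.
import os

EXPECTED_DIRS = [
    "frames/Training/Fire",
    "frames/Training/No_Fire",
    "frames/Test/Fire",
    "frames/Test/No_Fire",
    "frames/Segmentation/Data/Images",
    "frames/Segmentation/Data/Masks",
]

for d in EXPECTED_DIRS:
    if not os.path.isdir(d):
        print(f"Missing directory: {d}")
        continue
    files = os.listdir(d)
    images = [f for f in files if f.lower().endswith((".jpg", ".png"))]
    others = [f for f in files if not f.lower().endswith((".jpg", ".png"))]
    print(f"{d}: {len(images)} images")
    if others:
        # Leftover README or other non-image files should be removed.
        print(f"  Non-image files to remove: {others}")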

Model

  • The binary fire classification model of this project is based on the Xception network:

[Figure: Xception network architecture]
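
As a rough illustration of what an Xception-based binary classifier can look like in Keras, consider the sketch below. The input size (254 x 254) and training settings are assumptions for illustration, not the exact model or training code from this repository.

# Sketch of an Xception-based binary ("Fire" vs "No_Fire") classifier.
# Input size and hyperparameters are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_fire_classifier(input_shape=(254, 254, 3)):
    base = tf.keras.applications.Xception(
        include_top=False, weights=None, input_shape=input_shape)
    x = layers.GlobalAveragePooling2D()(base.output)
    x = layers.Dense(128, activation="relu")(x)
    output = layers.Dense(1, activation="sigmoid")(x)  # Fire vs No_Fire
    model = models.Model(base.input, output)
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model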

  • The fire segmentation model of this project is based on the U-Net:

[Figure: U-Net architecture]
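
Similarly, a minimal U-Net sketch is shown below for orientation; the depth, filter counts, and 512 x 512 input size are illustrative assumptions rather than the exact architecture used here.

# Minimal U-Net sketch for binary fire-mask segmentation.
# Depth, filter counts, and input size are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def build_unet(input_shape=(512, 512, 3)):
    inputs = layers.Input(input_shape)
    # Encoder
    c1 = conv_block(inputs, 16); p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 32);     p2 = layers.MaxPooling2D()(c2)
    c3 = conv_block(p2, 64);     p3 = layers.MaxPooling2D()(c3)
    # Bottleneck
    b = conv_block(p3, 128)
    # Decoder with skip connections
    u3 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b)
    c4 = conv_block(layers.concatenate([u3, c3]), 64)
    u2 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c4)
    c5 = conv_block(layers.concatenate([u2, c2]), 32)
    u1 = layers.Conv2DTranspose(16, 2, strides=2, padding="same")(c5)
    c6 = conv_block(layers.concatenate([u1, c1]), 16)
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c6)  # binary fire mask
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model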

Sample

  • A short sample video of the dataset is available on YouTube.

Requirements

  • os
  • re
  • cv2
  • copy
  • tqdm
  • scipy
  • pickle
  • numpy
  • random
  • itertools
  • Keras 2.4.0
  • scikit-image
  • Tensorflow 2.3.0
  • matplotlib.pyplot

Code

This code was run and tested with Python 3.6 on a Linux (Ubuntu 18.04) machine with no issues. There is a config.py file in this directory that contains all the configuration parameters, such as the mode, image target size, number of epochs, batch size, train/validation ratio, etc. All dependency files are available in the root directory of this repository.
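
As a rough illustration only, the configuration parameters mentioned above might look like the sketch below; the exact variable names and default values are defined in the repository's config.py, and the numbers here are placeholders, not the values used in the paper.

# Illustrative sketch of the kind of parameters exposed in config.py.
# Variable names and values are placeholders; check the actual file.
Mode = 'Training'          # one of 'Training', 'Classification', 'Segmentation'
target_size = (254, 254)   # image target size (placeholder value)
epochs = 40                # number of training epochs (placeholder value)
batch_size = 32            # mini-batch size (placeholder value)
validation_split = 0.2     # train/validation ratio (placeholder value)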

  • To run the training phase for the "Fire_vs_NoFire" image classification, change the mode value to 'Training' in the config.py file, like this:
Mode = 'Training'

Make sure that you have copied and unzipped the data into the correct directory.

  • To run the test phase for the "Fire_vs_NoFire" image classification, change the mode value to 'Classification' in the config.py file, like this:
Mode = 'Classification'

Make sure that you have copied and unzipped the data into the correct directory.

  • To run the test phase for the fire segmentation, change the mode value to 'Segmentation' in the config.py file, like this:
Mode = 'Segmentation'

Make sure that you have copied and unzipped the data into the correct directory.

Then, after setting your parameters, just run the main.py file:

python main.py
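
Conceptually, main.py selects which phase to run based on the Mode value from config.py. The sketch below only illustrates that flow; the function names (train_classifier, test_classifier, run_segmentation) are hypothetical placeholders, not the repository's actual API.

# Conceptual sketch of dispatching on the Mode parameter from config.py.
# The three functions below are hypothetical placeholders standing in for
# the repository's actual training/testing/segmentation routines.
from config import Mode

def train_classifier():
    print("Training the Fire_vs_NoFire classifier ...")   # placeholder

def test_classifier():
    print("Evaluating the trained classifier ...")        # placeholder

def run_segmentation():
    print("Running the U-Net fire segmentation ...")      # placeholder

if __name__ == "__main__":
    if Mode == 'Training':
        train_classifier()
    elif Mode == 'Classification':
        test_classifier()
    elif Mode == 'Segmentation':
        run_segmentation()
    else:
        raise ValueError(f"Unknown Mode: {Mode}")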

Results

  • Fire classification accuracy:

[Figure: Fire classification accuracy]

  • Fire classification confusion matrix:

  • Fire segmentation metrics and evaluation:

[Figure: Fire segmentation metrics and evaluation]

  • Comparison between generated masks and ground truth masks:

[Figure: Generated masks vs. ground truth masks]

  • Federated Learning sample
    To consider future challenges, we defined a new sample of federated learning on a local node (NVIDIA Jetson Nano, 4 GB RAM). The Jetson Nano is available in two versions: 1) a 4 GB RAM developer kit, and 2) a 2 GB RAM developer kit. In this implementation, the 4 GB version is used, with a 128-core Maxwell GPU, a quad-core ARM A57 CPU @ 1.43 GHz, 4 GB of LPDDR4 RAM, and 32 GB of microSD storage. To test the Jetson Nano for federated learning, items 9 and 10 from the dataset are used for the fire segmentation. Since the Jetson Nano has limited RAM, we assumed that each drone has access to only a portion of the FLAME dataset; only 500 fire images and masks are used for the training and validation phase on the drone. Because we aimed at learning a model on a smaller subset of the FLAME dataset and running inference with that model, the default TensorFlow version is used here. Also, the image and mask dimensions for each input are reduced from 512 x 512 x 3 to 128 x 128 x 3. To save more RAM, all peripherals were turned off and only WiFi was active for the Secure Shell (SSH) connection. The setup of this node is shown below.
    [Figure: Jetson Nano node setup]
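
As a rough, hypothetical sketch of the data reduction described above (not code from the repository), the snippet below loads the first 500 image/mask pairs from the segmentation directories and resizes them to 128 x 128. The paths, the use of scikit-image, and loading the masks as a single channel are assumptions for illustration.

# Sketch of preparing the reduced on-device subset: 500 image/mask pairs
# resized to 128 x 128 for training on the Jetson Nano. Paths and details
# are illustrative assumptions.
import glob
import numpy as np
from skimage.io import imread
from skimage.transform import resize

N_SAMPLES = 500
TARGET = (128, 128)

image_paths = sorted(glob.glob("frames/Segmentation/Data/Images/*.jpg"))[:N_SAMPLES]
mask_paths = sorted(glob.glob("frames/Segmentation/Data/Masks/*.png"))[:N_SAMPLES]

images = np.array([resize(imread(p), TARGET) for p in image_paths],
                  dtype=np.float32)
masks = np.array([resize(imread(p, as_gray=True), TARGET) for p in mask_paths],
                 dtype=np.float32)[..., np.newaxis]

# Expected shapes: (500, 128, 128, 3) for images and (500, 128, 128, 1) for masks.
print(images.shape, masks.shape)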

Citation

If you find it useful, please cite our paper as follows:

@article{shamsoshoara2021aerial,
  title={Aerial Imagery Pile burn detection using Deep Learning: the FLAME dataset},
  author={Shamsoshoara, Alireza and Afghah, Fatemeh and Razi, Abolfazl and Zheng, Liming and Ful{\'e}, Peter Z and Blasch, Erik},
  journal={Computer Networks},
  pages={108001},
  year={2021},
  publisher={Elsevier}
}

Other related repositories and articles

License

For academic and non-commercial usage.

Owner
Ph.D. in Informatics and Computing from Northern Arizona University, M.Sc. in Informatics, M.Sc. in Electrical Engineering, B.Sc. in Electrical Engineering