RetinaNet-PyTorch - A PyTorch implementation of RetinaNet for remote sensing images, with mAP comparable to the RetinaNet in MMDetection

Overview

🚀 RetinaNet Horizontal Detector Based PyTorch

This is a horizontal-box RetinaNet detector implemented on the remote sensing ship dataset SSDD.
This re-implemented RetinaNet reaches almost the same mAP (iou=0.25, score_iou=0.15) as the MMDetection implementation.
The original RetinaNet paper is linked here.

🌟 Performance of the implemented RetinaNet Detector

Detection Performance on Inshore image.

Detection Performance on Offshore image.

🎯 Experiment

The SSDD dataset, the trained RetinaNet detector weights, the ResNet-50 model pre-trained on ImageNet, the loss curves, and the evaluation results are listed below, so you can reproduce my experiment.

  • SSDD dataset BaiduYun extraction code=pa8j
  • gt labels for eval data set BaiduYun extraction code=vqaw (ground-truth)
  • gt labels for train data set BaiduYun extraction code=datk (train-ground-truth)
  • well-trained retinanet detector weight file BaiduYun extraction code=b0e1
  • pre-trained ImageNet resnet-50 weight file BaiduYun extraction code=mmql
  • evaluation metrics (iou=0.25, score_iou=0.15)

    Batch Size | Input Size | mAP (Mine) | mAP (MMdet) | Model Parameters
    32         | 416 x 416  | 0.8828     | 0.8891      | 32.2 M

  • Other metrics (Precision / Recall / F1 score), see the metrics sketch after this list

    Precision (Mine) | Precision (MMdet) | Recall (Mine) | Recall (MMdet) | F1 score (Mine) | F1 score (MMdet)
    0.8077           | 0.8502            | 0.9062        | 0.91558        | 0.8541          | 0.8817
  • loss curve

  • mAP metrics on training set and val set

  • learning rate curve (using warmup lr); a warmup schedule sketch follows this list
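
The exact warmup settings are not spelled out in this README, so below is a minimal sketch of a linear warmup schedule in PyTorch. The base learning rate, warmup length, and the stand-in model are illustrative assumptions, not values from this repository's config.py.

# warmup_lr_sketch.py -- hypothetical example, not part of this repository
import torch

model = torch.nn.Linear(10, 2)  # stand-in for the detector
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed base lr

warmup_iters = 500  # assumed warmup length

def lr_lambda(it):
    # linearly ramp the lr from ~0 up to the base lr, then hold it
    if it < warmup_iters:
        return float(it + 1) / warmup_iters
    return 1.0

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

for it in range(1000):  # placeholder training loop (loss/backward omitted)
    optimizer.step()
    scheduler.step()

The precision/recall/F1 numbers above follow the standard detection definitions with the thresholds given in this README (iou=0.25, score_iou=0.15). The sketch below illustrates those definitions with a greedy one-to-one matching; it is an illustration only, not the repository's eval.py code.

# metrics_sketch.py -- hypothetical example, not part of this repository
import numpy as np

def iou_xyxy(a, b):
    # IoU of two horizontal boxes in [x1, y1, x2, y2] format
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / (union + 1e-9)

def precision_recall_f1(pred_boxes, pred_scores, gt_boxes, iou_thr=0.25, score_thr=0.15):
    # keep predictions above the score threshold, highest score first
    order = np.argsort(pred_scores)[::-1]
    kept = [pred_boxes[i] for i in order if pred_scores[i] >= score_thr]
    matched, tp = set(), 0
    for p in kept:
        ious = [iou_xyxy(p, g) for g in gt_boxes]
        best = int(np.argmax(ious)) if ious else -1
        if best >= 0 and ious[best] >= iou_thr and best not in matched:
            matched.add(best)  # each gt box can be matched at most once
            tp += 1
    fp, fn = len(kept) - tp, len(gt_boxes) - tp
    precision = tp / (tp + fp + 1e-9)
    recall = tp / (tp + fn + 1e-9)
    f1 = 2 * precision * recall / (precision + recall + 1e-9)
    return precision, recall, f1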

💥 Get Started

Installation

A. Install requirements:

conda create -n retinanet python=3.7
conda activate retinanet
conda install pytorch==1.7.0 torchvision==0.8.0 torchaudio==0.7.0 cudatoolkit=11.0 -c pytorch
pip install -r requirements.txt  

Note: If you run into trouble setting up the environment, see check.txt for more details.

B. Install the NMS module:

cd utils/HBB_NMS_GPU
make
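
The compiled module provides GPU NMS for horizontal bounding boxes (HBB). As a rough reference for what it computes (this is not the repository's CUDA code), here is a plain-NumPy sketch of horizontal-box NMS:

# hbb_nms_sketch.py -- hypothetical CPU reference, not the compiled GPU module
import numpy as np

def hbb_nms(boxes, scores, iou_thr=0.5):
    # boxes: (N, 4) array of [x1, y1, x2, y2]; returns indices of kept boxes
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]  # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # overlap of the current best box with all remaining boxes
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter + 1e-9)
        order = order[1:][iou <= iou_thr]  # drop boxes that overlap too much
    return keep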

Demo

A. Set project's data path

You should set the project's data paths in config.py first.

# config.py
# Note: all the path should be absolute path.  
data_path = r'/$ROOT_PATH/SSDD_data/'  # absolute data root path  
output_path = r'/$ROOT_PATH/Output/'  # absolute model output path  
  
inshore_data_path = r'/$ROOT_PATH/SSDD_data_InShore/'  # absolute Inshore data path  
offshore_data_path = r'/$ROOT_PATH/SSDD_data_OffShore/'  # absolute Offshore data path  

# An example
$ROOT_PATH
    -SSDD_data/
        -train/  # train set
            -*.jpg
        -val/  # val set
            -*.jpg
        -annotations/  # gt labels in json format (for the coco evaluation method)
            -instances_train.json
            -instances_val.json
        -ground-truth/
            -*.txt  # gt labels in txt format (for the voc evaluation method and for evaluating the inshore and offshore scenes)
        -train-ground-truth/
            -*.txt  # gt labels in txt format (for the voc evaluation method)
    -SSDD_data_InShore/
        -images/
            -*.jpg  # inshore scene images
        -ground-truth/
            -*.txt  # inshore scene gt labels
    -SSDD_data_OffShore/
        -images/
            -*.jpg  # offshore scene images
        -ground-truth/
            -*.txt  # offshore scene gt labels

    -Output/
        -checkpoints/
            - the path where checkpoints and tensorboard log events are saved
        -evaluate/
            - the path where model detection results are saved for evaluation (coco/voc/inshore/offshore)
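
Before training, it can help to confirm that the folders above actually exist under the configured paths. The following is a small, hypothetical sanity check (not part of this repository); it only assumes the data_path variable defined in config.py:

# check_layout.py -- hypothetical helper, not part of this repository
from pathlib import Path

from config import data_path  # absolute data root set in config.py

expected = ["train", "val", "annotations", "ground-truth", "train-ground-truth"]
root = Path(data_path)
missing = [d for d in expected if not (root / d).is_dir()]
if missing:
    raise FileNotFoundError(f"missing folders under {root}: {missing}")
print("dataset layout looks OK")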

B. Download the well-trained SSDD weight file.

# download the well-trained pth file and put it in the checkpoints/ folder,
# then run the simple inference script to get a detection result.
# You can find the model output predict.jpg in the show_result/ folder.

python show.py --chkpt 54_1595.pth --result_path show_result --pic_name demo1.jpg  

Train

A. Prepare dataset

Structure your dataset files as shown above.

B. Manually set the project's hyperparameters

You should manually set the project's hyperparameters in config.py.

1. Data file structure (Must Be Set!)
   As shown above.

2. Other settings (Optional)
   If you want to follow my experiment, don't change anything.

C. Train the RetinaNet detector from scratch on the SSDD dataset with a pre-trained ResNet-50 backbone

C.1 Download the pre-trained resnet-50 pth file

You should first download the ImageNet pre-trained ResNet-50 pth file and put it in the resnet_pretrained_pth/ folder.

C.2 Train RetinaNet Detector on SSDD Dataset with pre-trained pth file

# with batch size 32, using the voc evaluation method during training, for 50 epochs
python train.py --batch_size 32 --epoch 50 --eval_method voc

# with batch size 32, using the coco evaluation method during training, for 50 epochs
python train.py --batch_size 32 --epoch 50 --eval_method coco

Note: If you find that the classification loss changes slowly, please be patient; this is not a mistake.
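
RetinaNet's classification head is trained with focal loss, which strongly down-weights the many easy background anchors, which may be why the classification loss appears to move slowly early in training. A minimal sketch of the standard focal loss formulation (not necessarily the exact code used in this repository):

# focal_loss_sketch.py -- standard formulation, shown for reference only
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    # binary focal loss; logits and targets share the same shape, targets in {0, 1}
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)              # prob. of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)  # class balancing term
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()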

Evaluation

A. Evaluate model performance on the val set.

python eval.py --device 0 --evaluate True --FPS False --Offshore False --Inshore False --chkpt 54_1595.pth

B. Evaluate model performance on the inshore and offshore scenes.

python eval.py --device 0 --evaluate False --FPS False --Offshore True --Inshore True --chkpt 54_1595.pth

C. Evaluate model FPS

python eval.py --device 0 --evaluate False --FPS True --Offshore False --Inshore False --chkpt 54_1595.pth
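
FPS is typically measured by timing repeated forward passes after a short warmup, with CUDA synchronization before reading the clock. A hedged sketch of that procedure (the model and input below are placeholders, not this repository's detector):

# fps_sketch.py -- hypothetical timing loop; substitute the trained detector
import time
import torch

model = torch.nn.Conv2d(3, 8, 3).cuda().eval()  # placeholder for the detector
dummy = torch.randn(1, 3, 416, 416).cuda()      # 416 x 416 input, as in the table above

with torch.no_grad():
    for _ in range(10):          # warmup iterations
        model(dummy)
    torch.cuda.synchronize()
    start = time.time()
    n_runs = 100
    for _ in range(n_runs):
        model(dummy)
    torch.cuda.synchronize()

print(f"FPS: {n_runs / (time.time() - start):.1f}")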

💡 References

Thanks to these great works:
https://github.com/ming71/DAL
https://github.com/zylo117/Yet-Another-EfficientDet-Pytorch

Owner
Fang Zhonghao