YOLOX-Paddle - A reproduction of YOLOX by PaddlePaddle

Overview

A reproduction of YOLOX implemented with PaddlePaddle.

Dataset Preparation

Download the COCO dataset and arrange it in the following layout:

/home/aistudio
|-- COCO
|   |-- annotations
|   |-- train2017
|   |-- val2017

In addition to the usual image-processing libraries, a few extra packages need to be installed:

pip install gputil==1.4.0 loguru pycocotools

Go to the repository root and build/install the package (AIStudio is recommended):

cd YOLOX-Paddle
pip install -v -e .

If the build fails on a local machine, edit YOLOX-Paddle/yolox/layers/csrc/cocoeval/cocoeval.h and change the pybind11 include paths to your machine's own directory. The pybind11 include directory can be obtained with:

>>> import pybind11
>>> pybind11.get_include()
'/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/pybind11/include'

Paths on AIStudio:

#include </opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/pybind11/include/pybind11/numpy.h>
#include </opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/pybind11/include/pybind11/pybind11.h>
#include </opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/pybind11/include/pybind11/stl.h>
#include </opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/pybind11/include/pybind11/stl_bind.h>

After a successful install, the module shows up in pip list:

yolox    0.1.0    /home/aistudio/YOLOX-Paddle

Specify the COCO dataset location either by setting the YOLOX_DATADIR environment variable or by running `ln -s /path/to/your/COCO ./datasets/COCO`:

export YOLOX_DATADIR=/home/aistudio/
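For reference, upstream YOLOX resolves this setting roughly as in the minimal sketch below, and this reproduction presumably keeps the same convention; the helper name get_yolox_datadir and its location are assumptions here:

import os

def get_yolox_datadir():
    # Minimal sketch: prefer the YOLOX_DATADIR environment variable; otherwise
    # fall back to the "datasets" directory next to the installed yolox package,
    # which is where the ./datasets/COCO symlink above would live.
    datadir = os.getenv("YOLOX_DATADIR", None)
    if datadir is None:
        import yolox
        repo_root = os.path.dirname(os.path.dirname(yolox.__file__))
        datadir = os.path.join(repo_root, "datasets")
    return datadir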

Training

python tools/train.py -n yolox-nano -d 1 -b 64

The resulting weights are saved to ./YOLOX_outputs/nano/yolox_nano.pdparams
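If you want to inspect the checkpoint yourself, it can be loaded with Paddle's standard API. A minimal sketch; whether the file stores a plain state dict or a dict holding a "model" entry (as in upstream YOLOX) is an assumption, so print the keys first:

import paddle

# Minimal sketch for inspecting the saved checkpoint; the internal layout
# (plain state dict vs. a dict with a "model" key) is an assumption.
ckpt = paddle.load("./YOLOX_outputs/nano/yolox_nano.pdparams")
if isinstance(ckpt, dict):
    print(list(ckpt.keys())[:10])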

Evaluation

python tools/eval.py -n yolox-nano -c ./YOLOX_outputs/nano/yolox_nano.pdparams -b 64 -d 1 --conf 0.001
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.259
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.416
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.269
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.083
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.274
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.413
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.242
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.384
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.419
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.154
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.470
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.632

Official pretrained weights are also provided (extraction code: ybxc):

Model           | Size | mAP_val (0.5:0.95) | mAP_test (0.5:0.95) | Speed V100 (ms) | Params (M) | FLOPs (G)
YOLOX-s         | 640  | 40.5               | 40.5                | 9.8             | 9.0        | 26.8
YOLOX-m         | 640  | 46.9               | 47.2                | 12.3            | 25.3       | 73.8
YOLOX-l         | 640  | 49.7               | 50.1                | 14.5            | 54.2       | 155.6
YOLOX-x         | 640  | 51.1               | 51.5                | 17.3            | 99.1       | 281.9
YOLOX-Darknet53 | 640  | 47.7               | 48.0                | 11.1            | 63.7       | 185.3

Inference

python tools/demo.py image -n yolox-nano -c ./YOLOX_outputs/nano/yolox_nano.pdparams --path assets/dog.jpg --conf 0.25 --nms 0.45 --tsize 640 --save_result
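Here --conf is the score threshold and --nms the IoU threshold for non-maximum suppression. Conceptually the post-processing works as in the following NumPy sketch, which illustrates the general technique rather than this repository's exact implementation:

import numpy as np

def postprocess(boxes, scores, conf_thr=0.25, nms_thr=0.45):
    """Score filtering followed by IoU-based NMS.

    boxes: (N, 4) array of x1, y1, x2, y2; scores: (N,).
    Returns indices into the confidence-filtered arrays.
    """
    keep = scores >= conf_thr                    # --conf: drop low-confidence boxes
    boxes, scores = boxes[keep], scores[keep]
    order = scores.argsort()[::-1]               # process highest scores first
    kept = []
    while order.size > 0:
        i = order[0]
        kept.append(i)
        # Overlap of the current best box with all remaining candidates.
        x1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        y1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        x2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        y2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[order[1:], 2] - boxes[order[1:], 0]) * (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_r - inter + 1e-9)
        order = order[1:][iou <= nms_thr]        # --nms: suppress heavily overlapping boxes
    return kept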

The inference result is shown below.

Train Custom Data

This is probably what most developers care about the most. This section draws on the repository below, which has already been integrated into this repo:

  • Converting darknet or yolov5 datasets to COCO format for YOLOX: YOLO2COCO from Daniel

Data Preparation

As before, we use a YOLOv5-format dataset as the example, here the guangshan (grating) dataset, which can be downloaded here. Go to the repository root, then download and extract it; the dataset should have the following layout:

YOLOX-Paddle
|-- guangshan
|   |-- images
|      |-- train
|      |-- val
|   |-- labels
|      |-- train
|      |-- val

Now run the following command:

bash prepare.sh

Then add a classes.txt. You should end up with the layout below, and the generated YOLOV5_COCO_format directory will contain the dataset in COCO format:

YOLOX-Paddle/YOLO2COCO/dataset
|-- YOLOV5
|   |-- guangshan
|   |   |-- images
|   |   |-- labels
|   |-- train.txt
|   |-- val.txt
|   |-- classes.txt
|-- YOLOV5_COCO_format
|   |-- train2017
|   |-- val2017
|   |-- annotations

See the README.md under YOLOV5_COCO_format for details.
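For intuition about what the conversion does, here is a minimal, self-contained sketch of the core bounding-box bookkeeping, turning one YOLO-format label line (class cx cy w h, normalized) into a COCO-style annotation entry. This only illustrates the technique; the real conversion is done by the integrated YOLO2COCO scripts via prepare.sh, and details such as category-id offsets may differ:

from PIL import Image

def yolo_line_to_coco_ann(line, img_path, image_id, ann_id):
    """Convert one 'cls cx cy w h' (normalized) YOLO label line to a COCO annotation dict."""
    cls, cx, cy, w, h = map(float, line.split())
    img_w, img_h = Image.open(img_path).size
    # YOLO stores the normalized box center; COCO stores the absolute top-left corner.
    box_w, box_h = w * img_w, h * img_h
    x_min, y_min = cx * img_w - box_w / 2, cy * img_h - box_h / 2
    return {
        "id": ann_id,
        "image_id": image_id,
        "category_id": int(cls),  # YOLO2COCO may apply an offset; check its README
        "bbox": [x_min, y_min, box_w, box_h],
        "area": box_w * box_h,
        "iscrowd": 0,
    }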

Training, Evaluation, and Inference

Configure the custom experiment file YOLOX-Paddle/exps/example/custom/nano.py: set self.num_classes to your number of classes; the remaining settings can be tuned as you like (see the sketch below for orientation).
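A minimal sketch of what such an experiment file typically looks like, following the upstream YOLOX convention of subclassing a base Exp class; the exact import path and the set of available attributes in this Paddle port are assumptions, so treat the file shipped in the repository as the source of truth:

import os
from yolox.exp import Exp as BaseExp  # assumed import path

class Exp(BaseExp):
    def __init__(self):
        super().__init__()
        self.num_classes = 1      # <- set this to your own number of classes
        self.depth = 0.33         # nano-scale depth/width multipliers (assumed defaults)
        self.width = 0.25
        self.input_size = (416, 416)
        self.max_epoch = 300
        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]

With the experiment file configured, start training with: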

python tools/train.py -f ./exps/example/custom/nano.py -n yolox-nano -d 1 -b 8

Start evaluation with:

python tools/eval.py -f ./exps/example/custom/nano.py -n yolox-nano -c ./YOLOX_outputs_custom/nano/best_ckpt.pdparams -b 64 -d 1 --conf 0.001

Start inference with:

python tools/demo.py image -f ./exps/example/custom/nano.py -n yolox-nano -c ./YOLOX_outputs_custom/nano/best_ckpt.pdparams --path test.jpg --conf 0.25 --nms 0.45 --tsize 640 --save_result

Everything else follows the COCO-dataset workflow; all training artifacts are saved in the YOLOX_outputs_custom folder.

About the Author

Name: QuanHao Guo (郭权浩)
School: University of Electronic Science and Technology of China (UESTC), master's student, class of 2020
Research interest: Computer Vision
CSDN homepage: Deep Hao's CSDN homepage
GitHub homepage: Deep Hao's GitHub homepage
If you spot any mistakes, please leave a comment so they can be corrected. Many thanks!
More paper-reproduction projects will follow; questions and discussion are very welcome, so we can all learn and improve together!