PyTorch implementation of the YOLO (You Only Look Once) v2

Overview

YOLOv2 is one of the most popular one-stage object detectors. This project adopts PyTorch as its development framework to increase productivity, and uses ONNX to convert models into Caffe2 to ease engineering deployment. If you benefit from this project, a donation would be appreciated (via PayPal, WeChat Pay or Alipay).
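
For reference, the PyTorch-to-ONNX step mentioned above generally looks like the sketch below. The tiny stand-in network, input size and file name are placeholders for illustration, not this project's actual export code.

# Minimal sketch of a PyTorch-to-ONNX export, on the way to Caffe2 deployment.
# The network below is a placeholder, not this project's detector.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
model.eval()

dummy_input = torch.randn(1, 3, 416, 416)  # a YOLOv2-style 416x416 input
torch.onnx.export(model, dummy_input, "model.onnx",
                  input_names=["image"], output_names=["features"])
# model.onnx can then be loaded by a Caffe2/ONNX backend for inference.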

Designs

  • Flexible configuration design. Program settings are configurable and can be modified from the command line, either by overlaying configuration files (the -c/--config option) or by editing individual options (the -m/--modify option).

  • Monitoring via TensorBoard. Loss values and debugging images (such as IoU heatmaps, and ground-truth and predicted bounding boxes) are logged for inspection.

  • Parallel model training design. Different models are saved into different directories so that they can be trained simultaneously.

  • A NoSQL database is used to store evaluation results with multiple dimensions of information. This design is useful when analyzing a large number of experiment results.

  • Time-based output design. Running information (such as the model, the summaries produced by TensorBoard, and the evaluation results) is saved periodically at a predefined time interval.

  • Checkpoint management. The latest few checkpoint files (.pth) are preserved in the model directory, and older ones are deleted.

  • NaN debugging. When a NaN loss is detected, the running environment (the data batch) and the model are exported for analysis.

  • Unified data cache design. Various datasets are converted into a unified data cache via corresponding cache plugins. Several plugins are already implemented, such as PASCAL VOC and MS COCO.

  • Arbitrarily replaceable model plugin design. The main deep neural network (DNN) can be easily replaced via configuration settings. Multiple models are already provided, such as Darknet, ResNet, Inception v3 and v4, MobileNet and DenseNet.

  • Extendable data preprocessing plugin design. The original images (of different sizes) and their labels are processed by a sequence of operators to form a training batch (images of the same size, with padded bounding-box lists). Multiple preprocessing plugins are already implemented: augmentation operators that transform images and labels simultaneously (such as random rotation and random flip), operators that resize both images and labels to a fixed size within a batch (such as random crop), and operators that augment images only (such as random blur, random saturation and random brightness). A minimal sketch of a joint image-and-label operator follows this list.
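
The following is a minimal, self-contained sketch of the joint image/label augmentation idea described above. It only illustrates the concept; the function name and arguments are invented for this example and are not this project's plugin interface.

# Illustrative joint augmentation operator: flips an image and its bounding
# boxes together, as a detection preprocessing plugin might do.
# This is a conceptual sketch, not the plugin API used by this repository.
import numpy as np

def random_horizontal_flip(image, boxes, prob=0.5):
    """image: HxWxC array; boxes: Nx4 array of (xmin, ymin, xmax, ymax)."""
    if np.random.rand() < prob:
        width = image.shape[1]
        image = image[:, ::-1, :]   # mirror the pixels
        xmin = width - boxes[:, 2]  # the old xmax becomes the new xmin
        xmax = width - boxes[:, 0]
        boxes = np.stack([xmin, boxes[:, 1], xmax, boxes[:, 3]], axis=1)
    return image, boxes

# Example usage with dummy data
image = np.zeros((416, 416, 3), dtype=np.uint8)
boxes = np.array([[10, 20, 100, 200]], dtype=np.float32)
image, boxes = random_horizontal_flip(image, boxes)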

Features

  • Reproduce the original paper's training results.
  • Multi-scale training.
  • Dimension clusters (a rough sketch follows this list).
  • Darknet model file (.weights) parser.
  • Detection from images and cameras.
  • Video file processing.
  • Multi-GPU support.
  • Distributed training.
  • Focal loss.
  • Channel-wise model parameter analyzer.
  • Automatic adjustment of the number of channels.
  • Receptive field analyzer.
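
The dimension-cluster feature refers to the anchor-prior selection described in the YOLOv2 paper: k-means over ground-truth box sizes with an IoU-based distance. The following is a rough, generic sketch of that idea, not this repository's implementation; all names are illustrative.

# Rough sketch of dimension clustering: k-means over (width, height) pairs
# with a 1 - IoU distance, in the spirit of the YOLOv2 paper.
# This is not this repository's code.
import numpy as np

def iou_wh(boxes, centroids):
    """IoU between boxes (Nx2 of w, h) and centroids (Kx2), anchored at the origin."""
    w = np.minimum(boxes[:, None, 0], centroids[None, :, 0])
    h = np.minimum(boxes[:, None, 1], centroids[None, :, 1])
    inter = w * h
    union = (boxes[:, 0] * boxes[:, 1])[:, None] \
        + (centroids[:, 0] * centroids[:, 1])[None, :] - inter
    return inter / union

def dimension_clusters(boxes, k=5, iterations=100):
    """Return k anchor priors (w, h) for the given ground-truth box sizes."""
    centroids = boxes[np.random.choice(len(boxes), k, replace=False)]
    for _ in range(iterations):
        assign = np.argmax(iou_wh(boxes, centroids), axis=1)  # nearest = highest IoU
        for i in range(k):
            if np.any(assign == i):
                centroids[i] = boxes[assign == i].mean(axis=0)
    return centroids

# Example: cluster random (width, height) pairs into 5 anchor priors
sizes = np.random.rand(1000, 2) * 10 + 0.5
print(dimension_clusters(sizes, k=5))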

Quick Start

This project uses Python 3. To install the required dependencies, run the following command in a terminal.

sudo pip3 install -r requirements.txt

quick_start.sh contains examples of detection and evaluation. Run this script: multiple datasets and models (in the original Darknet format, which are converted into PyTorch format) will be downloaded (aria2 is required). The datasets are cached into different data profiles, and the models are evaluated over the cached data. The models are then used to detect objects in an example image, and the detection results are shown.

License

This project is released as open-source software under the GNU Lesser General Public License version 3 (LGPL v3).

Owner
申瑞珉 (Ruimin Shen)