CrossTeaching-SSOD

0. Introduction

Official code of "Mitigating the Mutual Error Amplification for Semi-Supervised Object Detection"

This repo covers training SSD300 and Faster-RCNN-FPN on the Pascal VOC benchmark. The SSD300 training scripts are based on ssd.pytorch (https://github.com/amdegroot/ssd.pytorch/), and the Faster-RCNN-FPN training scripts are based on the official Detectron2 repo (https://github.com/facebookresearch/detectron2/).

1. Environment

Python 3.6.8

CUDA 10.1

PyTorch 1.6.0

detectron2 (for Faster-RCNN-FPN)
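
A minimal environment setup sketch, assuming pip on a CUDA 10.1 machine; the wheel index URLs below follow the official PyTorch and detectron2 install instructions for these versions, so check the respective install guides if your setup differs:

# PyTorch 1.6.0 / torchvision 0.7.0 built against CUDA 10.1
pip install torch==1.6.0+cu101 torchvision==0.7.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html

# detectron2 wheel matching torch 1.6 + CUDA 10.1 (only needed for the Faster-RCNN-FPN experiments)
python3 -m pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/torch1.6/index.html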

2. Prepare Dataset

Download and extract the Pascal VOC 2007 and 2012 datasets (VOCdevkit).

For SSD300, set the VOC_ROOT variable in data/voc0712.py and data/voc07_consistency.py to /home/username/dataset/VOCdevkit/.
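
For example, the edit in both files amounts to a single assignment (the path is illustrative; use your own dataset location):

VOC_ROOT = "/home/username/dataset/VOCdevkit/"  # in data/voc0712.py and data/voc07_consistency.py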

For Faster-RCNN-FPN, set the environment variable: export DETECTRON2_DATASETS=/home/username/dataset/VOCdevkit/

3. Instructions

3.1 Reproduce Table 1

Go into the SSD300 directory, then run the following scripts.

supervised training (VOC 07 labeled, without extra augmentation):

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 train_ssd.py --save_interval 12000

self-labeling (VOC 07 labeled + VOC 12 unlabeled, without extra augmentation; --resume uses the checkpoint produced by the supervised run above):

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 train_pseudo39.py --resume weights/ssd300_12000.pth --ramp --save_interval 12000

supervised training (VOC 0712 labeled, without extra augmentation):

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 train_ssd0712.py --save_interval 12000

supervised training (VOC 07 labeled, with horizontal flip):

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 train_csd_sup2.py --save_interval 12000

self-labeling (VOC 07 labeled + VOC 12 unlabeled, with horizontal flip):

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 train_csd.py --save_interval 12000

supervised training (VOC 0712 labeled, with horizontal flip):

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 train_csd_sup_0712.py --save_interval 12000

supervised training (VOC 07 labeled, with mix-up augmentation):

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 train_isd_sup2.py --save_interval 12000

self-labeling (VOC 07 labeled + VOC 12 unlabeled, with mix-up augmentation):

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 train_only_isd.py --save_interval 12000

supervised training (VOC 0712 labeled, with mix-up augmentation):

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 train_isd_sup_0712.py --save_interval 12000

3.2 Reproduce Table 2

Go into the SSD300 directory, then run the following scripts.

supervised training (VOC 07 labeled, without augmentation):

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 train_ssd.py --save_interval 12000

self-labeling (VOC 07 labeled + VOC 12 unlabeled, confidence threshold=0.5):

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 train_pseudo39.py --resume weights/ssd300_12000.pth --ramp --save_interval 12000

self-labeling (VOC 07 labeled + VOC 12 unlabeled, confidence threshold=0.8):

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 train_pseudo39-0.8.py --resume weights/ssd300_12000.pth --ramp --save_interval 12000

self-labeling (random FP label, confidence threshold=0.5):

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 train_pseudo102.py --resume weights/ssd300_12000.pth --ramp --save_interval 12000

self-labeling (use only TP, confidence threshold=0.5):

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 train_pseudo36.py --resume weights/ssd300_12000.pth --ramp --save_interval 12000

self-labeling (use only TP, confidence threshold=0.8):

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 train_pseudo36-0.8.py --resume weights/ssd300_12000.pth --ramp --save_interval 12000

self-labeling (use true label, confidence threshold=0.5):

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 train_pseudo32.py --resume weights/ssd300_12000.pth --ramp --save_interval 12000

Go into the detectron2 directory.

supervised training (VOC 07 labeled; run inside the VOC07-sup-bs16 directory):

python3 train_net.py --num-gpus 8 --config configs/voc/voc07_voc12.yaml

self-labeling (VOC 07 labeled + VOC 12 unlabeled; run inside the VOC07-sup-VOC12-unsup-self-teaching-0.7 directory):

python3 train_net.py --resume --num-gpus 8 --config configs/voc/voc07_voc12.yaml MODEL.WEIGHTS output/model_0005999.pth SOLVER.CHECKPOINT_PERIOD 18000

self-labeling (random FP label; run inside the VOC07-sup-VOC12-unsup-self-teaching-0.7-random-wrong directory):

python3 train_net.py --resume --num-gpus 8 --config configs/voc/voc07_voc12.yaml MODEL.WEIGHTS output/model_0005999.pth SOLVER.CHECKPOINT_PERIOD 18000

self-labeling (use true label; run inside the VOC07-sup-VOC12-unsup-self-teaching-0.7-only-correct directory):

python3 train_net.py --resume --num-gpus 8 --config configs/voc/voc07_voc12.yaml MODEL.WEIGHTS output/model_0005999.pth SOLVER.CHECKPOINT_PERIOD 18000
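
The three self-labeling runs above are initialized from the checkpoint written by the supervised run (model_0005999.pth). A possible end-to-end sketch for the first variant; copying the supervised checkpoint into the self-teaching directory's output folder is an assumption about how the checkpoint is shared, so adapt it to your layout:

cd detectron2/VOC07-sup-bs16
python3 train_net.py --num-gpus 8 --config configs/voc/voc07_voc12.yaml

cd ../VOC07-sup-VOC12-unsup-self-teaching-0.7
mkdir -p output
# assumption: reuse the supervised checkpoint as the initial weights for self-labeling
cp ../VOC07-sup-bs16/output/model_0005999.pth output/
python3 train_net.py --resume --num-gpus 8 --config configs/voc/voc07_voc12.yaml MODEL.WEIGHTS output/model_0005999.pth SOLVER.CHECKPOINT_PERIOD 18000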

3.3 Reproduce Table 3

Go into the SSD300 directory, then run the following scripts.

cross teaching

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 train_pseudo137.py --resume weights/ssd300_12000.pth --resume2 weights/default/ssd300_12000.2.pth --save_interval 12000 --ramp --ema_rate 0.99 --ema_step 10

cross teaching + mix-up augmentation

CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python3 train_pseudo151.py --resume weights/ssd300_12000.pth --resume2 weights/default/ssd300_12000.2.pth --save_interval 12000 --ramp --ema_rate 0.99 --ema_step 10
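
Both cross-teaching scripts load two independently pre-trained supervised checkpoints via --resume and --resume2. How the second checkpoint ends up at weights/default/ssd300_12000.2.pth depends on how you organize the second supervised run; one simple possibility, where the source path and the copy step are purely illustrative:

mkdir -p weights/default
# assumption: a second supervised run (e.g. with a different random seed) has saved its own ssd300_12000.pth elsewhere
cp /path/to/second_supervised_run/ssd300_12000.pth weights/default/ssd300_12000.2.pth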

Go into the detectron2/VOC07-sup-VOC12-unsup-cross-teaching directory.

cross teaching

python3 train_net.py --resume --num-gpus 8 --config configs/voc/voc07_voc12.yaml MODEL.WEIGHTS output/model_0005999.pth SOLVER.CHECKPOINT_PERIOD 18000

Owner
Bruno Ma
PhD candidate at NLPR, CASIA