
Learning to Bootstrap for Combating Label Noise

This repo is the official implementation of our paper "Learning to Bootstrap for Combating Label Noise".

Citation

If you use this code for your research, please cite our paper "Learning to Bootstrap for Combating Label Noise".

@misc{zhou2022learning,
      title={Learning to Bootstrap for Combating Label Noise}, 
      author={Yuyin Zhou and Xianhang Li and Fengze Liu and Xuxi Chen and Lequan Yu and Cihang Xie and Matthew P. Lungren and Lei Xing},
      year={2022},
      eprint={2202.04291},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Requirements

Python >= 3.6.4
PyTorch >= 1.6.0
higher == 0.2.1
tensorboardX == 2.4.1
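
One way to install these dependencies is sketched below; this is an assumption about your environment, so pick the torch build matching your CUDA version from pytorch.org rather than copying it blindly.

pip install "torch>=1.6.0" higher==0.2.1 tensorboardX==2.4.1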

Training

First, please create a folder to store checkpoints by using the following command.

mkdir checkpoint

CIFAR-10

To reproduce the results on the CIFAR datasets from our paper, please use the following commands and hyper-parameters.

First, you can adjust corruption_prob and corruption_type to obtain different noise rates and noise types.

Second, the reweight_label flag indicates that you are using our L2B method. You can change it to baseline or mixup (a hedged example of the mixup variant follows the command below).

python main.py --arch res18 --dataset cifar10 --num_classes 10 --exp L2B --train_batch_size 512 \
 --corruption_prob 0.2 --reweight_label --lr 0.15 --clipping_norm 0.25 --num_epochs 300 --scheduler cos \
 --corruption_type unif --warm_up 10 --seed 0
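
For example, a sketch of the mixup variant, assuming the method is switched by replacing --reweight_label with a --mixup flag (if the repository instead selects the method through another argument, adjust accordingly; check the argument parser in main.py for the exact names):

python main.py --arch res18 --dataset cifar10 --num_classes 10 --exp mixup --train_batch_size 512 \
 --corruption_prob 0.2 --mixup --lr 0.15 --clipping_norm 0.25 --num_epochs 300 --scheduler cos \
 --corruption_type unif --warm_up 10 --seed 0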

CIFAR-100

Most settings are the same as for CIFAR-10. To reproduce the results, use the following command.

python main.py --arch res18 --dataset cifar100 --num_classes 100 --exp L2B --train_batch_size 256 \
--corruption_prob 0.2 --reweight_label --lr 0.15 --clipping_norm 0.80 --num_epochs 300 --scheduler cos \
--corruption_type unif --warm_up 10 --seed 0

ISIC2019

For the ISIC 2019 dataset, first download the images and ground-truth labels:

wget https://isic-challenge-data.s3.amazonaws.com/2019/ISIC_2019_Training_Input.zip
wget https://isic-challenge-data.s3.amazonaws.com/2019/ISIC_2019_Training_GroundTruth.csv
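
The training command below points --data_path at isic_data/ISIC_2019_Training_Input, so one way to arrange the files is sketched here (where the ground-truth CSV should live is an assumption; check the ISIC data loader in this repo):

mkdir -p isic_data
unzip ISIC_2019_Training_Input.zip -d isic_data/
mv ISIC_2019_Training_GroundTruth.csv isic_data/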

Then you can reproduce the results with the following command.

python main.py --arch res50 --dataset ISIC --data_path isic_data/ISIC_2019_Training_Input --num_classes 8 \
--exp L2B --train_batch_size 64 --corruption_prob 0.2 --lr 0.01 --clipping_norm 0.80 --num_epochs 30 \
--temperature 10.0 --wd 5e-4 --scheduler cos --reweight_label --norm_type softmax --warm_up 1

Clothing-1M

First, num_batch and train_batch_size together determine how many training images are used (we sample a balanced training set for each epoch).

Second, you can adjust num_meta to sample a different number of validation images to form the meta set. By default, we use the whole validation set as the meta set.

The data_path argument is where you store the data and key-label lists; also update data_path on line 20 of main.py accordingly (see the sketch below). If you have issues downloading the dataset, please feel free to contact us.
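
A hypothetical illustration of that edit; the actual variable name and surrounding code on line 20 of main.py may differ, so match it to the real file:

# main.py, around line 20 (variable name is an assumption)
data_path = '/data1/data/clothing1m/clothing1M'  # same root as --data_path in the command below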

Then you can reproduce the results with the following command.

python main.py --arch res18_224 --num_batch 250 --dataset clothing1m \
--exp L2B_clothing1m_one_stage_multi_runs --train_batch_size 256 --lr 0.005 \
--num_epochs 300 --reweight_label --wd 5e-4 --scheduler cos --warm_up 0 \
--data_path /data1/data/clothing1m/clothing1M --norm_type org --num_classes 14 \
--multi_runs 3 --num_meta 14313

Contact

Yuyin Zhou

Xianhang Li

If you have any questions about the code or data, please contact us directly.
