[ICML 2021] “Self-Damaging Contrastive Learning”, Ziyu Jiang, Tianlong Chen, Bobak Mortazavi, Zhangyang Wang

Related tags: Deep Learning, SDCLR
Overview

Self-Damaging Contrastive Learning

Introduction

The recent breakthrough achieved by contrastive learning accelerates the pace of deploying unsupervised training on real-world data applications. However, unlabeled data in reality is commonly imbalanced and shows a long-tail distribution, and it is unclear how robustly the latest contrastive learning methods perform in such practical scenarios. This paper proposes to explicitly tackle this challenge via a principled framework called Self-Damaging Contrastive Learning (SDCLR), which automatically balances representation learning without knowing the classes. Our main inspiration is drawn from the recent finding that deep models have difficult-to-memorize samples, and that those samples can be exposed through network pruning [1]. It is natural to further hypothesize that long-tail samples are also tougher for the model to learn well due to insufficient examples. Hence, the key innovation in SDCLR is to create a dynamic self-competitor model to contrast with the target model, where the self-competitor is a pruned version of the target. During training, contrasting the two models leads to adaptive online mining of the samples most easily forgotten by the current target model, and implicitly emphasizes them more in the contrastive loss. Extensive experiments across multiple datasets and imbalance settings show that SDCLR significantly improves not only overall accuracy but also balancedness, in terms of linear evaluation in both the full-shot and few-shot settings.

[1] Hooker, Sara, et al. "What Do Compressed Deep Neural Networks Forget?." arXiv preprint arXiv:1911.05248 (2019).
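To make the pruning intuition from [1] concrete, the following is a minimal sketch (not the paper's protocol) of how easily-forgotten samples could be surfaced: prune a trained classifier and rank samples by how strongly the pruned copy disagrees with the full model. The classifier, the 90% sparsity level, and the KL-based score are illustrative assumptions.

# Hedged sketch of the observation in [1]: samples whose predictions change the
# most after magnitude pruning tend to be the hard / rare ones.
import copy
import torch
import torch.nn.functional as F
import torch.nn.utils.prune as prune

def forgotten_sample_scores(classifier, inputs, amount=0.9):
    """Score each sample by how much a pruned copy disagrees with the full model."""
    pruned = copy.deepcopy(classifier)
    for module in pruned.modules():
        if isinstance(module, (torch.nn.Linear, torch.nn.Conv2d)):
            prune.l1_unstructured(module, name="weight", amount=amount)  # zero the smallest 90%
    with torch.no_grad():
        log_p_full = F.log_softmax(classifier(inputs), dim=1)
        p_pruned = F.softmax(pruned(inputs), dim=1)
        # Per-sample KL(pruned || full): large values mean the pruned model "forgot" the sample.
        return F.kl_div(log_p_full, p_pruned, reduction="none").sum(dim=1)

SDCLR never computes such a ranking explicitly; the pruned self-competitor described below plays the same role online, inside the contrastive loss.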

Method

Figure (pipeline): overview of the proposed SDCLR framework. Built on top of the SimCLR pipeline by default, the uniqueness of SDCLR lies in its two network branches: one is the target model to be trained, and the other is a "self-competitor" model pruned online from the former. The two branches share weights for their non-pruned parameters, while each branch has its own independent batch normalization layers. Since the self-competitor is always obtained and updated from the latest target model, the two branches co-evolve during training, and contrasting them implicitly places more weight on long-tail samples. A minimal sketch of one training step follows.
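The shell scripts below drive the official implementation; for intuition only, here is a minimal PyTorch sketch of a single SDCLR training step under simplifying assumptions: a toy fully-connected encoder stands in for the ResNet backbone, the per-branch batch normalization layers are omitted, and helper names such as SmallEncoder and make_prune_masks are hypothetical.

# Minimal, self-contained sketch of one SDCLR step (not the official code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallEncoder(nn.Module):
    """Toy stand-in for the ResNet backbone + projection head."""
    def __init__(self, dim_in=32, dim_out=16):
        super().__init__()
        self.fc1 = nn.Linear(dim_in, 64)
        self.fc2 = nn.Linear(64, dim_out)

    def forward(self, x, masks=None):
        # With masks, this is the "self-damaged" (pruned) branch; the weights
        # themselves are shared, only the mask differs between branches.
        w1 = self.fc1.weight * masks["fc1"] if masks else self.fc1.weight
        w2 = self.fc2.weight * masks["fc2"] if masks else self.fc2.weight
        h = F.relu(F.linear(x, w1, self.fc1.bias))
        return F.linear(h, w2, self.fc2.bias)

def make_prune_masks(model, prune_percent=0.9):
    """Magnitude pruning: per layer, zero out the smallest `prune_percent` of weights."""
    masks = {}
    for name, module in model.named_modules():
        if isinstance(module, nn.Linear):
            w = module.weight.detach().abs()
            k = int(w.numel() * prune_percent)
            threshold = w.flatten().kthvalue(k).values if k > 0 else w.min() - 1
            masks[name] = (w > threshold).float()
    return masks

def nt_xent(z1, z2, temperature=0.2):
    """Standard SimCLR (NT-Xent) loss between the two branches' embeddings."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = z @ z.t() / temperature
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float("-inf"))  # drop self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# One training step: one augmented view per branch; masks are refreshed from the
# latest target weights as training proceeds, so the two branches co-evolve.
encoder = SmallEncoder()
optimizer = torch.optim.SGD(encoder.parameters(), lr=0.1)
view1, view2 = torch.randn(8, 32), torch.randn(8, 32)   # placeholders for two augmented views

masks = make_prune_masks(encoder, prune_percent=0.9)
z_target = encoder(view1)                # full target branch
z_damaged = encoder(view2, masks=masks)  # pruned self-competitor branch
optimizer.zero_grad()
loss = nt_xent(z_target, z_damaged)
loss.backward()
optimizer.step()

Because the non-pruned parameters are shared, gradients from both branches update the same target weights; only the mask (and, in the full method, the branch-specific batch normalization statistics) distinguishes the self-competitor.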

Environment

Requirements:

pytorch 1.7.1 
opencv-python
scikit-learn 
matplotlib

Recommended installation commands (Linux)

conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=10.2 -c pytorch # change cuda version according to hardware
pip install opencv-python
conda install -c conda-forge scikit-learn matplotlib

Details about Imagenet-100-LT and Imagenet-LT-exp

Imagenet-100-LT sampling list

Imagenet-LT-exp sampling list
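The two links above point to the exact per-image sampling lists used for Imagenet-100-LT and Imagenet-LT-exp. Purely as an illustration of what such a list contains, below is a hypothetical sketch of building an exponential-profile long-tail list from an ImageNet-style directory; the imbalance ratio, file names, and output format are assumptions, not the repository's actual split-generation procedure.

# Hypothetical sketch only: construct a long-tail sampling list whose class
# sizes decay exponentially from the largest class to the smallest.
import os
import random

def long_tail_counts(num_classes, max_per_class, imbalance_ratio=0.01):
    """Class i keeps roughly max_per_class * imbalance_ratio ** (i / (num_classes - 1)) images."""
    return [int(max_per_class * imbalance_ratio ** (i / (num_classes - 1)))
            for i in range(num_classes)]

def build_sampling_list(data_root, imbalance_ratio=0.01, seed=0):
    rng = random.Random(seed)
    classes = sorted(os.listdir(data_root))  # one sub-directory per class
    max_per_class = max(len(os.listdir(os.path.join(data_root, c))) for c in classes)
    counts = long_tail_counts(len(classes), max_per_class, imbalance_ratio)
    kept = []
    for cls, n in zip(classes, counts):
        files = sorted(os.listdir(os.path.join(data_root, cls)))
        rng.shuffle(files)
        kept += [os.path.join(cls, f) for f in files[:n]]  # class-relative paths, one per line
    return kept

# Example usage (placeholder file name, not the repository's):
# with open("my_LT_train_list.txt", "w") as fh:
#     fh.write("\n".join(build_sampling_list("/path/to/imagenet/train")))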

Downloading pretrained models

CIFAR10: pretraining, fine-tuning

CIFAR100: pretraining, fine-tuning

Imagenet100/Imagenet: pretraining, fine-tuning

Train and evaluate pretrained models

Before you start

chmod +x  cmds/shell_scrips/*

CIFAR10

SimCLR on balanced training datasets

# pre-train and finetune
for split_num in 1 2 3 4 5
do
./cmds/shell_scrips/cifar-10-LT.sh -g 1 -w 8 --split split${split_num}_D_b
done

# evaluate pretrained model (after download and unzip the pretrained model)
for split_num in 1 2 3 4 5
do
./cmds/shell_scrips/cifar-10-LT.sh -g 1 -w 8 --split split${split_num}_D_b  --only_finetuning True  --test_only True
done

# summarize results (after "pre-train and finetune" or "evaluate pretrained model")
# linear separability
python exp_analyse.py --dataset cifar10
# few shot
python exp_analyse.py --dataset cifar10 --fewShot

SimCLR on long-tail training datasets

# pre-train and finetune
for split_num in 1 2 3 4 5
do
./cmds/shell_scrips/cifar-10-LT.sh -g 1 -w 8 --split split${split_num}_D_i
done

# evaluate pretrained model (after download and unzip the pretrained model)
for split_num in 1 2 3 4 5 
do
./cmds/shell_scrips/cifar-10-LT.sh -g 1 -w 8 --split split${split_num}_D_i --only_finetuning True --test_only True
done

# summarize results (after "pre-train and finetune" or "evaluate pretrained model")
# linear separability
python exp_analyse.py --dataset cifar10 --LT
# few shot
python exp_analyse.py --dataset cifar10 --LT --fewShot

SDCLR on long-tail training datasets

# pre-train and finetune
for split_num in 1 2 3 4 5
do
./cmds/shell_scrips/cifar-10-LT.sh -g 1 -w 8 --split split${split_num}_D_i --prune True --prune_percent 0.9 --prune_dual_bn True
done

# evaluate pretrained model (after download and unzip the pretrained model)
for split_num in 1 2 3 4 5 
do
./cmds/shell_scrips/cifar-10-LT.sh -g 1 -w 8 --split split${split_num}_D_i --prune True --prune_percent 0.9 --prune_dual_bn True --only_finetuning True --test_only True
done

# summarize results (after "pre-train and finetune" or "evaluate pretrained model")
# linear separability
python exp_analyse.py --dataset cifar10 --LT --prune
# few shot
python exp_analyse.py --dataset cifar10 --LT --prune --fewShot

CIFAR100

SimCLR on balanced training datasets

# pre-train and finetune
for split_num in 1 2 3 4 5
do
./cmds/shell_scrips/cifar-100-LT.sh -g 1 -p 4867 -w 8 --split cifar100_split${split_num}_D_b
done

# evaluate pretrained model (after download and unzip the pretrained model)
for split_num in 1 2 3 4 5
do
./cmds/shell_scrips/cifar-100-LT.sh -g 1 -p 4867 -w 8 --split cifar100_split${split_num}_D_b --only_finetuning True --test_only True
done

# summarize results (after "pre-train and finetune" or "evaluate pretrained model")
# linear separability
python exp_analyse.py --dataset cifar100
# few shot
python exp_analyse.py --dataset cifar100 --fewShot

SimCLR on long-tail training datasets

# pre-train and finetune
for split_num in 1 2 3 4 5
do
./cmds/shell_scrips/cifar-100-LT.sh -g 1 -p 4867 -w 8 --split cifar100_split${split_num}_D_i
done

# evaluate pretrained model (after download and unzip the pretrained model)
for split_num in 1 2 3 4 5
do
./cmds/shell_scrips/cifar-100-LT.sh -g 1 -p 4867 -w 8 --split cifar100_split${split_num}_D_i --only_finetuning True --test_only True
done

# summarize results (after "pre-train and finetune" or "evaluate pretrained model")
# linear separability
python exp_analyse.py --dataset cifar100 --LT
# few shot
python exp_analyse.py --dataset cifar100 --LT --fewShot

SDCLR on long-tail training datasets

# pre-train and finetune
for split_num in 1 2 3 4 5
do
./cmds/shell_scrips/cifar-100-LT.sh -g 1 -p 4867 -w 8 --split cifar100_split${split_num}_D_i --prune True --prune_percent 0.9 --prune_dual_bn True
done

# evaluate pretrained model (after download and unzip the pretrained model)
for split_num in 1 2 3 4 5
do
./cmds/shell_scrips/cifar-100-LT.sh -g 1 -p 4867 -w 8 --split cifar100_split${split_num}_D_i --prune True --prune_percent 0.9 --prune_dual_bn True --only_finetuning True --test_only True
done

# summarize results (after "pre-train and finetune" or "evaluate pretrained model")
# linear separability
python exp_analyse.py --dataset cifar100 --LT --prune
# few shot
python exp_analyse.py --dataset cifar100 --LT --prune --fewShot

Imagenet-100-LT

SimCLR on balanced training datasets

# pre-train and finetune
./cmds/shell_scrips/imagenet-100-res50-LT.sh --data /path/to/imagenet -g 2 -p 4867 -w 10 --split imageNet_100_BL_train

# evaluate pretrained model (after download and unzip the pretrained model)
./cmds/shell_scrips/imagenet-100-res50-LT.sh --data /path/to/imagenet -g 2 -p 4867 -w 10 --split imageNet_100_BL_train --only_finetuning True --test_only True

# summarize results (after "pre-train and finetune" or "evaluate pretrained model")
# linear separability
python exp_analyse.py --dataset imagenet100
# few shot
python exp_analyse.py --dataset imagenet100 --fewShot

SimCLR on long-tail training datasets

# pre-train and finetune
./cmds/shell_scrips/imagenet-100-res50-LT.sh --data /path/to/imagenet -g 2 -p 4867 -w 10 --split imageNet_100_LT_train

# evaluate pretrained model (after download and unzip the pretrained model)
./cmds/shell_scrips/imagenet-100-res50-LT.sh --data /path/to/imagenet -g 2 -p 4860 -w 10 --split imageNet_100_LT_train --only_finetuning True --test_only True

# summarize results (after "pre-train and finetune" or "evaluate pretrained model")
# linear separability
python exp_analyse.py --dataset imagenet100 --LT
# few shot
python exp_analyse.py --dataset imagenet100 --LT --fewShot

SDCLR on long-tail training datasets

# pre-train and finetune
./cmds/shell_scrips/imagenet-100-res50-LT.sh --data /path/to/imagenet -g 2 -p 4867 -w 10 --split imageNet_100_LT_train --prune True --prune_percent 0.3 --prune_dual_bn True --temp 0.3

# evaluate pretrained model (after download and unzip the pretrained model)
./cmds/shell_scrips/imagenet-100-res50-LT.sh --data /path/to/imagenet -g 2 -p 4860 -w 10 --split imageNet_100_LT_train --prune True --prune_percent 0.3 --prune_dual_bn True --temp 0.3 --only_finetuning True --test_only True

# summarize results (after "pre-train and finetune" or "evaluate pretrained model")
# linear separability
python exp_analyse.py --dataset imagenet100 --LT --prune
# few shot
python exp_analyse.py --dataset imagenet100 --LT --prune --fewShot

Imagenet-Exp-LT

SimCLR on balanced training datasets

# pre-train and finetune
./cmds/shell_scrips/imagenet-res50-LT.sh --data /path/to/imagenet -g 2 -p 4867 -w 10 --split imageNet_BL_exp_train

# evaluate pretrained model (after download and unzip the pretrained model)
./cmds/shell_scrips/imagenet-res50-LT.sh --data /path/to/imagenet -g 2 -p 4867 -w 10 --split imageNet_BL_exp_train --only_finetuning True --test_only True

# summarize results (after "pre-train and finetune" or "evaluate pretrained model")
# linear separability
python exp_analyse.py --dataset imagenet
# few shot
python exp_analyse.py --dataset imagenet --fewShot

SimCLR on long-tail training datasets

# pre-train and finetune
./cmds/shell_scrips/imagenet-res50-LT.sh --data /path/to/imagenet -g 2 -p 4867 -w 10 --split imageNet_LT_exp_train

# evaluate pretrained model (after download and unzip the pretrained model)
./cmds/shell_scrips/imagenet-res50-LT.sh --data /path/to/imagenet -g 2 -p 4868 -w 10 --split imageNet_LT_exp_train --only_finetuning True --test_only True

# summarize results (after "pre-train and finetune" or "evaluate pretrained model")
# linear separability
python exp_analyse.py --dataset imagenet --LT
# few shot
python exp_analyse.py --dataset imagenet --LT --fewShot

Citation

@inproceedings{jiang2021self,
  title={Self-Damaging Contrastive Learning},
  author={Jiang, Ziyu and Chen, Tianlong and Mortazavi, Bobak and Wang, Zhangyang},
  booktitle={International Conference on Machine Learning},
  year={2021}
}
Owner

VITA (Visual Informatics Group @ University of Texas at Austin)