Contrastive Learning with Non-Semantic Negatives

Overview

This repository is the official implementation of Robust Contrastive Learning Using Negative Samples with Diminished Semantics. Contrastive learning relies on positive pairs that preserve semantic content while perturbing superficial features of the training images. We propose the complementary operation for negative samples: generating negatives in which only the superficial (non-semantic) features are preserved while the semantics are destroyed, which makes the learned representations more robust.
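
As a rough illustration of the patch-based idea, the sketch below shuffles square patches of an image so that local texture is preserved while global semantic structure is destroyed. The patch-size range mirrors the --patch-ratio 16 72 argument used in the training command, but the function name and the exact sampling logic are illustrative assumptions, not the repository's implementation.

import random
import torch

def patch_shuffle_negative(img, min_patch=16, max_patch=72):
    # Illustrative sketch: build a negative sample that keeps local texture
    # but destroys global semantics by shuffling square patches.
    # `img` is a CHW tensor at least max_patch pixels on each side.
    c, h, w = img.shape
    p = random.randint(min_patch, max_patch)
    # Crop to a multiple of the patch size so the grid tiles evenly.
    h2, w2 = (h // p) * p, (w // p) * p
    x = img[:, :h2, :w2]
    gh, gw = h2 // p, w2 // p
    # Split into a grid of patches: (c, gh, p, gw, p) -> (gh*gw, c, p, p).
    patches = x.reshape(c, gh, p, gw, p).permute(1, 3, 0, 2, 4).reshape(gh * gw, c, p, p)
    # Randomly permute patch positions.
    patches = patches[torch.randperm(gh * gw)]
    # Reassemble into an image of the same (cropped) size.
    return patches.reshape(gh, gw, c, p, p).permute(2, 0, 3, 1, 4).reshape(c, h2, w2)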

Preparation

Install PyTorch and see preprocess/ for the ImageNet-100 and ImageNet-Texture preprocessing scripts.
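
The scripts in preprocess/ are the authoritative reference. Purely as a hint of what building ImageNet-100 typically involves, the sketch below symlinks a 100-class subset out of a full ImageNet directory; the class-list file and directory layout are assumptions, not the repository's script.

import os
from pathlib import Path

def build_imagenet100(imagenet_dir, out_dir, class_list_file):
    # Sketch: link the selected class folders for each split.
    # `class_list_file` is assumed to hold one WordNet class ID per line.
    classes = Path(class_list_file).read_text().split()
    for split in ("train", "val"):
        for cls in classes:
            src = Path(imagenet_dir) / split / cls
            dst = Path(out_dir) / split / cls
            dst.parent.mkdir(parents=True, exist_ok=True)
            if src.is_dir() and not dst.exists():
                os.symlink(src.resolve(), dst)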

Training

The following commands pre-train MoCo-v2 with patch-based or texture-based non-semantic negative samples (NS). The code is developed with minimal modifications from the official MoCo implementation; a hedged sketch of how the extra negatives could enter the contrastive objective follows the notes below.

python moco-non-sem-neg.py -a resnet50 --lr 0.03 --batch-size 128 --dist-url 'tcp://localhost:10001' \
  --multiprocessing-distributed --world-size 1 --rank 0 \
  --mlp --moco-t 0.2 --aug-plus --cos --moco-k 16384 \
  --robust nonsem --num-nonsem 1 --alpha 2 --epochs 200 --patch-ratio 16 72 \
  --ckpt_path ./ckpts/mocov2_mocok16384_bs128_lr0.03_nonsem_16_72_noaug_nn1_alpha2_epoch200  \
  /path/to/imagenet-100/ 

python moco-non-sem-neg.py -a resnet50 --lr 0.03 --batch-size 128 --dist-url 'tcp://localhost:10001' \
  --multiprocessing-distributed --world-size 1 --rank 0 \
  --mlp --moco-t 0.2 --aug-plus --cos --moco-k 16384 \
  --robust texture_syn --num-nonsem 1 --alpha 2 --epochs 200 \
  --ckpt_path ./ckpts/mocov2_mocok16384_bs128_lr0.03_texture_nn1_alpha2_epoch200 \
  /path/to/imagenet-100-texture/ 

  • Replace /path/to/imagenet-100/ and /path/to/imagenet-100-texture/ with your ImageNet-100 and ImageNet-100-Texture dataset directories.
  • Adjust --alpha and --moco-k to reproduce results with different configurations.
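
The sketch below shows one plausible way the non-semantic negatives and the --alpha weight could enter a MoCo-style InfoNCE loss. It is not the repository's code: the assumption that alpha weights the non-semantic term in the denominator, and the single non-semantic key per query (matching --num-nonsem 1), are made for illustration; see moco-non-sem-neg.py for the actual objective.

import torch

def infonce_with_nonsem(q, k_pos, queue, k_ns, t=0.2, alpha=2.0):
    # q, k_pos, k_ns: L2-normalized embeddings of shape (N, D);
    # queue: bank of ordinary negatives of shape (D, K), as in MoCo.
    # ASSUMPTION: alpha weights the non-semantic negative term.
    pos = torch.exp(torch.einsum("nd,nd->n", q, k_pos) / t)              # (N,)
    neg = torch.exp(torch.einsum("nd,dk->nk", q, queue) / t).sum(dim=1)  # (N,)
    ns = torch.exp(torch.einsum("nd,nd->n", q, k_ns) / t)                # (N,)
    # A real implementation would use a logits + cross-entropy form for
    # numerical stability; the explicit ratio is kept here for clarity.
    return -torch.log(pos / (pos + neg + alpha * ns)).mean()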

Linear Evaluation

Run the following command to reproduce the linear evaluation of the MoCo-v2 + patch-based NS model reported in Table 1 (a sketch of the linear-probing protocol follows the command).

python main_lincls.py -a resnet50 --lr 10.0 --batch-size 128 --epochs 60 \
  --pretrained ./ckpts/mocov2_mocok16384_bs128_lr0.03_nonsem_16_72_noaug_nn1_alpha2_epoch200/checkpoint_0199.pth.tar \
  --dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 \
  --ckpt_path ./ckpts/mocov2_mocok16384_bs128_lr0.03_nonsem_16_72_noaug_nn1_alpha2_epoch200 \
  /path/to/imagenet-100/ 
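
main_lincls.py follows the standard MoCo linear-evaluation protocol: load the pre-trained query encoder, freeze it, and train only a new linear head. The sketch below shows that core step; the checkpoint key prefix "module.encoder_q." follows the usual MoCo checkpoint format and is an assumption here.

import torch
import torch.nn as nn
import torchvision.models as models

def build_linear_probe(ckpt_path, num_classes=100):
    # Sketch: frozen ResNet-50 backbone + trainable linear classifier.
    model = models.resnet50(num_classes=num_classes)
    ckpt = torch.load(ckpt_path, map_location="cpu")
    state = {}
    for k, v in ckpt["state_dict"].items():
        # Keep the query encoder weights and drop its MLP head (assumed layout).
        if k.startswith("module.encoder_q.") and not k.startswith("module.encoder_q.fc"):
            state[k[len("module.encoder_q."):]] = v
    model.load_state_dict(state, strict=False)
    # Freeze everything except the freshly initialized linear layer.
    for name, p in model.named_parameters():
        p.requires_grad = name.startswith("fc.")
    return model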

Pre-trained Models

You can download pretrained models here. The numbers below are accuracy (%) on ImageNet-100 and on its corruption, sketch, stylized, and rendition test sets, reported as mean±std over three runs (Run1-Run3):

| Method    | moco-k | alpha | ImageNet-100 | Corruption | Sketch     | Stylized   | Rendition  | Checkpoints      |
|-----------|--------|-------|--------------|------------|------------|------------|------------|------------------|
| MoCo-v2   | 16384  | -     | 77.88±0.28   | 43.08±0.27 | 28.24±0.58 | 16.20±0.55 | 32.92±0.12 | Run1, Run2, Run3 |
| + Texture | 16384  | 2     | 77.76±0.17   | 43.58±0.33 | 29.11±0.39 | 16.59±0.17 | 33.36±0.15 | Run1, Run2, Run3 |
| + Patch   | 16384  | 2     | 79.35±0.12   | 45.13±0.35 | 31.76±0.88 | 17.37±0.19 | 34.78±0.15 | Run1, Run2, Run3 |
| + Patch   | 16384  | 3     | 75.58±0.52   | 44.45±0.15 | 34.03±0.58 | 18.60±0.26 | 36.89±0.11 | Run1, Run2, Run3 |
| MoCo-v2   | 8192   | -     | 77.73±0.38   | 43.22±0.39 | 28.45±0.36 | 16.83±0.12 | 33.19±0.44 | Run1, Run2, Run3 |
| + Patch   | 8192   | 2     | 79.54±0.32   | 45.48±0.20 | 33.36±0.45 | 17.81±0.32 | 36.31±0.37 | Run1, Run2, Run3 |