PAWS 🐾 Predicting View-Assignments with Support Samples

This repo provides a PyTorch implementation of PAWS (predicting view assignments with support samples), as described in the paper Semi-Supervised Learning of Visual Features by Non-Parametrically Predicting View Assignments with Support Samples.

[Figure: PAWS pre-training flowchart]

PAWS is a method for semi-supervised learning that builds on the principles of self-supervised distance-metric learning. PAWS pre-trains a model to minimize a consistency loss, which ensures that different views of the same unlabeled image are assigned similar pseudo-labels. The pseudo-labels are generated non-parametrically, by comparing the representations of the image views to those of a set of randomly sampled labeled images. The distance between the view representations and labeled representations is used to provide a weighting over class labels, which we interpret as a soft pseudo-label. By non-parametrically incorporating labeled samples in this way, PAWS extends the distance-metric loss used in self-supervised methods such as BYOL and SwAV to the semi-supervised setting.
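
Concretely, the soft pseudo-label for a view is a similarity-weighted average of the support samples' one-hot labels. The snippet below is a minimal PyTorch sketch of this non-parametric step (function and variable names are illustrative, not the repo's API):

import torch
import torch.nn.functional as F

def soft_pseudo_labels(view_repr, support_repr, support_labels, tau=0.1):
    # view_repr: (B, D) representations of unlabeled image views
    # support_repr: (S, D) representations of randomly sampled labeled images
    # support_labels: (S, C) one-hot class labels of the support samples
    view_repr = F.normalize(view_repr, dim=1)
    support_repr = F.normalize(support_repr, dim=1)
    sims = view_repr @ support_repr.t() / tau   # temperature-scaled cosine similarities
    weights = F.softmax(sims, dim=1)            # weighting over support samples
    return weights @ support_labels             # (B, C) soft distribution over classes

The consistency loss then encourages the soft labels computed from different views of the same image to agree.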

Also provided in this repo is a PyTorch implementation of the semi-supervised SimCLR+CT method described in the paper Supervision Accelerates Pretraining in Contrastive Semi-Supervised Learning of Visual Representations. SimCLR+CT combines the SimCLR self-supervised loss with the SuNCEt (supervised noise contrastive estimation) loss for semi-supervised learning.
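
As a rough illustration of the SuNCEt component, the sketch below contrasts each labeled sample against samples from other classes in the batch (a hypothetical helper, not the repo's implementation; it assumes every class appears at least twice in the batch):

import torch
import torch.nn.functional as F

def suncet_style_loss(z, labels, tau=0.1):
    # z: (N, D) embeddings of labeled samples; labels: (N,) integer class ids
    z = F.normalize(z, dim=1)
    sims = (z @ z.t() / tau).exp()
    off_diag = ~torch.eye(len(z), dtype=torch.bool, device=z.device)
    same_class = (labels[:, None] == labels[None, :]) & off_diag
    pos = (sims * same_class).sum(dim=1)    # similarity mass on same-class samples
    denom = (sims * off_diag).sum(dim=1)    # similarity mass on all other samples
    return -(pos / denom).log().mean()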

Pretrained models

We provide the full checkpoints for the PAWS pre-trained models, both with and without fine-tuning. The full checkpoints for the pretrained models contain the backbone, projection head, and prediction head weights. The finetuned model checkpoints, on the other hand, only include the backbone and linear classifier head weights. Top-1 classification accuracy for the pretrained models is reported using a nearest neighbour classifier. Top-1 classification accuracy for the finetuned models is reported using the class labels predicted by the network's last linear layer.

|        |         | 1% labels       |           | 10% labels      |           |
| epochs | network | pretrained (NN) | finetuned | pretrained (NN) | finetuned |
|--------|---------|-----------------|-----------|-----------------|-----------|
| 300    | RN50    | 65.4%           | 66.5%     | 73.1%           | 75.5%     |
| 200    | RN50    | 64.6%           | 66.1%     | 71.9%           | 75.0%     |
| 100    | RN50    | 62.6%           | 63.8%     | 71.0%           | 73.9%     |

Running PAWS semi-supervised pre-training and fine-tuning

Config files

All experiment parameters are specified in config files (as opposed to command-line arguments). Config files make it easier to keep track of different experiments and to launch batches of jobs at once. See the configs/ directory for example config files.
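
For instance, a launch script might load the YAML config with PyYAML before dispatching to the requested routine; the snippet below is a sketch under that assumption, not the exact logic of main.py:

import yaml

with open("configs/paws/cifar10_train.yaml") as f:
    params = yaml.safe_load(f)   # nested dict of experiment parameters
print(params.keys())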

Requirements

  • Python 3.8
  • PyTorch 1.7.1
  • torchvision
  • CUDA 11.0
  • Apex with CUDA extension
  • Other dependencies: PyYAML, numpy, opencv, submitit

Labeled Training Splits

For reproducibility, we have pre-specified the labeled training images as .txt files in the imagenet_subsets/ and cifar10_subsets/ directories. Based on the specifications in your experiment's config file, our implementation will automatically use the images listed in one of these .txt files as the set of labeled images. On ImageNet, if you request a split of the data that is not contained in imagenet_subsets/ (for example, if you set unlabeled_frac != 0.9 and unlabeled_frac != 0.99, i.e., neither the 10% nor the 1% labeled setting), then at the start of training the code will independently flip a coin for each training image, keeping the image's label with probability 1 - unlabeled_frac.
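
The random-split behavior described above amounts to a per-image Bernoulli draw; the snippet below only illustrates that rule and is not the repo's code:

import torch

unlabeled_frac = 0.95          # hypothetical value with no matching .txt subset
num_train_images = 1_281_167   # ImageNet-1k training-set size
keep_label = torch.rand(num_train_images) < (1.0 - unlabeled_frac)
# keep_label[i] is True if image i retains its label (expected ~5% of images here)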

Single-GPU training

PAWS is very simple to implement and experiment with. Our implementation starts from main.py, which parses the experiment config file and runs the desired script (e.g., PAWS pre-training or fine-tuning) locally on a single GPU.

CIFAR10 pre-training

For example, to pre-train with PAWS on CIFAR10 locally on a single GPU, using the pre-training experiment configs specified inside configs/paws/cifar10_train.yaml, run:

python main.py \
  --sel paws_train \
  --fname configs/paws/cifar10_train.yaml

CIFAR10 evaluation

To fine-tune the pre-trained model for a few optimization steps with the SuNCEt (supervised noise contrastive estimation) loss on a single GPU, using the experiment configs specified inside configs/paws/cifar10_snn.yaml, run:

python main.py \
  --sel snn_fine_tune \
  --fname configs/paws/cifar10_snn.yaml

To then evaluate the nearest-neighbours performance of the model, locally, on a single GPU, run:

python snn_eval.py \
  --model-name wide_resnet28w2 --use-pred \
  --pretrained $path_to_pretrained_model \
  --unlabeled_frac $1.-fraction_of_labeled_train_data_to_support_nearest_neighbour_classification \
  --root-path $path_to_root_datasets_directory \
  --image-folder $image_directory_inside_root_path \
  --dataset-name cifar10_fine_tune \
  --split-seed $which_prespecified_seed_to_split_labeled_data

Multi-GPU training

Running PAWS across multiple GPUs on a cluster is also very simple. In the multi-GPU setting, the implementation starts from main_distributed.py, which, in addition to parsing the config file and launching the desired script, also allows for specifying details about distributed training. For distributed training, we use the popular open-source submitit tool and provide examples for a SLURM cluster, but feel free to edit main_distributed.py for your purposes to specify a different approach to launching a multi-GPU job on a cluster.
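
For reference, a submitit-based SLURM launch typically looks like the sketch below; main_distributed.py wires this up for you, and the partition name and resource values here are placeholders:

import submitit

def pretrain():
    pass  # placeholder for the distributed pre-training entry point

executor = submitit.AutoExecutor(folder="slurm_logs")
executor.update_parameters(
    slurm_partition="your_partition",  # placeholder partition name
    nodes=8,
    tasks_per_node=8,
    gpus_per_node=8,
    timeout_min=1000,
)
job = executor.submit(pretrain)
print(job.job_id)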

ImageNet pre-training

For example, to pre-train with PAWS on 64 GPUs, using the pre-training experiment configs specified inside configs/paws/imgnt_train.yaml, run:

python main_distributed.py \
  --sel paws_train \
  --fname configs/paws/imgnt_train.yaml \
  --partition $slurm_partition \
  --nodes 8 --tasks-per-node 8 \
  --time 1000 \
  --device volta16gb

ImageNet fine-tuning

To fine-tune a pre-trained model on 4 GPUs using the fine-tuning experiment configs specified inside configs/paws/fine_tune.yaml, run:

python main_distributed.py \
  --sel fine_tune \
  --fname configs/paws/fine_tune.yaml \
  --partition $slurm_partition \
  --nodes 1 --tasks-per-node 4 \
  --time 1000 \
  --device volta16gb

To evaluate the fine-tuned model locally on a single GPU, use the same config file, configs/paws/fine_tune.yaml, but change training: true to training: false. Then run:

python main.py \
  --sel fine_tune \
  --fname configs/paws/fine_tune.yaml

Soft Nearest Neighbours evaluation

To evaluate the nearest-neighbours performance of a pre-trained ResNet50 model on a single GPU, run:

python snn_eval.py \
  --model-name resnet50 --use-pred \
  --pretrained $path_to_pretrained_model \
  --unlabeled_frac $1.-fraction_of_labeled_train_data_to_support_nearest_neighbour_classification \
  --root-path $path_to_root_datasets_directory \
  --image-folder $image_directory_inside_root_path \
  --dataset-name $one_of:[imagenet_fine_tune, cifar10_fine_tune]

License

See the LICENSE file for details about the license under which this code is made available.

Citation

If you find this repository useful in your research, please consider giving a star and a citation 🐾

@article{assran2021semisupervised,
  title={Semi-Supervised Learning of Visual Features by Non-Parametrically Predicting View Assignments with Support Samples},
  author={Assran, Mahmoud and Caron, Mathilde and Misra, Ishan and Bojanowski, Piotr and Joulin, Armand and Ballas, Nicolas and Rabbat, Michael},
  journal={arXiv preprint arXiv:2104.13963},
  year={2021}
}
@article{assran2020supervision,
  title={Supervision Accelerates Pretraining in Contrastive Semi-Supervised Learning of Visual Representations},
  author={Assran, Mahmoud and Ballas, Nicolas and Castrejon, Lluis and Rabbat, Michael},
  journal={arXiv preprint arXiv:2006.10803},
  year={2020}
}