Understanding the Generalization Benefit of Model Invariance from a Data Perspective

Overview

This is the code for our NeurIPS 2021 paper "Understanding the Generalization Benefit of Model Invariance from a Data Perspective". There are two major parts in our code: sample covering number estimation and generalization benefit evaluation.

Requirements

  • Python 3.8
  • PyTorch
  • torchvision
  • scikit-learn-extra
  • scipy
  • robustness package (already included in our code)

Our code is based on the robustness package.

Dataset

  • CIFAR-10: Download and extract the data into /data/cifar10
  • R2N2: Download the ShapeNet rendered images and put the data into /data/r2n2

The randomly sampled R2N2 images used for computing sample covering numbers, and the indices of examples for different sample sizes, can be found here.

Estimation of sample covering numbers

To estimate the sample covering numbers of different data transformations, run the following script in /scn:

CUDA_VISIBLE_DEVICES=0 python run_scn.py  --epsilon 3 --transformation crop --cover_number_method fast --data-path /path/to/dataset 

Note that the input is an N x C x H x W tensor, where N is the sample size.
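
For intuition only, below is a minimal sketch of a greedy epsilon-cover estimate in Euclidean distance on flattened images. The actual estimation (including the exact and fast variants selected by --cover_number_method) is implemented in run_scn.py; the function name here is illustrative.

import torch

def greedy_covering_number(x, epsilon):
    # x: N x C x H x W tensor of (transformed) images. Returns the number
    # of centers needed so that every sample lies within `epsilon`
    # (Euclidean distance) of some chosen center -- an upper bound on the
    # epsilon-covering number of the sample.
    flat = x.reshape(x.size(0), -1).float()
    uncovered = torch.ones(flat.size(0), dtype=torch.bool)
    n_centers = 0
    while uncovered.any():
        # take the first still-uncovered point as a new center
        center = flat[uncovered.nonzero()[0, 0]]
        dists = torch.cdist(flat, center.unsqueeze(0)).squeeze(1)
        uncovered &= dists > epsilon   # mark newly covered points
        n_centers += 1
    return n_centers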

Evaluation of generalization benefit

To train the model with the data augmentation method, run the following script in /learn_invariance for the R2N2 dataset:

CUDA_VISIBLE_DEVICES=0 python main.py \
    --dataset r2n2 \
    --data ../data/r2n2/ShapeNetRendering \
    --metainfo-path ../data/r2n2/metainfo_all.json \
    --transforms view  \
    --inv-method aug \
    --out-dir /path/to/out_dir \
    --arch resnet18 --epoch 110 --lr 1e-2 --step-lr 50 \
    --workers 30 --batch-size 128 --exp-name view

or the following script for the CIFAR-10 dataset:

CUDA_VISIBLE_DEVICES=0 python main.py \
    --dataset cifar \
    --data ../data/cifar10 \
    --n-per-class all \
    --transforms crop  \
    --inv-method aug \
    --out-dir /path/to/out_dir \
    --arch resnet18 --epoch 110 --lr 1e-2 --step-lr 50 \
    --workers 30 --batch-size 128 --exp-name crop 

Setting --transforms to one of {none, flip, crop, rotate, view} selects the corresponding transformation.
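
For illustration, the mapping below sketches how these transformation names could correspond to standard torchvision augmentations; the actual augmentations (and their parameters, e.g. crop size and rotation range) are defined in /learn_invariance, so the values shown are assumptions.

import torchvision.transforms as T

# Illustrative mapping only. `view` is handled by the R2N2 data loader
# (sampling a rendered viewpoint) rather than by a torchvision transform.
AUGMENTATIONS = {
    'none':   T.Compose([T.ToTensor()]),
    'flip':   T.Compose([T.RandomHorizontalFlip(), T.ToTensor()]),
    'crop':   T.Compose([T.RandomCrop(32, padding=4), T.ToTensor()]),
    'rotate': T.Compose([T.RandomRotation(30), T.ToTensor()]),
}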

To train the model with the regularization method, run the following script. Currently, the code only supports the 3D-view transformation on the R2N2 dataset.

CUDA_VISIBLE_DEVICES=0 python main.py \
    --dataset r2n2 \
    --data ../data/r2n2/ShapeNetRendering \
    --metainfo-path ../data/r2n2/metainfo_all.json \
    --transforms view  \
    --inv-method reg \
    --inv-method-beta 1 \
    --out-dir /path/to/out_dir \
    --arch resnet18 --epoch 110 --lr 1e-2 --step-lr 50 \
    --workers 30 --batch-size 128 --exp-name reg_view 
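
As a rough sketch (not the repository's exact implementation), the regularization objective can be thought of as a supervised loss plus an invariance penalty weighted by --inv-method-beta. The MSE-on-logits penalty and the function name below are assumptions.

import torch.nn.functional as F

def regularized_loss(model, x_view1, x_view2, targets, beta=1.0):
    # Cross-entropy on one view plus a consistency penalty that pulls the
    # model's outputs on two views of the same object together.
    # `beta` plays the role of --inv-method-beta.
    logits1 = model(x_view1)
    logits2 = model(x_view2)
    return F.cross_entropy(logits1, targets) + beta * F.mse_loss(logits1, logits2)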

To evaluate the model's invariance loss and worst-case consistency accuracy, run the following script:

CUDA_VISIBLE_DEVICES=0 python main.py  \
    --dataset r2n2 \
    --data ../data/r2n2/ShapeNetRendering \
    --metainfo-path ../data/r2n2/metainfo_all.json \
    --inv-method reg \
    --arch resnet18 \
    --resume /path/to/checkpoint.pt.best \
    --eval-only 1 \
    --transforms view  \
    --adv-eval 0 \
    --batch-size 2  \
    --no-store 

Note that to compute the worst-case consistency accuracy, all 24 view images need to be loaded in the R2N2RenderingsTorch class in dataset_3d.py.
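
The sketch below illustrates one way such a worst-case metric can be computed, assuming a B x 24 x C x H x W batch holding all views of each object; the function name and exact definition are illustrative rather than the repository's implementation.

import torch

def worst_case_consistency_accuracy(model, views, targets):
    # views: B x 24 x C x H x W tensor of all renderings per object.
    # targets: B ground-truth labels. An object counts as correct only if
    # every one of its 24 views is classified with the true label.
    b, v = views.shape[:2]
    logits = model(views.flatten(0, 1))        # (B*24) x num_classes
    preds = logits.argmax(dim=1).view(b, v)    # B x 24 predictions
    all_correct = (preds == targets.unsqueeze(1)).all(dim=1)
    return all_correct.float().mean().item()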
