Official Repository for the paper "Improving Baselines in the Wild".


iWildCam and FMoW baselines (WILDS)

This repository was originally forked from the official repository of the WILDS datasets (commit 7e103ed).

For general instructions, please refer to the original repository.

This repository contains code used to produce experimental results presented in:

Improving Baselines in the Wild

Apart from minor edits, the main changes we introduce are:

  • --validate_every flag (default: 1000) to specify how often (in number of training steps) cross-validation and checkpoint tracking are performed.
  • sub_val_metric option in the dataset configuration (see examples/configs/datasets.py) to specify a secondary metric to be tracked during training. This activates additional cross-validation and checkpoint tracking for the specified metric (see the sketch below).
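
As a rough illustration only (the keys and values below are assumptions for the sketch, not verbatim from the code; see examples/configs/datasets.py and examples/run_expt.py for the actual definitions), the secondary metric is declared per dataset and the validation frequency is passed on the command line:

```python
# Hypothetical excerpt in the spirit of examples/configs/datasets.py;
# the real keys and values may differ.
dataset_defaults = {
    'iwildcam': {
        # ... existing WILDS defaults ...
        'val_metric': 'F1-macro_all',   # primary model-selection metric
        'val_metric_decreasing': False,
        'sub_val_metric': 'acc_avg',    # secondary metric tracked in addition
    },
}

# Typical launch, validating every 1000 training steps (the default):
#   python examples/run_expt.py --dataset iwildcam --algorithm ERM \
#       --root_dir data --validate_every 1000
```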

Results

NB: To reproduce the numbers from the paper, the right PyTorch version must be used. All our experiments were conducted with 1.9.0+cu102, except for the "+ higher lr" rows in Table 2/FMoW (which we ran for the camera-ready version and the public release), for which 1.10.0+cu102 was used.
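
For example, a quick check of the installed version before launching training (a minimal sketch, not part of the repository):

```python
import torch

# Results in this README were obtained with torch 1.9.0+cu102
# (1.10.0+cu102 for the "+ higher lr" FMoW rows); other versions
# may reproduce the numbers only approximately.
print(torch.__version__)
assert torch.__version__.startswith(("1.9.0", "1.10.0")), (
    "Install torch 1.9.0 (or 1.10.0) built for CUDA 10.2 to match the paper setup."
)
```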

The training scripts, logs, and model checkpoints for the best configurations from our experiments can be found here for iWildCam & FMoW.

iWildCam

CV based on "Valid F1"

| Split / Metric | mean (std) | 3 runs |
| --- | --- | --- |
| IID Valid Acc | 82.5 (0.8) | [0.817, 0.835, 0.822] |
| IID Valid F1 | 46.7 (1.0) | [0.456, 0.481, 0.464] |
| IID Test Acc | 76.2 (0.1) | [0.762, 0.763, 0.761] |
| IID Test F1 | 47.9 (2.1) | [0.505, 0.479, 0.453] |
| Valid Acc | 64.1 (1.7) | [0.644, 0.619, 0.661] |
| Valid F1 | 38.3 (0.9) | [0.39, 0.371, 0.389] |
| Test Acc | 69.0 (0.3) | [0.69, 0.694, 0.687] |
| Test F1 | 32.1 (1.2) | [0.338, 0.31, 0.314] |
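
The reported mean (std) values appear consistent with aggregating the three raw run values into percentages with a population standard deviation; a minimal check (assuming NumPy, using the first row above):

```python
import numpy as np

# First row of the table above: IID Valid Acc over 3 runs.
runs = np.array([0.817, 0.835, 0.822])
mean, std = 100 * runs.mean(), 100 * runs.std(ddof=0)  # population std
print(f"{mean:.1f} ({std:.1f})")  # -> 82.5 (0.8)
```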

CV based on "Valid Acc"

| Split / Metric | mean (std) | 3 runs |
| --- | --- | --- |
| IID Valid Acc | 82.6 (0.7) | [0.836, 0.821, 0.822] |
| IID Valid F1 | 46.2 (0.9) | [0.472, 0.45, 0.464] |
| IID Test Acc | 75.8 (0.4) | [0.76, 0.753, 0.761] |
| IID Test F1 | 44.9 (0.4) | [0.444, 0.45, 0.453] |
| Valid Acc | 66.6 (0.4) | [0.666, 0.672, 0.661] |
| Valid F1 | 36.6 (2.1) | [0.369, 0.339, 0.389] |
| Test Acc | 68.6 (0.3) | [0.688, 0.682, 0.687] |
| Test F1 | 28.7 (2.0) | [0.279, 0.268, 0.314] |

FMoW

CV based on "Valid Region"

| Split / Metric | mean (std) | 3 runs |
| --- | --- | --- |
| IID Valid Acc | 63.9 (0.2) | [0.64, 0.636, 0.641] |
| IID Valid Region | 62.2 (0.5) | [0.623, 0.616, 0.628] |
| IID Valid Year | 49.8 (1.8) | [0.52, 0.475, 0.5] |
| IID Test Acc | 62.3 (0.2) | [0.626, 0.621, 0.621] |
| IID Test Region | 60.9 (0.6) | [0.617, 0.603, 0.606] |
| IID Test Year | 43.2 (1.1) | [0.438, 0.417, 0.442] |
| Valid Acc | 62.1 (0.0) | [0.62, 0.621, 0.621] |
| Valid Region | 52.5 (1.0) | [0.538, 0.513, 0.524] |
| Valid Year | 60.5 (0.2) | [0.602, 0.605, 0.608] |
| Test Acc | 55.6 (0.2) | [0.555, 0.554, 0.558] |
| Test Region | 34.8 (1.5) | [0.369, 0.334, 0.34] |
| Test Year | 50.2 (0.4) | [0.499, 0.498, 0.508] |

CV based on "Valid Acc"

| Split / Metric | mean (std) | 3 runs |
| --- | --- | --- |
| IID Valid Acc | 64.0 (0.1) | [0.641, 0.639, 0.641] |
| IID Valid Region | 62.3 (0.4) | [0.623, 0.617, 0.628] |
| IID Valid Year | 50.8 (0.6) | [0.514, 0.509, 0.5] |
| IID Test Acc | 62.3 (0.4) | [0.628, 0.62, 0.621] |
| IID Test Region | 61.1 (0.6) | [0.62, 0.608, 0.606] |
| IID Test Year | 43.6 (1.4) | [0.45, 0.417, 0.442] |
| Valid Acc | 62.1 (0.0) | [0.621, 0.621, 0.621] |
| Valid Region | 51.4 (1.3) | [0.522, 0.496, 0.524] |
| Valid Year | 60.6 (0.3) | [0.608, 0.601, 0.608] |
| Test Acc | 55.6 (0.2) | [0.556, 0.554, 0.558] |
| Test Region | 34.2 (1.2) | [0.357, 0.329, 0.34] |
| Test Year | 50.2 (0.5) | [0.496, 0.501, 0.508] |

BibTeX

@inproceedings{irie2021improving,
      title={Improving Baselines in the Wild}, 
      author={Kazuki Irie and Imanol Schlag and R\'obert Csord\'as and J\"urgen Schmidhuber},
      booktitle={Workshop on Distribution Shifts, NeurIPS},
      address={Virtual only},
      year={2021}
}