How robust are discriminatively trained zero-shot learning models?

This repository contains the PyTorch implementation of our paper How robust are discriminatively trained zero-shot learning models?, published in Image and Vision Computing (Elsevier).

Paper Highlights

In this paper, as a continuation of our previous work, we focus on the corruption robustness of discriminatively trained ZSL models. The highlights of our paper are as follows.

  1. To facilitate corruption robustness analyses, we curate and release CUB-C, SUN-C and AWA2-C, the first corruption benchmark datasets for ZSL.
  2. We show that, compared to fully supervised settings, class imbalance and model strength are severe issues affecting the robustness behaviour of ZSL models.
  3. Combined with our previous work, we define and show the pseudo robustness effect, where absolute metrics may not always reflect the robustness behaviour of a model. This effect is present for adversarial examples, but not for corruptions.
  4. We show that recent augmentation methods designed for better corruption robustness can also increase the clean accuracy of ZSL models, and set new strong baselines.
  5. We show in detail that unseen and seen classes are affected disproportionately. We also show that zero-shot and generalized zero-shot performances are affected differently.

Dataset Highlights

We release CUB-C, SUN-C and AWA2-C, which are corrupted versions of three popular ZSL benchmarks. Building on previous work, we introduce several corruptions at various severity levels to test the generalization ability of ZSL models. More details on the design process and the corruptions can be found in the paper.

Repository Contents and Requirements

This repository contains the code to reproduce our results and the necessary scripts to generate the corruption datasets. Follow the steps below before running the code.

  • You can use the provided environment yml (or pip requirements.txt) file to install dependencies.
  • Download the pretrained models here and place them under the /model folders.
  • Download the AWA2, SUN and CUB datasets. Please note that we operate on raw images, not on the features provided with the datasets.
  • Download the data split/attribute files here and extract the contents into the /data folder.
  • Change the necessary paths in the params.json file (a quick path sanity check is sketched right after this list).
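
After editing the paths, it can help to verify them before a full run. Below is a minimal sketch; the key names "dataset_path", "model_path" and "data_path" are illustrative placeholders, not necessarily the actual parameter names in params.json, which are documented in the code.

import json
from pathlib import Path

# Minimal sanity check for the paths set in params.json.
# NOTE: the key names below are illustrative placeholders; use the actual
# parameter names documented in the code.
with open("params.json") as f:
    params = json.load(f)

for key in ("dataset_path", "model_path", "data_path"):
    value = params.get(key)
    if value is not None and not Path(value).exists():
        print(f"Warning: {key} points to a missing path: {value}")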

The code in this repository lets you evaluate our provided models on AWA2-C, CUB-C and SUN-C. If you want to use the corruption datasets in your own project, you can take the generate_corruption.py file and use it directly; an illustrative corruption example is sketched below.
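
To give a feel for what the corruption pipeline does, here is a minimal, self-contained Gaussian-noise example in the ImageNet-C style. It is illustrative only: the actual corruption functions, severity scales and parameters used to build CUB-C, SUN-C and AWA2-C live in generate_corruption.py and may differ.

import numpy as np
from PIL import Image

# Illustrative Gaussian-noise corruption at severity 1-5.
def gaussian_noise(image: np.ndarray, severity: int = 1) -> np.ndarray:
    scale = [0.04, 0.06, 0.08, 0.09, 0.10][severity - 1]  # assumed scales, not the paper's values
    img = image.astype(np.float32) / 255.0
    noisy = img + np.random.normal(size=img.shape, scale=scale)
    return (np.clip(noisy, 0.0, 1.0) * 255).astype(np.uint8)

img = np.array(Image.open("example.jpg").convert("RGB"))  # any RGB image
corrupted = gaussian_noise(img, severity=3)
Image.fromarray(corrupted).save("example_gaussian_noise_s3.jpg")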

Additional Content

In addition to the paper, we release our supplementary file supp.pdf. It includes the following.

1. Average errors (ZSL and GZSL) for each dataset, per corruption category. These are for the ALE model and should be used to weight the errors when calculating mean corruption errors. For comparison, this essentially replaces the AlexNet error weighting used for the ImageNet-C dataset (see the sketch after this list).

2. Mean corruption errors (ZSL and GZSL) of the ALE model, for seen/unseen/harmonic and ZSL top-1 accuracies, on each dataset. These results include the MCE values for the original ALE and for ALE with the five defense methods used in our paper (i.e. total-variance minimization, spatial smoothing, label smoothing, AugMix and ANT). These values can be used as baseline scores when comparing the robustness of your method.
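
For reference, the ImageNet-C-style aggregation these tables are meant for looks roughly like the sketch below. Array shapes and variable names are illustrative; the exact protocol and error definitions are described in the paper and supp.pdf.

import numpy as np

# Per-corruption error: sum over severities, normalized by the ALE baseline
# errors released in supp.pdf. MCE is the mean of these normalized errors.
# Inputs are assumed to be [num_corruptions, num_severities] arrays of errors (%).
def mean_corruption_error(model_errors: np.ndarray, ale_errors: np.ndarray) -> float:
    ce = model_errors.sum(axis=1) / ale_errors.sum(axis=1)
    return float(ce.mean())

# Toy example with made-up numbers (15 corruptions x 5 severities):
rng = np.random.default_rng(0)
ale = rng.uniform(40, 80, size=(15, 5))
ours = ale * 0.9  # a hypothetical model 10% more robust than ALE
print(f"MCE: {mean_corruption_error(ours, ale):.3f}")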

Running the Code

After you have downloaded the necessary dataset files, you can run the code simply with

python run.py

To change the experimental parameters, refer to the params.json file. Details on the json parameters can be found in the code. By default, run.py looks for a params.json file in the same folder. If you want to run the code with another json file, use

python run.py --json_path path_to_json

Citation

If you find our code or paper useful in your research, please consider citing the following papers.

@inproceedings{yucel2020eccvw,
  title={A Deep Dive into Adversarial Robustness in Zero-Shot Learning},
  author={Yucel, Mehmet Kerim and Cinbis, Ramazan Gokberk and Duygulu, Pinar},
  booktitle={ECCV Workshop on Adversarial Robustness in the Real World},
  pages={3--21},
  year={2020},
  organization={Springer}
}

@article{yucel2022imavis,
  title={How robust are discriminatively trained zero-shot learning models?},
  journal={Image and Vision Computing},
  pages={104392},
  year={2022},
  issn={0262-8856},
  doi={https://doi.org/10.1016/j.imavis.2022.104392},
  url={https://www.sciencedirect.com/science/article/pii/S026288562200021X},
  author={Mehmet Kerim Yucel and Ramazan Gokberk Cinbis and Pinar Duygulu},
  keywords={Zero-shot learning, Robust generalization, Adversarial robustness},
}

Acknowledgements

This code base borrows several implementations from here and here, and it is a continuation of our previous work's repository.
