Overview

modelvshuman: Does your model generalise better than humans?

modelvshuman is a Python library to benchmark the gap between human and machine vision. Using this library, both PyTorch and TensorFlow models can be evaluated on 17 out-of-distribution datasets with high-quality human comparison data.

πŸ† Benchmark

The top-10 models are listed here; training dataset size is indicated in brackets. Additionally, standard ResNet-50 is included as the last entry of the table for comparison. Model ranks are calculated across the full range of 52 models that we tested. If your model scores better than some (or even all) of the models here, please open a pull request and we'll be happy to include it here!

Most human-like behaviour

| winner | model | accuracy difference ↓ | observed consistency ↑ | error consistency ↑ | mean rank ↓ |
| --- | --- | --- | --- | --- | --- |
| πŸ₯‡ | CLIP: ViT-B (400M) | .023 | .758 | .281 | 1 |
| πŸ₯ˆ | SWSL: ResNeXt-101 (940M) | .028 | .752 | .237 | 3.67 |
| πŸ₯‰ | BiT-M: ResNet-101x1 (14M) | .034 | .733 | .252 | 4 |
| πŸ‘ | BiT-M: ResNet-152x2 (14M) | .035 | .737 | .243 | 4.67 |
| πŸ‘ | ViT-L (1M) | .033 | .738 | .222 | 6.67 |
| πŸ‘ | BiT-M: ResNet-152x4 (14M) | .035 | .732 | .233 | 7.33 |
| πŸ‘ | BiT-M: ResNet-50x1 (14M) | .042 | .718 | .240 | 9 |
| πŸ‘ | BiT-M: ResNet-50x3 (14M) | .040 | .726 | .228 | 9 |
| πŸ‘ | ViT-L (14M) | .035 | .744 | .206 | 9.67 |
| πŸ‘ | SWSL: ResNet-50 (940M) | .041 | .727 | .211 | 11.33 |
| ... | standard ResNet-50 (1M) | .087 | .665 | .208 | 29 |

Highest out-of-distribution robustness

| winner | model | OOD accuracy ↑ | rank ↓ |
| --- | --- | --- | --- |
| πŸ₯‡ | ViT-L (14M) | .733 | 1 |
| πŸ₯ˆ | CLIP: ViT-B (400M) | .708 | 2 |
| πŸ₯‰ | ViT-L (1M) | .706 | 3 |
| πŸ‘ | SWSL: ResNeXt-101 (940M) | .698 | 4 |
| πŸ‘ | BiT-M: ResNet-152x2 (14M) | .694 | 5 |
| πŸ‘ | BiT-M: ResNet-152x4 (14M) | .688 | 6 |
| πŸ‘ | BiT-M: ResNet-101x3 (14M) | .682 | 7 |
| πŸ‘ | BiT-M: ResNet-50x3 (14M) | .679 | 8 |
| πŸ‘ | SimCLR: ResNet-50x4 (1M) | .677 | 9 |
| πŸ‘ | SWSL: ResNet-50 (940M) | .677 | 10 |
| ... | standard ResNet-50 (1M) | .559 | 31 |

πŸ”§ Installation

Simply clone the repository to a location of your choice and follow these steps:

  1. Set the repository home path by running the following from the command line:

    export MODELVSHUMANDIR=/absolute/path/to/this/repository/
    
  2. Install the package (remove the -e option if you don't intend to add your own model or make any other changes):

    pip install -e .
    
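To verify the installation, you can, for example, print the registered PyTorch models from the command line (this uses the models.list_models helper described in the model zoo section below):

    python -c "from modelvshuman import models; print(models.list_models('pytorch'))"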

πŸ”¬ User experience

Simply edit examples/evaluate.py as desired. This will evaluate a list of models on out-of-distribution datasets and generate plots. If you then compile latex-report/report.tex, all the plots will be conveniently included in a single PDF report.
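
For orientation, a trimmed-down version of such a script might look like the sketch below; Evaluate and the constants module are assumptions based on the structure of the shipped example script, so consult examples/evaluate.py itself for the authoritative version.

    # Hedged sketch of a minimal examples/evaluate.py; names are assumptions.
    from modelvshuman import Evaluate
    from modelvshuman import constants as c

    def run_evaluation():
        models = ["resnet50", "bagnet33", "simclr_resnet50x1"]
        datasets = c.DEFAULT_DATASETS  # or an explicit list such as ["sketch"]
        Evaluate()(models, datasets, batch_size=16, print_predictions=True)

    if __name__ == "__main__":
        run_evaluation()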

🐫 Model zoo

The currently implemented models can be listed via the model registry (see "How to list all available models" below).

If you add or implement your own model, please make sure to compute its ImageNet accuracy as a sanity check.
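
As a generic example, a plain top-1 accuracy check for a standard PyTorch model could look like the following sketch (ordinary torchvision code, not a toolbox API; the validation-set path is a placeholder):

    import torch
    import torchvision
    import torchvision.transforms as T

    # Placeholder path to an ImageNet validation set in ImageFolder layout.
    val = torchvision.datasets.ImageFolder(
        "/path/to/imagenet/val",
        T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                   T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])]))
    loader = torch.utils.data.DataLoader(val, batch_size=64, num_workers=4)

    model = torchvision.models.resnet50(pretrained=True).eval()
    correct = total = 0
    with torch.no_grad():
        for images, targets in loader:
            preds = model(images).argmax(dim=1)
            correct += (preds == targets).sum().item()
            total += targets.numel()
    print(f"ImageNet top-1 accuracy: {correct / total:.3f}")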

How to load a model

If you just want to load a model from the model zoo, this is what you can do:

    # loading a PyTorch model from the zoo
    from modelvshuman.models.pytorch.model_zoo import InfoMin
    model = InfoMin("InfoMin")

    # loading a TensorFlow model from the zoo
    from modelvshuman.models.tensorflow.model_zoo import efficientnet_b0
    model = efficientnet_b0("efficientnet_b0")

How to list all available models

All implemented models are registered by the model registry, which can then be used to list all available models of a certain framework with the following method:

    from modelvshuman import models
    
    print(models.list_models("pytorch"))
    print(models.list_models("tensorflow"))

How to add a new model

Adding a new model is possible for standard PyTorch and TensorFlow models. Depending on the framework, open modelvshuman/models/pytorch/model_zoo.py or modelvshuman/models/tensorflow/model_zoo.py. There, you can add your own model with a few lines of code, similar to how you would usually load it. If your model has a custom model definition, create a new subdirectory modelvshuman/models/<framework>/my_fancy_model/ containing fancy_model.py, which you can then import from model_zoo.py via from .my_fancy_model import fancy_model.
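
As a rough illustration, a new PyTorch entry might look like the sketch below; register_model and PytorchModel are assumptions based on the pattern of the existing zoo entries, so check model_zoo.py for the exact names.

    # Hypothetical new entry in modelvshuman/models/pytorch/model_zoo.py;
    # the decorator and wrapper are assumed to match the existing entries.
    import torchvision.models

    @register_model("pytorch")
    def my_resnet18(model_name, *args):
        model = torchvision.models.resnet18(pretrained=True)
        return PytorchModel(model, model_name, *args)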

πŸ“ Datasets

In total, 17 datasets with human comparison data collected under highly controlled laboratory conditions are available.

Twelve datasets correspond to parametric or binary image distortions. Top row: colour/grayscale, contrast, high-pass, low-pass (blurring), phase noise, power equalisation. Bottom row: opponent colour, rotation, Eidolon I, II and III, uniform noise.

[Figure: noise stimuli]

The remaining five datasets correspond to the following nonparametric image manipulations: sketch, stylized, edge, silhouette, texture-shape cue conflict.

[Figure: nonparametric stimuli]

How to load a dataset

Similarly, if you're interested in just loading a dataset, you can do this via:

    from modelvshuman.datasets import sketch
    dataset = sketch(batch_size=16, num_workers=4)
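
The dataset can then be iterated over. As a hedged sketch, assuming the returned object exposes a DataLoader-style loader attribute yielding (images, targets, paths) batches (check the dataset class for the exact interface):

    # Assumption: batches come from a DataLoader-like `loader` attribute.
    for images, targets, paths in dataset.loader:
        print(images.shape)  # e.g. torch.Size([16, 3, 224, 224])
        break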
How to list all available datasets
    from modelvshuman import datasets
    
    print(list(datasets.list_datasets().keys()))

πŸ’³ Credit

We collected the psychophysical data ourselves, but we used existing image dataset sources. Twelve datasets were obtained from Generalisation in humans and deep neural networks. Three datasets were obtained from ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. Additionally, we used one dataset from Learning Robust Global Representations by Penalizing Local Predictive Power (sketch images from ImageNet-Sketch) and one dataset from ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness (stylized images from Stylized-ImageNet).

We thank all model authors and repository maintainers for providing the models described above.

Comments
  • Question about self-supervised models

    Hello, thanks for the great toolbox!

    I'm a little confused about the results for self-supervised models such as SimCLR. They don't come with a class-specific classifier for ImageNet-1k, so how do you load the classifier weights? Do you load a separately trained classifier (as in the linear-probing protocol) or a fully fine-tuned network (as in the end-to-end fine-tuning protocol)?

    And another question: if I just want to test the shape & texture accuracy mentioned in Intriguing Properties of Vision Transformers, which dataset type should I choose?

    Thank you very much!

    opened by pengzhiliang 5
  • dataset path?

    Hi --- first of all, thank you a lot for this useful dataset and API!

    I'm trying to load the datasets using the following code, but got an error message saying the sketch dataset path was not found: model-vs-human/datasets/sketch/dnn/

    Does this mean that I should download the sketch image dataset myself from the original source and put it under this path? A little more documentation on how the image datasets should be structured would be very useful!

    from modelvshuman.datasets import sketch
    dataset = sketch(batch_size=16, num_workers=4)

    Thank you!

    opened by ahnchive 2
  • Question Regarding Human Raw Dataset

    Hi! Thank you for providing a valuable dataset.

    As I dug into the raw data containing the human annotations, some questions came to mind. Below is how I analyzed the raw dataset for the contrast experiment.

    For an image named 0580_cop_dnn_c05_bicycle_10_n03792782_10129.png, I extracted the rows in which image_id matches 'c05_bicycle_10_n03792782_10129.png' and got the following 4 rows. Here, human_annotation is a simple concatenation of all CSV files in the contrast experiment. [screenshot of the 4 rows]

    [visualization of 0580_cop_dnn_c05_bicycle_10_n03792782_10129.png]

    Although the image is not very clear, I do not fully agree with the human predictions. (They predicted 'clock', 'oven', 'bear', 'keyboard' -- it is very clear that the figure is not a keyboard.) Is there anything wrong or missing in my analysis?

    Thanks,

    opened by jiyounglee-0523 2
  • Request for data to reproduce figures

    Very interesting work, and thanks very much for releasing it publicly. We are working on extending some of your results/studies; could you please provide more information related to Figure 2?

    Figure 2 compares several models, but they are not all labeled. I am interested in the accuracy-distortion tradeoff for each model. The figure shows this information only at a coarse level, e.g. comparing all self-supervised models, adversarially trained models, etc. It would be perfect if you could provide each model name (corresponding to your model zoo) together with its performance at the various distortion levels for the 12 distortion types considered.

    Thanks again, looking forward to hearing from you!

    opened by vihari 2
  • not able to reproduce the results

    First of all, thanks for sharing the interesting paper and code!

    I followed the installation steps, but running python examples/evaluate.py (without editing anything) resulted in the following strange error. Could the authors give some insights into the reasons?

    Plotting accuracy for dataset colour
    The following model(s) were not found: alexnet
    List of possible models in this dataset: ['bagnet33' 'resnet50' 'simclr_resnet50x1']
    The following model(s) were not found: subject-*
    List of possible models in this dataset: ['bagnet33' 'resnet50' 'simclr_resnet50x1']
    Traceback (most recent call last):
      File "examples/evaluate.py", line 28, in <module>
        run_plotting()
      File "examples/evaluate.py", line 18, in run_plotting
        figure_directory_name = figure_dirname)
      File "/home/eric/model_vs_human/modelvshuman/plotting/plot.py", line 108, in plot
        result_dir=result_dir)
      File "/home/eric/model_vs_human/modelvshuman/plotting/plot.py", line 744, in plot_accuracy
        result_dir=result_dir, plot_type="accuracy")
      File "/home/eric/model_vs_human/modelvshuman/plotting/plot.py", line 772, in plot_general_analyses
        experiment=e)
      File "/home/eric/model_vs_human/modelvshuman/plotting/analyses.py", line 254, in get_result_df
        r = self.analysis(subdat)
      File "/home/eric/model_vs_human/modelvshuman/plotting/analyses.py", line 284, in analysis
        self._check_dataframe(df)
      File "/home/eric/model_vs_human/modelvshuman/plotting/analyses.py", line 24, in _check_dataframe
        assert len(df) > 0, "empty dataframe"
    AssertionError: empty dataframe

    opened by largenn 2
  • Question about shape bias

    Could you please provide the formula used to compute the shape bias? I can successfully run this repo with my own model, but I am confused about how the shape bias is obtained. I would be very grateful if you could elaborate. Thanks!

    opened by YuanLiuuuuuu 1
  • Difficulties loading adversarially trained models

    I didn't succeed in loading the adversarially trained models. run_evaluation results in the following error:

    HTTP Error 403: Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.

    Any idea how to fix this?

    opened by lukasShuber 1
  • Update license info

    In https://github.com/bethgelab/model-vs-human/blob/master/setup.cfg#L18 we state LGPLv3, but we should instead refer to https://github.com/bethgelab/model-vs-human/tree/master/licenses

    documentation 
    opened by rgeirhos 1
  • Clip Improvements

    Two changes:

    1. Explicitly cast the images to the device (the previous code caused problems when training on Google Colab);
    2. Compute the zero-shot weights needed by CLIP only once and recycle them for subsequent batches. Since the classes remain fixed, computing them for every batch is unnecessary and greatly slows down experiments.
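
    A minimal sketch of the caching idea, using OpenAI's clip package directly rather than the toolbox's own CLIP wrapper (the class and prompt template are illustrative assumptions):

    import clip
    import torch

    class CachedZeroShotCLIP:
        """Compute the zero-shot text weights once, reuse them per batch."""

        def __init__(self, class_names, device="cuda"):
            self.device = device
            self.model, _ = clip.load("ViT-B/32", device=device)
            prompts = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)
            with torch.no_grad():
                w = self.model.encode_text(prompts)
            self.weights = w / w.norm(dim=-1, keepdim=True)  # cached once

        @torch.no_grad()
        def __call__(self, images):
            # explicitly cast images to the device (change 1)
            feats = self.model.encode_image(images.to(self.device))
            feats = feats / feats.norm(dim=-1, keepdim=True)
            return 100.0 * feats @ self.weights.T
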
    opened by yurigalindo 1
  • Feature request: Machine readable results, e.g. CSV

    Currently, the toolbox saves LaTeX tables and plots with the resulting accuracies and error consistencies. For custom plotting routines and the like, it would be great to additionally have these numbers in an easy-to-parse format, e.g. CSV or JSON.

    enhancement 
    opened by dekuenstle 0
  • Feature request: Simpler loading of custom models

    Using your toolbox with the built-in models is straightforward, but we would like to compare some custom PyTorch models. It would be great to have a routine for adding such models (i.e. subclasses of nn.Module) to the toolbox registry from your own script. If this is already possible, it would be great if you could share an example.

    Currently, we add the model inside the toolbox's files, which makes extensions complicated and redundant (e.g. the name of the model in the path, the function name, the plotting routine).

    Thanks, David

    enhancement 
    opened by dekuenstle 2
  • BiT models via timm?

    I had difficulties obtaining the BiT models via pytorch image models (timm). I then used, e.g.,

    import timm
    m = timm.create_model('resnetv2_152x12_bitm', pretrained=True)

    in the PyTorch model_zoo.py. This worked perfectly.

    opened by lukasShuber 0