Kernelized-HRM

Jiashuo Liu, Zheyuan Hu

The code for our NeurIPS 2021 paper "Kernelized Heterogeneous Risk Minimization"[1]. This repo contains the code for our Classification with Spurious Correlation and Regression with Selection Bias simulated experiments, including the data generation process, the whole Kernelized-HRM algorithm, and the testing process.

Details

There are two files, KernelHRM_sim1.py and KernelHRM_sim2.py, which contain the code for the classification simulation experiment and the regression simulation experiment, respectively. The main functions are:

  • generate_data_list: generate data according to the given parameter args.r_list.

  • generate_test_data_list: generate the test data for the Selection Bias experiment, where args.r_list is pre-defined as [-2.9,-2.7,...,-1.9].

  • main_KernelHRM: the whole framework for our Kernelized-HRM algorithm.
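
For intuition, below is a minimal sketch of what a selection-bias data generator in this spirit might look like. The function names mirror the ones above, but the exact generation mechanism, dimensions, and noise levels are illustrative assumptions, not the repo's actual code.

```python
import numpy as np

def generate_env_data(n, r, dim_stable=5, dim_spurious=5, seed=0):
    """Illustrative generator for one environment (an assumption, not the repo's code).

    Stable features X_s determine the target through an invariant mechanism;
    spurious features X_v are correlated with the target, with the sign of r
    setting the direction and |r| the strength of the correlation.
    """
    rng = np.random.default_rng(seed)
    theta_s = np.ones(dim_stable)                     # true stable coefficients
    X_s = rng.normal(size=(n, dim_stable))
    y = X_s @ theta_s + 0.3 * rng.normal(size=n)      # invariant mechanism
    strength = min(abs(r), 3.0) / 3.0                 # map |r| into [0, 1]
    noise = rng.normal(size=(n, dim_spurious))
    X_v = np.sign(r) * strength * y[:, None] + (1.0 - strength) * noise
    return np.concatenate([X_s, X_v], axis=1), y

def generate_data_list(r_list, n_per_env=1000):
    """Generate one dataset per value in r_list (cf. args.r_list and args.num_list)."""
    return [generate_env_data(n_per_env, r, seed=i) for i, r in enumerate(r_list)]
```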

Hyper-parameters

There are many hyper-parameters in the whole framework; they differ across tasks and need to be tuned carefully. Note that although we provide the hyper-parameters for the simulated experiments, the results may not be exactly the same as ours, possibly due to randomness or other factors.

Generally, the following hyper-parameters need to be tuned carefully:

  • k: controls the dimension of reduced neural tangent features
  • whole_epoch: controls the overall number of iterations between the frontend and the backend
  • epochs: controls the number of epochs of optimizing the invariant learning module in each iteration
  • IRM_lam: controls the strength of the regularizer for the invariant learning
  • lr: learning rate
  • cluster_num: controls the number of clusters

Further, for the experimental settings, the following parameters need to be specified:

  • r_list: controls the strength of spurious correlations
  • scramble: whether to mix (scramble) the raw features, similar to IRM[2]
  • num_list: controls the number of data points from each environment
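
To make the configuration surface concrete, here is a sketch of how these options might be exposed on the command line. The flag names follow the lists above, but the default values are placeholders rather than the tuned settings (those are in reproduce.sh).

```python
import argparse

def get_args():
    """Illustrative argument parser; defaults are placeholders, not tuned values."""
    parser = argparse.ArgumentParser(description="Kernelized-HRM (sketch)")
    # algorithm hyper-parameters
    parser.add_argument("--k", type=int, default=30,
                        help="dimension of the reduced neural tangent features")
    parser.add_argument("--whole_epoch", type=int, default=5,
                        help="number of outer iterations between the frontend and the backend")
    parser.add_argument("--epochs", type=int, default=100,
                        help="epochs for the invariant learning module in each iteration")
    parser.add_argument("--IRM_lam", type=float, default=1.0,
                        help="strength of the regularizer for invariant learning")
    parser.add_argument("--lr", type=float, default=1e-3,
                        help="learning rate")
    parser.add_argument("--cluster_num", type=int, default=2,
                        help="number of clusters (latent environments)")
    # experimental settings
    parser.add_argument("--r_list", type=float, nargs="+", default=[1.5, 1.9, 2.3],
                        help="strength of spurious correlations per environment")
    parser.add_argument("--scramble", type=int, default=0,
                        help="whether to mix the raw features (as in IRM)")
    parser.add_argument("--num_list", type=int, nargs="+", default=[1000, 1000, 1000],
                        help="number of data points from each environment")
    return parser.parse_args()
```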

The optimal hyper-parameters for our simulation experiments are provided in the reproduce.sh file.

Others

Similar to HRM[3], we view the proposed Kernelized-HRM as a framework: it converts non-linear, complicated data into linear features via the neural tangent kernel, and it consists of a clustering module and an invariant prediction module. In practice, each module can be replaced with any model that serves the same purpose.
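
To illustrate how the pieces fit together, here is a heavily simplified sketch of one possible instantiation of that loop: random Fourier features stand in for the reduced neural tangent features, KMeans plays the role of the clustering (frontend) module, and an IRMv1-style penalty plays the role of the invariant prediction (backend) module. Everything here (feature map, clustering input, penalty form) is an assumption chosen for brevity, not the repo's implementation.

```python
import numpy as np
import torch
from sklearn.cluster import KMeans

def random_features(X, k, seed=0):
    """Stand-in for the reduced neural tangent features (random Fourier features)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], k))
    b = rng.uniform(0, 2 * np.pi, size=k)
    return np.sqrt(2.0 / k) * np.cos(X @ W + b)

def irm_penalty(preds, y):
    """IRMv1-style penalty: squared gradient of the risk w.r.t. a dummy scale."""
    scale = torch.ones(1, requires_grad=True)
    loss = torch.nn.functional.mse_loss(preds * scale, y)
    grad = torch.autograd.grad(loss, scale, create_graph=True)[0]
    return (grad ** 2).sum()

def kernel_hrm_sketch(X, y, k=50, cluster_num=2, whole_epoch=5,
                      epochs=200, lr=1e-2, IRM_lam=1.0):
    Phi = torch.tensor(random_features(X, k), dtype=torch.float32)
    y_t = torch.tensor(y, dtype=torch.float32)
    w = torch.zeros(k, requires_grad=True)        # linear predictor on the features
    env_labels = np.zeros(len(y), dtype=int)
    for _ in range(whole_epoch):
        # frontend: cluster residual-augmented features into latent environments
        with torch.no_grad():
            resid = (Phi @ w - y_t).numpy()[:, None]
        env_labels = KMeans(cluster_num, n_init=10).fit_predict(
            np.concatenate([Phi.numpy(), resid], axis=1))
        # backend: invariant learning across the inferred environments
        opt = torch.optim.Adam([w], lr=lr)
        for _ in range(epochs):
            opt.zero_grad()
            risk, penalty = 0.0, 0.0
            for e in range(cluster_num):
                idx = torch.tensor(env_labels == e)
                if idx.sum() == 0:
                    continue
                preds = Phi[idx] @ w
                risk = risk + torch.nn.functional.mse_loss(preds, y_t[idx])
                penalty = penalty + irm_penalty(preds, y_t[idx])
            (risk + IRM_lam * penalty).backward()
            opt.step()
    return w, env_labels
```

The point of the sketch is only the alternating structure: the frontend infers latent environments, the backend learns an invariant predictor across them, and the two are repeated for whole_epoch rounds.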

Though we hate to mention it, our method has the following shortcomings:

  • Just like the original HRM[3], the convergence of the frontend module cannot be guaranteed, and we notice that in some cases the next iteration does not improve the current results or even hurts them.
  • Hyper-parameters for different tasks may be quite different and need to be tuned carefully.
  • Whether this algorithm can be extended to more complicated image data, such as PACS and NICO, remains to be seen. (Maybe we will have a try later?)

References

[1] Jiashuo Liu, Zheyuan Hu, Peng Cui, Bo Li, Zheyan Shen. Kernelized Heterogeneous Risk Minimization. In NeurIPS 2021.

[2] Martin Arjovsky, Léon Bottou, Ishaan Gulrajani, David Lopez-Paz. Invariant Risk Minimization. arXiv preprint arXiv:1907.02893, 2019.

[3] Jiashuo Liu, Zheyuan Hu, Peng Cui, Bo Li, Zheyan Shen. Heterogeneous Risk Minimization. In ICML 2021.
