Source code for "Finding Valid Adjustments under Non-ignorability with Minimal DAG Knowledge" by A. Shah, K. Shanmugam, K. Ahuja

Overview

Source code for "Finding Valid Adjustments under Non-ignorability with Minimal DAG Knowledge"

Reference: Abhin Shah, Karthikeyan Shanmugam, Kartik Ahuja, "Finding Valid Adjustments under Non-ignorability with Minimal DAG Knowledge," The 25th International Conference on Artificial Intelligence and Statistics (AISTATS), 2022

Contact: [email protected]

arXiv: https://arxiv.org/pdf/2106.11560.pdf

Dependencies:

To run the code, the following libraries must be installed (a sketch of the installation commands follows the list):

  1. Python --- causallib, scikit-learn (sklearn), scipy, pandas, numpy, matplotlib, pyreadr, rpy2, torch; the remaining imports (multiprocessing, contextlib, functools, itertools, random, argparse, time, pickle) are part of the Python standard library and need no installation

  2. R --- RCIT
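
If these packages are not already available, something like the commands below should install them. This is a hedged sketch, not part of the original instructions: it assumes the Python packages map to their usual PyPI names (scikit-learn for sklearn) and that RCIT is installed from the ericstrobl/RCIT GitHub repository via devtools, neither of which is specified in this README.

$ pip3 install causallib scikit-learn scipy pandas numpy matplotlib pyreadr rpy2 torch
$ R -e 'install.packages("devtools"); devtools::install_github("ericstrobl/RCIT")'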

Command inputs (an example invocation follows the list):

  • nr: number of repetitions (default = 100)
  • no: number of observations (default = 50000)
  • use_t_in_e: indicator for whether t should be used to generate e (default = 1)
  • ne: number of environments (default = 3)
  • number_IRM_iterations: number of iterations of IRM (default = 15000)
  • nrd: number of features for sparse subset search (default = 5)
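
For example, a run that passes all of the defaults explicitly would look like the following. This is illustrative only: the script name is taken from the section below, and not every script necessarily accepts every flag.

$ python3 -W ignore synthetic_theory.py --nr 100 --no 50000 --use_t_in_e 1 --ne 3 --number_IRM_iterations 15000 --nrd 5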

Reproducing the figures and tables:

  1. To reproduce Figure 3a and Figure 10a, run the following three commands:
$ mkdir synthetic_theory
$ python3 -W ignore synthetic_theory.py --nr 100
$ python3 plot_synthetic_theory.py --nr 100
  2. To reproduce Figure 3b and Figure 10b, run the following three commands:
$ mkdir synthetic_algorithms
$ python3 -W ignore synthetic_algorithms.py --nr 100
$ python3 plot_synthetic_algorithms.py --nr 100
  3. To reproduce Figure 3c, run the following three commands:
$ mkdir synthetic_high_dimension
$ python3 -W ignore synthetic_high_dimension.py --nr 100
$ python3 plot_synthetic_high_dimension.py --nr 100
  4. To reproduce Table 1, run the following two commands:
$ mkdir syn-entner
$ python3 -W ignore syn-entner.py --nr 100
  5. To reproduce Table 2, run the following two commands:
$ mkdir syn-cheng
$ python3 -W ignore syn-cheng.py --nr 100
  6. To reproduce Figure 4, Figure 12a and Figure 12b, run the following three commands:
$ mkdir ihdp
$ python3 -W ignore ihdp.py --nr 100
$ python3 plot_ihdp.py --nr 100
  7. To reproduce Figure 5, run the following three commands:
$ mkdir cattaneo
$ python3 -W ignore cattaneo.py --nr 100
$ python3 plot_cattaneo.py --nr 100
  8. To reproduce Figure 11a and Figure 11c, run the following three commands:
$ mkdir synthetic_theory
$ python3 -W ignore synthetic_theory.py --nr 100 --use_t_in_e 0
$ python3 plot_synthetic_theory.py --nr 100 --use_t_in_e 0
  9. To reproduce Figure 11b and Figure 11d, run the following three commands:
$ mkdir synthetic_algorithms
$ python3 -W ignore synthetic_algorithms.py --nr 100 --use_t_in_e 0
$ python3 plot_synthetic_algorithms.py --nr 100 --use_t_in_e 0