InvariantAncestrySearch

This repository contains Python code necessary to replicate the experiments performed in our paper "Invariant Ancestry Search".

Structure of the repository

The repository is structured in the following manner:

  • In the folder /InvariantAncestrySearch there are two important files:
    • utils.py contains the class DataGenerator, which we use to sample SCMs and to sample data from those SCMs. This can, for instance, be done as follows:
    from InvariantAncestrySearch import DataGenerator
    
    SCM1 = DataGenerator(d = 10, N_interventions = 5, p_conn = 2 / 10, InterventionStrength = 1) # This is an SCM generator
    SCM1.SampleDAG()  # Generates a DAG with d = 10 predictor nodes, 5 interventions and roughly d + 1 edges in the (d + 1)-node subgraph over (X, Y)
    SCM1.BuildCoefMatrix()  # Samples coefficients for the linear assignments -- interventions have strength 1
    data1 = SCM1.MakeData(100)  # Generates 100 samples from SCM1
    
    SCM2 = DataGenerator(d = 6, N_interventions = 1, p_conn = 2 / 6, InterventionStrength = 0.5) # And this is also an SCM generator
    SCM2.SampleDAG()  # Generates a DAG with d = 6 predictor nodes, 1 intervention and roughly d + 1 edges in the (d + 1)-node subgraph over (X, Y)
    SCM2.BuildCoefMatrix()  # Samples coefficients for the linear assignments -- interventions have strength 0.5
    data2 = SCM2.MakeData(1000)  # Generates 1000 samples from SCM2
    
    • IASfunctions.py includes all relevant functions used in the scripts, e.g., functions to test for minimal invariance or to compute the set of all minimally invariant sets. All functions are documented.
  • In the folder /simulation_scripts there are scripts to reproduce all experiments performed in the paper. These also contain documentation. The scripts run out of the box if all necessary libraries are installed, and they do not need to be run in any particular order.
  • In the folder /output/ there are database files saved from running the scripts in /simulation_scripts/. These contain the data used to make all figures in the paper and can be opened with the Python standard-library module shelve (see the sketch after this list).
  • The file requirements.txt lists the Python modules required to run the code. Note that an R installation is also required, as well as the R package dagitty.
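
The database files in /output/ can be inspected with Python's standard-library shelve module. The sketch below is only a pointer; the file name is a placeholder, so substitute one of the actual database files saved in /output/:

    import shelve

    # "output/example_results" is a placeholder file name -- replace it with one
    # of the database files produced by the scripts in /simulation_scripts/.
    with shelve.open("output/example_results") as db:
        for key in db:                 # iterate over the stored keys
            print(key, type(db[key]))  # show each stored object's type

Each key maps to a stored Python object holding the simulation results used for the figures in the paper.
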
Owner
Phillip Bredahl Mogensen
I'm Phillip Bredahl Mogensen, a Ph.D. student in statistics at the University of Copenhagen