PaRT: Parallel Learning for Robust and Transparent AI

Overview

This repository contains the code for PaRT, an algorithm for training a base network on multiple tasks in parallel. The diagram of PaRT is shown in the figure below.

Below, we provide details on the dependencies and the instructions for running the code for each experiment. We have prepared a script for each experiment to help the user get started smoothly.
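
For intuition, the snippet below is a minimal, simplified sketch of the general idea of parallel multi-task training: a shared base network with one classification head per task, optimized on interleaved batches. It is our own illustration under those assumptions, not the PaRT implementation; the architecture, task sizes, and hyperparameters are placeholders.

# Simplified illustration of training one base network on several tasks in parallel.
# NOT the PaRT implementation; the model, task sizes, and settings are placeholders.
import torch
import torch.nn as nn

base = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 256), nn.ReLU())  # shared backbone
heads = nn.ModuleList([nn.Linear(256, n_cls) for n_cls in (100, 10)])       # one head per task
optimizer = torch.optim.SGD(list(base.parameters()) + list(heads.parameters()), lr=0.01)
criterion = nn.CrossEntropyLoss()

def train_step(task_id, images, labels):
    """One optimization step on a batch drawn from task `task_id`."""
    optimizer.zero_grad()
    loss = criterion(heads[task_id](base(images)), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Interleave batches from the per-task data loaders so that all tasks
# are learned in parallel, e.g.:
# for task_id, (images, labels) in interleaved_batches(task_loaders):  # hypothetical helper
#     train_step(task_id, images, labels)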

Dependencies

  • python >= 3.8
  • pytorch >= 1.7
  • scikit-learn
  • torchvision
  • tensorboard
  • matplotlib
  • pillow
  • psutil
  • scipy
  • numpy
  • tqdm

SETUP ENVIRONMENT

To set up the conda environment and create the required directories, go to the scripts directory and run the following commands in the terminal:

conda init bash
bash -i setupEnv.sh

Check that the final output of these commands is:

Installed torch version {---}
Virtual environment was made successfully
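
If you would like to double-check the environment by hand, a quick check such as the following (our own snippet, not part of the provided scripts) confirms that the core dependencies are importable and prints the installed torch version:

# Manual sanity check for the environment (not part of the provided scripts).
import torch
import torchvision
import sklearn

print(f"Installed torch version {torch.__version__}")
print(f"torchvision version {torchvision.__version__}")
print(f"CUDA available: {torch.cuda.is_available()}")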

CIFAR 100 EXPERIMENTS

Instructions to run the code for the CIFAR100 experiments:

--------------------- BASELINE EXPERIMENTS ---------------------

To run the baseline experiments for the first seed, go to the scripts directory and run the following command in the terminal:

bash -i runCIFAR100Baseline.sh ../../scripts/test_case0_cifar100_baseline.json

To run the experiment for other seeds, simply change the value of test_case in test_case0_cifar100_baseline.json to 1, 2, 3, or 4.

--------------------- PARALLEL EXPERIMENTS ---------------------

To run the parallel experiments for the first seed, go to the scripts directory and run the following command in the terminal:

bash -i runCIFAR100Parallel.sh ../../scripts/test_case0_cifar100_parallel.json

To run the experiment for other seeds, simply change the value of test_case in test_case0_cifar100_parallel.json to 1, 2, 3, or 4.
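
To avoid editing the JSON file by hand for every seed, a small helper such as the one below can loop over all five seeds. This is our own sketch: it assumes only that the configuration file contains the test_case field described above and that it is run from the scripts directory, as in the commands shown here. The same pattern applies to the other experiments in this README by swapping in the corresponding script and configuration file.

# Sketch: run one experiment for all five seeds by rewriting `test_case` in the config.
# Assumes only the `test_case` field described above; other fields are left untouched.
import json
import subprocess

config_path = "../../scripts/test_case0_cifar100_baseline.json"
script = "runCIFAR100Baseline.sh"

for seed in range(5):  # test_case values 0, 1, 2, 3, 4
    with open(config_path) as f:
        config = json.load(f)
    config["test_case"] = seed
    with open(config_path, "w") as f:
        json.dump(config, f, indent=2)
    subprocess.run(["bash", "-i", script, config_path], check=True)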

CIFAR 10 AND CIFAR 100 EXPERIMENTS

Instructions to run the code for the CIFAR10 and CIFAR100 experiments:

--------------------- BASELINE EXPERIMENTS ---------------------

To run the baseline experiments for the first seed, go to the scripts directory and run the following command in the terminal:

bash -i runCIFAR10_100Baseline.sh ../../scripts/test_case0_cifar10_100_baseline.json

To run the experiment for other seeds, simply change the value of test_case in test_case0_cifar10_100_baseline.json to 1, 2, 3, or 4.

--------------------- PARALLEL EXPERIMENTS ---------------------

To run the parallel experiments for the first seed, go to the scripts directory and run the following command in the terminal:

bash -i runCIFAR10_100Parallel.sh ../../scripts/test_case0_cifar10_100_parallel.json

To run the experiment for other seeds, simply change the value of test_case in test_case0_cifar10_100_parallel.json to 1, 2, 3, or 4.

FIVETASKS EXPERIMENTS

The dataset for this experiment can be downloaded from the link provided on the CPG GitHub page, or here.

Instructions to run the code for the FiveTasks experiments:

--------------------- BASELINE EXPERIMENTS ---------------------

To run the baseline experiments for the first seed, go to the scripts directory and run the following command in the terminal:

bash -i run5TasksBaseline.sh ../../scripts/test_case0_5tasks_baseline.json

To run the experiment for other seeds, simply change the value of test_case in test_case0_5tasks_baseline.json to 1, 2, 3, or 4.

--------------------- PARALLEL EXPERIMENTS ---------------------

To run the parallel experiments for the first seed, go to the scripts directory and run the following command in the terminal:

bash -i run5TasksParallel.sh ../../scripts/test_case0_5tasks_parallel.json

To run the experiment for other seeds, simply change the value of test_case in test_case0_5tasks_parallel.json to 1, 2, 3, or 4.

Paper

Please cite our paper:

Paknezhad, M., Rengarajan, H., Yuan, C., Suresh, S., Gupta, M., Ramasamy, S., Lee, H. K., PaRT: Parallel Learning Towards Robust and Transparent AI, arXiv:2201.09534 (2022).
