Code base for the paper "Scalable One-Pass Optimisation of High-Dimensional Weight-Update Hyperparameters by Implicit Differentiation"

Overview

This repository contains code for the paper Scalable One-Pass Optimisation of High-Dimensional Weight-Update Hyperparameters by Implicit Differentiation.

Installation

Our dependencies are fully specified in Pipfile, which can be supplied to pipenv to install the environment. One failsafe approach is to install pipenv in a fresh virtual environment, then run pipenv install in this directory. Note that the Pipfile specifies our Python 3.9 development environment; most experiments were run in an otherwise identical environment under Python 3.7 instead.
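Concretely, that failsafe route looks something like the following (assuming a Unix-like shell):

$ python -m venv venv
$ source venv/bin/activate
$ pip install pipenv
$ pipenv install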

Difficulties with CUDA versions meant we had to manually install PyTorch and Torchvision rather than use pipenv; the corresponding lines in Pipfile may need adjustment for your use case. Alternatively, use the list of dependencies as a guide to what to install yourself with pip, or use the full dump of our development environment in final_requirements.txt.
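If you opt for the full environment dump instead, restoring it is a single pip call (the PyTorch and Torchvision lines may still need CUDA-specific adjustment):

$ pip install -r final_requirements.txt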

Datasets are not bundled with the repository, but are expected to be found at the locations specified in datasets.py, preprocessed into single PyTorch tensors containing all the input and output data (generally data/<dataset>/data.pt and data/<dataset>/targets.pt).
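For illustration, a preprocessing step of roughly the following shape would produce files of the expected form for Fashion-MNIST. This is a hedged sketch using torchvision, not our actual preprocessing script; the data/fashion_mnist directory name and the normalisation are assumptions, and datasets.py defines the real locations:

import os
import torch
from torchvision import datasets

# Download Fashion-MNIST and save it as the two single-tensor files the
# training code expects: data.pt (inputs) and targets.pt (outputs).
train_set = datasets.FashionMNIST("./tmp", train=True, download=True)
os.makedirs("./data/fashion_mnist", exist_ok=True)      # assumed location
torch.save(train_set.data.float().unsqueeze(1) / 255.0,  # assumed scaling
           "./data/fashion_mnist/data.pt")
torch.save(train_set.targets, "./data/fashion_mnist/targets.pt")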

Configuration

Training code is controlled with YAML configuration files, as per the examples in configs/. Generally one file is required to specify the dataset, and a second to specify the algorithm, using the obvious naming convention. Brief help text is available on the command line, but the meaning of each option should be reasonably self-explanatory.

For Ours (WD+LR), use the file Ours_LR.yaml; for Ours (WD+LR+M), use the file Ours_LR_Momentum.yaml; for Ours (WD+HDLR+M), use the file Ours_HDLR_Momentum.yaml. For Long/Medium/Full Diff-through-Opt, we provide separate configuration files for the UCI cases and the Fashion-MNIST cases.
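For instance, pairing the Fashion-MNIST dataset configuration with Ours (WD+LR) gives the invocation below (see Running for the -c flag):

$ python train.py -c ./configs/fashion_mnist.yaml ./configs/Ours_LR.yaml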

We provide two additional helper configurations. Random_Validation.yaml copies Random.yaml, but uses the entire validation set to compute the validation loss at each logging step. This allows for stricter analysis of the best-performing run at particular time steps, for instance while constructing Random (3-batched). Random_Validation_BayesOpt.yaml only forces the use of the entire validation set for the very last validation loss computation, so that Bayesian Optimisation runs can access reliable performance metrics without adversely affecting runtime.

The configurations provided match those necessary to replicate the main experiments in our paper (in Section 4: Experiments). Other trials, such as those in the Appendix, will require these configurations to be modified as we describe in the paper. Note especially that our three short-horizon bias studies all require different modifications to the LongDiffThroughOpt_*.yaml configurations.

Running

Individual runs are commenced by executing train.py and passing the desired configuration files with the -c flag. For example, to run the default Fashion-MNIST experiments using Diff-through-Opt, use:

$ python train.py -c ./configs/fashion_mnist.yaml ./configs/DiffThroughOpt.yaml

Bayesian Optimisation runs are started in a similar way, but with a call to bayesopt.py rather than train.py.
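For example, a run using the helper configuration described above might be launched as follows, assuming bayesopt.py accepts the same -c flag as train.py:

$ python bayesopt.py -c ./configs/fashion_mnist.yaml ./configs/Random_Validation_BayesOpt.yaml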

For executing multiple runs in parallel, parallel_exec.py may be useful: modify the main function call at the bottom of the file as required, then call this file instead of train.py at the command line. The number of parallel workers is specified by num_workers. Any configurations passed at the command line are used as a base, to which modifications may be added by override_generator. The latter should be either a function which generates one override dictionary per call (in which case num_repetitions sets the number of overrides to generate), or a function which returns a generator over configurations (in which case set num_repetitions = None). Each configuration override is run once for each entry in algorithms, whose configurations are read automatically from the corresponding files and should not be passed explicitly at the command line. Finally, main_function may be used to switch between parallel calls to train.py and bayesopt.py as required.
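The sketch below illustrates the kind of edit this involves. It is hypothetical: run_parallel stands in for whatever the main call at the bottom of parallel_exec.py is actually named, the override key is invented, and train.main assumes train.py exposes its entry point as a function; consult the file itself for the real names and signature.

import train  # assumed entry point; parallel_exec.py imports the real one

def my_override_generator():
    # Dictionary form: one override per call, so num_repetitions below
    # sets how many overrides are generated in total.
    return {"learning_rate": 0.01}  # hypothetical option name

run_parallel(                           # hypothetical name for the main call
    main_function=train.main,           # or bayesopt's entry point
    num_workers=4,                      # number of parallel workers
    override_generator=my_override_generator,
    num_repetitions=10,                 # None if the generator form is used
    algorithms=["Ours_LR", "Ours_LR_Momentum"],  # configs read automatically
)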

For blank-slate replications, the most useful override generators will be natural_sgd_generator, which generates a full SGD initialisation in the ranges we use, and iteration_id, which should be used with Bayesian Optimisation runs to name each parallel run using a counter. Other generators may be useful if you wish to supplement existing results with additional algorithms etc.

The PennTreebank and CIFAR-10 experiments were executed on clusters running SLURM; the corresponding subfolders contain configuration scripts for these experiments, and submit.sh handles the actual job submission.
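Assuming a standard SLURM setup, dispatch then reduces to running that script (which may itself wrap sbatch) from the relevant subfolder:

$ bash submit.sh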

Analysis

By default, runs are logged in Tensorboard format to the ./runs directory, where Tensorboard may be used to inspect the results. If desired, a descriptive name can be appended to a particular execution using the -n switch on the command line. Runs can optionally be written to a dedicated subfolder specified with the -g switch, and the base folder for logging can be changed with the -l switch.
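Inspection then follows the standard Tensorboard workflow:

$ tensorboard --logdir ./runs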

If more precise analysis is desired, pass the directory containing the desired results to util.get_tags(), which will return a dictionary of the evolution of each logged scalar in the results. Note that this function uses Tensorboard calls which predate its --load_fast option, so it may take tens of minutes to return.
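A minimal usage sketch (the run directory name here is a placeholder):

from util import get_tags

# Returns a dictionary mapping each logged scalar to its evolution over
# the run; expect this to take tens of minutes on larger logs.
results = get_tags("./runs/my_experiment")  # placeholder directory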

This data dictionary can be passed to one of the more involved plotting routines in figures.py to produce specific plots. The script paper_plots.py generates all the plots we use in our paper, and may be inspected for details of any particular plot.
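Assuming the script takes no arguments, reproducing the paper's figures is then:

$ python paper_plots.py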
