Application of the L2HMC algorithm to simulations in lattice QCD.


📊 Slides

📒 Example Notebook


Overview

The L2HMC algorithm aims to improve upon HMC by optimizing a carefully chosen loss function which is designed to minimize autocorrelations within the Markov Chain, thereby improving the efficiency of the sampler.
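
Concretely, the loss described in the original paper is built from the expected squared jump distance between a sample ξ = (x, v, d) and its proposal ξ'. As a sketch (following Levy et al.; notation may differ slightly from this repo's implementation):

    \ell_\lambda(\xi, \xi', A) =
        \frac{\lambda^2}{\delta(\xi, \xi')\, A(\xi' \mid \xi)}
      - \frac{\delta(\xi, \xi')\, A(\xi' \mid \xi)}{\lambda^2},
    \qquad
    \delta(\xi, \xi') = \lVert x - x' \rVert_2^2

where A(ξ' | ξ) is the Metropolis-Hastings acceptance probability and λ is a scale hyperparameter: the first term penalizes proposals that barely move, while the second rewards large accepted jumps.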

This work is based on the original implementation: brain-research/l2hmc/.

A detailed description of the L2HMC algorithm can be found in the paper:

Generalizing Hamiltonian Monte Carlo with Neural Networks

by Daniel Levy, Matthew D. Hoffman, and Jascha Sohl-Dickstein.

Broadly, given an analytically described target distribution, π(x), L2HMC provides a statistically exact sampler that:

  • Quickly converges to the target distribution (fast burn-in).
  • Quickly produces uncorrelated samples (fast mixing).
  • Is able to efficiently mix between energy levels.
  • Is capable of traversing low-density zones to mix between modes (often difficult for generic HMC).

L2HMC for LatticeQCD

Goal: Use L2HMC to efficiently generate gauge configurations for calculating observables in lattice QCD.

A detailed description of the (ongoing) work to apply this algorithm to simulations in lattice QCD (specifically, a 2D U(1) lattice gauge theory model) can be found in doc/main.pdf.
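
For reference, the target distribution in this case is determined by the Wilson gauge action for link angles ɸ (a standard-convention sketch; details may differ from doc/main.pdf):

    \pi(\phi) \propto e^{-\beta S(\phi)},
    \qquad
    S(\phi) = \sum_{P} \left(1 - \cos \phi_P\right)

where ɸₚ = ɸ_μ(n) + ɸ_ν(n+μ̂) − ɸ_μ(n+ν̂) − ɸ_ν(n) is the plaquette angle and β is the inverse coupling.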

l2hmc-qcd poster

Organization

Dynamics / Network

The base class for the augmented L2HMC leapfrog integrator is implemented as BaseDynamics (a tf.keras.Model subclass).

GaugeDynamics is a subclass of BaseDynamics that contains modifications specific to the 2D U(1) pure gauge theory.
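
Roughly, the relationship between the two classes looks like the sketch below (a minimal illustration; the method names and bodies are placeholders, not the repo's exact API):

    import tensorflow as tf

    class BaseDynamics(tf.keras.Model):
        """Augmented L2HMC leapfrog integrator (illustrative sketch)."""

        def call(self, inputs, training=None):
            # Run a trajectory of leapfrog layers on (x, v), then
            # accept/reject the proposal with a Metropolis-Hastings step.
            ...

    class GaugeDynamics(BaseDynamics):
        """Specializes BaseDynamics to the 2D U(1) pure gauge theory,
        e.g. by keeping the link angles x compact (mod 2*pi)."""
        ...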

The network is defined in l2hmc-qcd/network/functional_net.py.

Network Architecture

An illustration of the leapfrog layer updating (x, v) --> (x', v') can be seen below.

leapfrog layer
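
Schematically, a single leapfrog layer replaces the standard HMC update with network-parameterized scaling (s), transformation (q), and translation (t) terms. Following the original paper's notation, and omitting the binary masks that split the x-update into two parts:

    v' = v \odot e^{\frac{\varepsilon}{2} s_v(\zeta_1)}
       - \frac{\varepsilon}{2} \left[ \partial_x U(x) \odot e^{\varepsilon q_v(\zeta_1)} + t_v(\zeta_1) \right]

    x' = x \odot e^{\varepsilon s_x(\zeta_2)}
       + \varepsilon \left[ v' \odot e^{\varepsilon q_x(\zeta_2)} + t_x(\zeta_2) \right]

Here ⊙ is elementwise multiplication, U is the potential (the action), ε is the step size, and ζᵢ are the inputs fed to the networks; setting s = q = t = 0 recovers the standard leapfrog update.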

Lattice

Lattice code can be found in lattice.py; in particular, the GaugeLattice object provides the base structure on which our target distribution is defined.

Additionally, the GaugeLattice object implements a variety of methods for calculating physical observables such as the average plaquette, ɸₚ, and the topological charge, Q.
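
As a rough, self-contained illustration of these observables (this is not the GaugeLattice API; the (2, Lx, Ly) layout of link angles is an assumption):

    import numpy as np

    def plaquettes(phi):
        """Plaquette angles for U(1) link angles phi of shape (2, Lx, Ly)."""
        p0, p1 = phi[0], phi[1]
        # phi_P(n) = phi_0(n) + phi_1(n + e0) - phi_0(n + e1) - phi_1(n)
        return p0 + np.roll(p1, -1, axis=0) - np.roll(p0, -1, axis=1) - p1

    def average_plaquette(phi):
        """Average of cos(phi_P) over all plaquettes."""
        return np.mean(np.cos(plaquettes(phi)))

    def topological_charge(phi):
        """Q = (1 / 2pi) * sum_P arg(exp(i * phi_P)); integer-valued on a
        periodic lattice."""
        wrapped = np.angle(np.exp(1j * plaquettes(phi)))
        return np.sum(wrapped) / (2 * np.pi)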

Training

The training loop is implemented in l2hmc-qcd/utils/training_utils.py.

To train the sampler on a 2D U(1) gauge model using the parameters specified in bin/train_configs.json:

$ python3 /path/to/l2hmc-qcd/l2hmc-qcd/train.py --json_file=/path/to/l2hmc-qcd/bin/train_configs.json

Alternatively, use the bin/train.sh script provided in bin/ (a sketch is shown below).
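
Such a wrapper could be as simple as the following sketch (paths are placeholders; see bin/train.sh for the actual script):

    #!/bin/bash
    # Hypothetical wrapper around train.py; adjust paths for your checkout.
    ROOT=/path/to/l2hmc-qcd
    python3 ${ROOT}/l2hmc-qcd/train.py --json_file=${ROOT}/bin/train_configs.json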

Features

  • Distributed training (via horovod): If horovod is installed, the model can be trained across multiple GPUs (or CPUs) by running:

    #!/bin/bash
    
    TRAINER=/path/to/l2hmc-qcd/l2hmc-qcd/train.py
    JSON_FILE=/path/to/l2hmc-qcd/bin/train_configs.json
    
    # Set PROCS to the number of parallel processes (typically one per GPU)
    horovodrun -np ${PROCS} python3 ${TRAINER} --json_file=${JSON_FILE}

Contact


Code author: Sam Foreman

Pull requests and issues should be directed to: saforem2

Citation

If you use this code or find this work interesting, please cite our work along with the original paper:

@misc{foreman2021deep,
      title={Deep Learning Hamiltonian Monte Carlo}, 
      author={Sam Foreman and Xiao-Yong Jin and James C. Osborn},
      year={2021},
      eprint={2105.03418},
      archivePrefix={arXiv},
      primaryClass={hep-lat}
}
@article{levy2017generalizing,
  title={Generalizing Hamiltonian Monte Carlo with Neural Networks},
  author={Levy, Daniel and Hoffman, Matthew D. and Sohl-Dickstein, Jascha},
  journal={arXiv preprint arXiv:1711.09268},
  year={2017}
}

Acknowledgement

This research used resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under contract DE-AC02-06CH11357. This work describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the work do not necessarily represent the views of the U.S. DOE or the United States Government. Declaration of Interests: none.


Comments
  • Remove upper bound on python_requires

    (I'm moving between meetings and can iterate on this more later, so excuse the very brief issue for now.)

    At the moment the project has an upper bound on python_requires

    https://github.com/saforem2/l2hmc-qcd/blob/2eb6ee63cc0c53b187e6d716f4c12f418c8b8515/setup.py#L165

    Assuming that you're intending l2hmc to be a library and not an application, then I would highly recommend removing this for the reasons summarized in Henry's detailed blog post on the subject.

    Congrats on getting l2hmc up on PyPI though! :snake: :rocket:

    opened by matthewfeickert 2
  • Alpha

    Pull upstream alpha branch into main

    Major changes

    • new src/ hierarchical module organization
    • Contains skeleton implementation of 4D SU(3) lattice gauge model
    • Framework independent configuration
      • Unified configuration system simplifies logic, same configs used for both tensorflow and pytorch experiments
      • Plan to be able to specify which backend to use through config option
    • Unified (and framework independent) configurations between tensorflow and pytorch implementations

    Note: This is still very much a WIP. Many existing features still need to be re-implemented / ported to the new code in src/.

    Todo

    • [ ] Write unit tests
    • [ ] Use simple configs for end-to-end workflow test + integrate into CI
    • [ ] dynamic learning rate scheduling
    • [ ] Test 4D SU(3) numpy code
    • [ ] Write tensorflow and pytorch implementations of LatticeSU3 objects
    • [ ] Improved / simplified ( / trainable?) annealing schedule
    • [ ] Distributed training support
      • [ ] horovod
      • [ ] DDP for pytorch implementation
      • [ ] DeepSpeed from Microsoft??
    • [ ] Testing / inference logic
    • [ ] Automatic checkpointing
    • [ ] Metric logging
      • [ ] Tensorboard?
      • [ ] Sacred?
      • [ ] build custom dashboard? plot.ly?
    • [ ] Setup packaging / distribution through pip
    • [ ] Resolve issue
    opened by saforem2 1
  • Alpha

    opened by saforem2 1
  • Rich

    General improvements, rewrote logging methods to use Rich for better formatting.

    • Adds a dynamic (trainable) step size eps for the separate x and v updates; this generally seems to increase the total energy towards the middle of the trajectory, but it remains unclear whether this corresponds to an improvement in the tunneling rate
    • Adds methods for calculating autocorrelations of the topological charge, as well as notebooks for generating the plots
    • Updates to the writeup in doc/main.pdf
    • Will likely be last changes to writeup before public release of official draft
    opened by saforem2 1
  • Dev

    • Updates to README

    • Ability to load network with new training instance

    • Updates to doc/, removes old sections related to debugging the bias in the plaquette

    opened by saforem2 1
  • Saveable model

    Complete rewrite of the dynamics.xnet and dynamics.vnet models to use the tf.keras functional API.

    Additional changes include:

    • Non-Compact Projection update for gauge fields
    • Ability to specify convolution structure to be prepended at beginning of gauge network
    opened by saforem2 1
  • Dev

    Removes models/gauge_model.py entirely.

    Instead, a base dynamics class is implemented in dynamics/dynamics.py, and an example subclass is provided in dynamics/gauge_dynamics.py.

    opened by saforem2 1
  • Split networks

    Major rewrite of existing codebase.

    This pull request updates everything to be compatible with tensorflow >= 2.2 and removes a bunch of redundant legacy code.

    opened by saforem2 1
  • Dev

    • Dynamics object is now compatible with tf >= 2.0
    • Running inference on a trained model with tensorflow now creates graphs and summary files identical to those from the numpy inference code
    • Inference with numpy now uses an object-oriented structure
    • Adds LaTeX + PDF documentation in doc/
    opened by saforem2 1
  • Cooley dev

    Adds new GaugeNetwork architecture as the default for training GaugeModel

    Additionally, replaces pickle with joblib for saving data as .z compressed files (as opposed to .pkl files).

    opened by saforem2 1
  • Testing

    Implemented nnehmc_loss calculation for an alternative loss function using the approach suggested in https://infoscience.epfl.ch/record/264887/files/robust_parameter_estimation.pdf.

    This modified loss function can be chosen (instead of the standard loss described in the original paper) by passing --use_nnehmc_loss as a command line argument.

    opened by saforem2 1
  • Packaging and PyPI distribution?

    As you've made a library and are using it as such:

    # snippet from toy_distributions.ipynb
    import os
    import sys
    
    # append parent directory to `sys.path`
    # to load from modules in `../l2hmc-qcd/`
    module_path = os.path.join('..')
    if module_path not in sys.path:
        sys.path.append(module_path)
    
    # Local imports
    from utils.attr_dict import AttrDict
    from utils.training_utils import train_dynamics
    from dynamics.config import DynamicsConfig
    from dynamics.base_dynamics import BaseDynamics
    from dynamics.generic_dynamics import GenericDynamics
    from network.config import LearningRateConfig
    from config import (State, NetWeights, MonteCarloStates,
                        BASE_DIR, BIN_DIR, TF_FLOAT)
    
    from utils.distributions import (plot_samples2D, contour_potential,
                                     two_moons_potential, sin_potential,
                                     sin_potential1, sin_potential2)
    

    do you have any plans and/or interest in packaging it as a Python library so it can either be pip installed from GitHub or be distributed on PyPI?

    opened by matthewfeickert 5