Application of the L2HMC algorithm to simulations in lattice QCD.

📊 Slides

📒 Example Notebook


Overview

The L2HMC algorithm aims to improve upon HMC by optimizing a carefully chosen loss function designed to minimize autocorrelations within the Markov chain, thereby improving the efficiency of the sampler.
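
Concretely, the loss rewards proposals that jump far from the current state and penalizes those that barely move. Below is a minimal sketch of such an expected-squared-jump-distance loss, assuming a batch of flattened configurations; the function name and the scale parameter are illustrative, not the repository's actual API:

    import tensorflow as tf

    def l2hmc_loss(x_init, x_prop, accept_prob, scale=1.0):
        """Sketch of the L2HMC loss from Levy et al. (2017).

        Rewards a large expected squared jump distance (weighted by the
        acceptance probability) and penalizes proposals that move little.
        """
        # Expected squared jump distance between initial and proposed states
        esjd = accept_prob * tf.reduce_sum((x_prop - x_init) ** 2, axis=-1)
        # Reciprocal term encourages large moves; second term penalizes small ones
        return tf.reduce_mean(scale ** 2 / esjd - esjd / scale ** 2)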

This work is based on the original implementation: brain-research/l2hmc/.

A detailed description of the L2HMC algorithm can be found in the paper:

Generalizing Hamiltonian Monte Carlo with Neural Networks

by Daniel Levy, Matthew D. Hoffman, and Jascha Sohl-Dickstein.

Broadly, given an analytically described target distribution, π(x), L2HMC provides a statistically exact sampler that:

  • Quickly converges to the target distribution (fast burn-in).
  • Quickly produces uncorrelated samples (fast mixing).
  • Is able to efficiently mix between energy levels.
  • Is capable of traversing low-density zones to mix between modes (often difficult for generic HMC).

L2HMC for Lattice QCD

Goal: Use L2HMC to efficiently generate gauge configurations for calculating observables in lattice QCD.

A detailed description of the (ongoing) work to apply this algorithm to simulations in lattice QCD (specifically, a 2D U(1) lattice gauge theory model) can be found in doc/main.pdf.

l2hmc-qcd poster

Organization

Dynamics / Network

The base class for the augmented L2HMC leapfrog integrator is implemented in the BaseDynamics class (a subclass of tf.keras.Model).

The GaugeDynamics is a subclass of BaseDynamics containing modifications for the 2D U(1) pure gauge theory.
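
For orientation, a subclass following this pattern might look like the sketch below; the names and signatures here are hypothetical, not the repository's exact API:

    import tensorflow as tf

    class MyDynamics(tf.keras.Model):
        """Hypothetical skeleton mirroring the BaseDynamics pattern."""

        def __init__(self, config, network):
            super().__init__()
            self.config = config    # e.g. step size, number of leapfrog steps
            self.network = network  # the networks used in the (x, v) updates

        def call(self, inputs, training=None):
            x, beta = inputs
            # Run the augmented leapfrog integrator and the
            # Metropolis-Hastings accept/reject step (omitted in this sketch).
            ...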

The network is defined in l2hmc-qcd/network/functional_net.py.

Network Architecture

An illustration of the leapfrog layer updating (x, v) → (x', v') can be seen below.

leapfrog layer
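
As a rough illustration, the momentum half-step of the generalized leapfrog update takes the form sketched below, following the update rule in the L2HMC paper. Here s_fn, q_fn, and t_fn stand in for the scale, transformation, and translation outputs of the momentum network; they are assumptions for this sketch, not the repository's API:

    import tensorflow as tf

    def v_half_step(x, v, grad_U, eps, s_fn, q_fn, t_fn):
        """Sketch of the network-augmented momentum half-step.

        With s_fn = q_fn = t_fn = 0 this reduces to the standard
        HMC update: v - 0.5 * eps * grad_U.
        """
        s, q, t = s_fn(x), q_fn(x), t_fn(x)
        # Rescale the momentum, then translate by the (rescaled) force
        return (v * tf.exp(0.5 * eps * s)
                - 0.5 * eps * (grad_U * tf.exp(eps * q) + t))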

Lattice

Lattice code can be found in lattice.py, specifically the GaugeLattice object, which provides the base structure on which our target distribution is defined.

Additionally, the GaugeLattice object implements a variety of methods for calculating physical observables such as the average plaquette, φₚ, and the topological charge, Q.
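
For the 2D U(1) theory, both observables are simple functions of the plaquette angles. Below is a minimal numpy sketch under the assumption that link angles are stored as an (L, L, 2) array indexed as links[x, y, mu]; this mirrors what GaugeLattice computes but is not its actual interface, and the topological charge uses one common real-valued definition, Q = (1/2π) Σ sin φₚ:

    import numpy as np

    def observables(links):
        """Sketch of plaquette / topological-charge observables for 2D U(1)."""
        # Plaquette angle: phi_P = phi_0(n) + phi_1(n + e0) - phi_0(n + e1) - phi_1(n)
        phi_p = (links[..., 0]
                 + np.roll(links[..., 1], -1, axis=0)
                 - np.roll(links[..., 0], -1, axis=1)
                 - links[..., 1])
        avg_plaq = np.mean(np.cos(phi_p))             # average plaquette
        charge = np.sum(np.sin(phi_p)) / (2 * np.pi)  # real-valued topological charge
        return avg_plaq, charge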

Training

The training loop is implemented in l2hmc-qcd/utils/training_utils.py.

To train the sampler on a 2D U(1) gauge model using the parameters specified in bin/train_configs.json:

$ python3 /path/to/l2hmc-qcd/l2hmc-qcd/train.py --json_file=/path/to/l2hmc-qcd/bin/train_configs.json

Alternatively, you can use the bin/train.sh script provided in bin/.

Features

  • Distributed training (via horovod): If horovod is installed, the model can be trained across multiple GPUs (or CPUs) by:

    #!/bin/bash
    
    TRAINER=/path/to/l2hmc-qcd/l2hmc-qcd/train.py
    JSON_FILE=/path/to/l2hmc-qcd/bin/train_configs.json
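    
    # PROCS is assumed to be set to the desired number of parallel
    # processes before running, e.g. `export PROCS=4`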
    
    horovodrun -np ${PROCS} python3 ${TRAINER} --json_file=${JSON_FILE}

Contact


Code author: Sam Foreman

Pull requests and issues should be directed to: saforem2

Citation

If you use this code or find this work interesting, please cite our work along with the original paper:

@misc{foreman2021deep,
      title={Deep Learning Hamiltonian Monte Carlo}, 
      author={Sam Foreman and Xiao-Yong Jin and James C. Osborn},
      year={2021},
      eprint={2105.03418},
      archivePrefix={arXiv},
      primaryClass={hep-lat}
}
@article{levy2017generalizing,
  title={Generalizing Hamiltonian Monte Carlo with Neural Networks},
  author={Levy, Daniel and Hoffman, Matthew D. and Sohl-Dickstein, Jascha},
  journal={arXiv preprint arXiv:1711.09268},
  year={2017}
}

Acknowledgement

This research used resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under contract DE-AC02-06CH11357. This work describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the work do not necessarily represent the views of the U.S. DOE or the United States Government. Declaration of Interests - None.


Comments
  • Remove upper bound on python_requires

(I'm moving between meetings but can iterate on this more later, so excuse the very brief Issue for now.)

    At the moment the project has an upper bound on python_requires

    https://github.com/saforem2/l2hmc-qcd/blob/2eb6ee63cc0c53b187e6d716f4c12f418c8b8515/setup.py#L165

    Assuming that you're intending l2hmc to be a library and not an application, then I would highly recommend removing this for the reasons summarized in Henry's detailed blog post on the subject.

    Congrats on getting l2hmc up on PyPI though! :snake: :rocket:

    opened by matthewfeickert 2
  • Alpha

    Pull upstream alpha branch into main

    Major changes

    • New hierarchical module organization under src/
    • Skeleton implementation of a 4D SU(3) lattice gauge model
    • Framework-independent configuration
      • A unified configuration system simplifies the logic; the same configs are used for both tensorflow and pytorch experiments
      • Planned: the ability to specify which backend to use through a config option

    Note: This is still very much a WIP. Many existing features still need to be re-implemented / updated into new code in src/.

    Todo

    • [ ] Write unit tests
    • [ ] Use simple configs for end-to-end workflow test + integrate into CI
    • [ ] Dynamic learning rate scheduling
    • [ ] Test 4D SU(3) numpy code
    • [ ] Write tensorflow and pytorch implementations of LatticeSU3 objects
    • [ ] Improved / simplified ( / trainable?) annealing schedule
    • [ ] Distributed training support
      • [ ] horovod
      • [ ] DDP for pytorch implementation
      • [ ] DeepSpeed from Microsoft??
    • [ ] Testing / inference logic
    • [ ] Automatic checkpointing
    • [ ] Metric logging
      • [ ] Tensorboard?
      • [ ] Sacred?
      • [ ] build custom dashboard? plot.ly?
    • [ ] Setup packaging / distribution through pip
    • [ ] Resolve issue
    opened by saforem2 1
  • Alpha

    opened by saforem2 1
  • Rich

    General improvements, rewrote logging methods to use Rich for better formatting.

    • Adds a dynamic (trainable) step size eps for the separate x and v updates; this seems to increase the total energy towards the middle of the trajectory, but it remains unclear whether this corresponds to an improvement in the tunneling rate
    • Adds methods for calculating autocorrelations of the topological charge, as well as notebooks for generating the plots
    • Updates to the writeup in doc/main.pdf
    • Will likely be last changes to writeup before public release of official draft
    opened by saforem2 1
  • Dev

    • Updates to README

    • Ability to load a network with a new training instance

    • Updates to doc/, removes old sections related to debugging the bias in the plaquette

    opened by saforem2 1
  • Saveable model

    Complete rewrite of the dynamics.xnet and dynamics.vnet models to use the tf.keras functional API.

    Additional changes include:

    • Non-Compact Projection update for gauge fields
    • Ability to specify a convolution structure to be prepended at the beginning of the gauge network
    opened by saforem2 1
  • Dev

    Removes models/gauge_model.py entirely.

    Instead, a base dynamics class is implemented in dynamics/dynamics.py, and an example subclass is provided in dynamics/gauge_dynamics.py.

    opened by saforem2 1
  • Split networks

    Major rewrite of existing codebase.

    This pull request updates everything to be compatible with tensorflow >= 2.2 and removes a bunch of redundant legacy code.

    opened by saforem2 1
  • Dev

    • Dynamics object is now compatible with tf >= 2.0
    • Running inference on a trained model with tensorflow now creates graphs and summary files identical to those from the numpy inference code
    • Inference with numpy now uses an object-oriented structure
    • Adds LaTeX + PDF documentation in doc/
    opened by saforem2 1
  • Cooley dev

    Adds a new GaugeNetwork architecture as the default for training GaugeModel

    Additionally, replaces pickle with joblib for saving data as .z compressed files (as opposed to .pkl files).

    opened by saforem2 1
  • Testing

    Implemented the nnehmc_loss calculation, an alternative loss function based on the approach suggested in https://infoscience.epfl.ch/record/264887/files/robust_parameter_estimation.pdf.

    This modified loss function can be chosen (instead of the standard loss described in the original paper) by passing --use_nnehmc_loss as a command line argument.

    opened by saforem2 1
  • Packaging and PyPI distribution?

    As you've made a library and are using it as such:

    # snippet from toy_distributions.ipynb
    import os
    import sys
    
    # append parent directory to `sys.path`
    # to load from modules in `../l2hmc-qcd/`
    module_path = os.path.join('..')
    if module_path not in sys.path:
        sys.path.append(module_path)
    
    # Local imports
    from utils.attr_dict import AttrDict
    from utils.training_utils import train_dynamics
    from dynamics.config import DynamicsConfig
    from dynamics.base_dynamics import BaseDynamics
    from dynamics.generic_dynamics import GenericDynamics
    from network.config import LearningRateConfig
    from config import (State, NetWeights, MonteCarloStates,
                        BASE_DIR, BIN_DIR, TF_FLOAT)
    
    from utils.distributions import (plot_samples2D, contour_potential,
                                     two_moons_potential, sin_potential,
                                     sin_potential1, sin_potential2)
    

    do you have any plans and/or interest in packaging it as a Python library so it can either be pip installed from GitHub or be distributed on PyPI?

    opened by matthewfeickert 5
Releases: 0.12.0

Owner

Sam Foreman

Computational science postdoc at Argonne National Laboratory working on applying machine learning to simulations in lattice QCD.