Generative Adversarial Networks for High Energy Physics extended to a multi-layer calorimeter simulation

Overview

CaloGAN

Simulating 3D High Energy Particle Showers in Multi-Layer Electromagnetic Calorimeters with Generative Adversarial Networks.

This repository contains everything you'll need to reproduce M. Paganini (@mickypaganini), L. de Oliveira (@lukedeo), and B. Nachman (@bnachman), CaloGAN: Simulating 3D High Energy Particle Showers in Multi-Layer Electromagnetic Calorimeters with Generative Adversarial Networks [arXiv:1705.02355].

You are more than welcome to use the open data and open-source software provided here for any of your projects, but we kindly ask that you cite them using the DOIs provided below:

Asset                                         Location
Training Data (GEANT4 showers, ⟂ to center)   DOI
Source Code (this repo!)                      DOI

For any use of paper ideas and results, please cite

@article{paganini_calogan,
      author         = "Paganini, Michela and de Oliveira, Luke and Nachman,
                        Benjamin",
      title          = "{CaloGAN: Simulating 3D High Energy Particle Showers in
                        Multi-Layer Electromagnetic Calorimeters with Generative
                        Adversarial Networks}",
      year           = "2017",
      eprint         = "1705.02355",
      archivePrefix  = "arXiv",
      primaryClass   = "hep-ex",
}

Goal

The goal of this project is to help physicists at CERN speed up their simulations by encoding the most computationally expensive portion of the simulation process (i.e., showering in the EM calorimeter) in a deep generative model.

The challenges come from the fact that this portion of the detector is longitudinally segmented into three layers with heterogeneous granularity. For simplicity, we can visualize the energy depositions of particles passing through the detector as a series of three images per shower, while keeping in mind the sequential nature of their relationship in the generator.
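
As a concrete (if simplified) picture, each shower can be stored as three per-layer energy-deposition images. The shapes below match the HDF5 layout described in the Generation section; the snippet itself is only an illustrative sketch, not code from this repository:

import numpy as np

# Per-layer image shapes (matching the HDF5 datasets produced by convert.py).
LAYER_SHAPES = {
    'layer_0': (3, 96),
    'layer_1': (12, 12),
    'layer_2': (12, 6),
}

# A toy shower: one 2D energy-deposition image per calorimeter layer.
shower = {name: np.zeros(shape, dtype=np.float32)
          for name, shape in LAYER_SHAPES.items()}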

To download and better understand the training dataset, visit the Mendeley Data repository.

3D shower in the EM calorimeter

Getting Started

This repository contains three main folders: generation, models, and analysis, which represent the three main stages of this project. You may decide to engage with only a subset of them, so we provide everything you need to jump right in at whichever level interests you.

generation contains the code to build the electromagnetic calorimeter geometry and shoot particles at it with a given energy. This is all based on Geant4. For more instructions, go to Generation on PDSF.

models contains the core ML code for this project. The file train.py takes care of loading the data and training the GAN.

analysis contains Jupyter Notebooks used to evaluate performance and produce all plots for the paper.

GEANT4 Generation on PDSF

On PDSF, you should run all generation code that outputs big files in a scratch space. To make a scratch environment, run mkdir /global/projecta/projectdirs/atlas/{your-username}, and for convenience, link to it from home via ln -s /global/projecta/projectdirs/atlas/{your-username} ~/scratch.

To build the generation code on PDSF, simply run source cfg/pdsf-env.sh from the generation/ folder in the repository. This loads the required environment modules.

Next, you can type make which should build an executable called generate. Because of how Geant4 works, this executable gets deposited in $HOME/geant4_workdir/bin/Linux-g++/, which is in your $PATH when the modules from cfg/pdsf-env.sh are loaded.

To run the generation script, run generate -m cfg/run2.mac. You can change generation parameters inside cfg/run2.mac (follow these instructions).

This will output a file called plz_work_kthxbai.root with a TTree named fancy_tree, which will contain a branch for each calorimeter cell (cell_#) with histograms of the energy deposited in that cell across the various shower events. The last three cells (numbered 504, 505, and 506) actually represent the overflow for each calorimeter layer. Finally, a branch called TotalEnergy is added for bookkeeping.
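
For orientation, the flat cell numbering maps onto the three calorimeter layers roughly as follows. This is a minimal sketch assuming row-major cell ordering within each layer; the authoritative mapping is whatever convert.py (described below) implements:

import numpy as np

# 3x96 + 12x12 + 12x6 = 288 + 144 + 72 = 504 cells, followed by the
# three overflow cells numbered 504, 505 and 506.
LAYER_SHAPES = [(3, 96), (12, 12), (12, 6)]

def split_cells(cells):
    """Split a flat array of 507 cell energies into per-layer images
    plus the three per-layer overflow values (assumes row-major order)."""
    images, offset = [], 0
    for rows, cols in LAYER_SHAPES:
        n = rows * cols
        images.append(np.asarray(cells[offset:offset + n]).reshape(rows, cols))
        offset += n
    overflow = np.asarray(cells[offset:offset + 3])
    return images, overflow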

We provide you with a convenient script to convert the ROOT file into a more manageable HDF5 archive. The convert.py script is located in the generation/ folder and can be used as follows:

usage: convert.py [-h] --in-file IN_FILE --out-file OUT_FILE --tree TREE

Convert GEANT4 output files into ML-able HDF5 files

optional arguments:
  -h, --help            show this help message and exit
  --in-file IN_FILE, -i IN_FILE
                        input ROOT file
  --out-file OUT_FILE, -o OUT_FILE
                        output HDF5 file
  --tree TREE, -t TREE  input tree for the ROOT file

So you can run, for example, python convert.py --in-file plz_work_kthxbai.root --out-file test.h5 --tree fancy_tree. Assuming you specified /run/beamOn 1000 in your run.mac file, the structure of the output HDF5 should look like this:

energy                   Dataset {1000, 1}
layer_0                  Dataset {1000, 3, 96}
layer_1                  Dataset {1000, 12, 12}
layer_2                  Dataset {1000, 12, 6}
overflow                 Dataset {1000, 3}
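
Once converted, the archive can be inspected with h5py; a minimal sketch using the example file name from above:

import h5py

with h5py.File('test.h5', 'r') as f:
    energy   = f['energy'][:]    # (1000, 1) energy per shower (from TotalEnergy)
    layer_0  = f['layer_0'][:]   # (1000, 3, 96)
    layer_1  = f['layer_1'][:]   # (1000, 12, 12)
    layer_2  = f['layer_2'][:]   # (1000, 12, 6)
    overflow = f['overflow'][:]  # (1000, 3) per-layer overflow

print(energy.shape, layer_0.shape, layer_1.shape, layer_2.shape)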

To launch a batch job on PDSF, simply run ./launch <num_jobs> to launch <num_jobs> concurrent tasks in a job array.

The CaloGAN Model

This work builds on the solution presented in arXiv/1701.05927, which we named LAGAN, or Location-Aware Generative Adversarial Network. This is a physics-specific modification of the more common DCGAN and ACGAN frameworks, designed to handle the sparsity, location dependence, and high dynamic range that characterize physics images.

Generator

The generator contains three parallel LAGAN-style streams with an in-painting mechanism to handle the sequential nature of these shower images.
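
A highly simplified Keras sketch of this layout is shown below: three parallel streams mapping a latent vector to the three layer images. The in-painting connections between streams and the LAGAN-specific layers are omitted, and the names and sizes are hypothetical (the real architecture lives in models/architectures.py):

from keras.layers import Conv2D, Dense, Input, Reshape
from keras.models import Model

latent_size = 1024  # hypothetical choice; see models/train.py for the actual value

z = Input(shape=(latent_size,), name='z')

def stream(x, rows, cols, name):
    # One simplified per-layer stream: dense projection, reshape to the
    # layer geometry, then a couple of convolutions down to one channel.
    h = Dense(rows * cols * 8, activation='relu')(x)
    h = Reshape((rows, cols, 8))(h)
    h = Conv2D(8, (2, 2), padding='same', activation='relu')(h)
    return Conv2D(1, (2, 2), padding='same', activation='relu', name=name)(h)

outputs = [stream(z, 3, 96, 'layer_0'),
           stream(z, 12, 12, 'layer_1'),
           stream(z, 12, 6, 'layer_2')]
generator = Model(z, outputs)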

Discriminator

The discriminator uses convolutional features combined with ad-hoc notions of sparsity and reconstructed energy to classify real and fake images, and to compare the reconstructed energy with the energy requested by the user.
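
As an illustration of these hand-crafted features, here is a minimal Keras-backend sketch of per-image sparsity and reconstructed (total) energy; the operations actually used by CaloGAN are implemented in models/ops.py:

from keras import backend as K
from keras.layers import Lambda

def sparsity_level(x):
    # Fraction of calorimeter cells with non-zero energy in each image.
    flat = K.batch_flatten(x)
    n_cells = K.cast(K.shape(flat)[1], K.floatx())
    return K.sum(K.cast(flat > 0.0, K.floatx()), axis=-1, keepdims=True) / n_cells

def reconstructed_energy(x):
    # Total energy deposited in the image (sum over all cells).
    return K.sum(K.batch_flatten(x), axis=-1, keepdims=True)

# Wrapped as layers, these features can be concatenated with the convolutional
# features and with the requested energy inside the discriminator.
sparsity_layer = Lambda(sparsity_level)
energy_layer = Lambda(reconstructed_energy)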

Training the CaloGAN model

To begin the training process, create a YAML file following the specification in models/particles.yaml for all the particles you want to train on. In the paper, we train only one GAN per particle type. If you specify more than one particle type and dataset in the YAML file, it will train an ACGAN, which we found to be less performant. Assuming that you have a folder called data/ in the root directory of the project, and you have a file inside called eplus.h5 (downloaded from Mendeley, perhaps?), you can train your own CaloGAN by running

python -m models.train models/particles.yaml

We recommend running python -m models.train -h at least once to see all the parameters one can change.
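
For reference, a minimal models/particles.yaml simply maps each particle name to its HDF5 file. This sketch mirrors the example that appears in the issue discussion below; adjust the paths (and commented-out entries) to your own data:

# put the paths to actual data here!
positron: 'data/eplus.hdf5'
# pion: 'data/piplus.hdf5'
# gamma: 'data/gamma.hdf5'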

Performance Analysis

Performance evaluation is done from both a qualitative and a quantitative standpoint. The Jupyter notebooks available in the analysis folder will guide you through our plotting conventions.

For quick handling, we have pre-extracted the shower shape variables into a pandas dataframe (stored as HDF5) and made it available on S3. To load it, you can simply do pd.read_hdf('path/to/shower-shapes.h5').

Docker

Running locally:

$ docker run -it --rm -v $PWD/CaloGAN:/home/CaloGAN engineren/calogan-docker python -m models.train models/particles.yaml

Running on naf-ilc-gpu:

$ singularity pull docker://engineren/calogan-docker:latest

$ singularity instance start --bind data:/home/CaloGAN/data --nv calogan-docker_latest.sif caloGAN

$ singularity run instance://caloGAN python -m models.train models/particles.yaml

Copyright Notice

“CaloGAN” Copyright (c) 2017, The Regents of the University of California, through Lawrence Berkeley National Laboratory (subject to receipt of any required approvals from the U.S. Dept. of Energy). All rights reserved.

If you have questions about your rights to use or distribute this software, please contact Berkeley Lab's Innovation & Partnerships Office at [email protected].

NOTICE. This Software was developed under funding from the U.S. Department of Energy and the U.S. Government consequently retains certain rights. As such, the U.S. Government has been granted for itself and others acting on its behalf a paid-up, nonexclusive, irrevocable, worldwide license in the Software to reproduce, distribute copies to the public, prepare derivative works, and perform publicly and display publicly, and to permit others to do so.

Comments
  • Crash: `You must feed a value for placeholder tensor 'z' with dtype float`

    Hello,

    First, thanks for the very nice repository and documentation. I've been trying to run the train.py module and running into trouble.

    You can try to replicate my error by doing the following...

    This will set up my software environment:

    • git clone https://github.com/gnperdue/BashDotFiles.git
    • cd BashDotFiles/anaconda_configs && bash miniconda_py2dl.sh
    • export PATH=$HOME/miniconda2/bin && source activate py2dl

    I have a version checker there that shows what this pulls down for this particular setup:

    $ python pyverchecker.py
    Python version: 2.7.13 |Continuum Analytics, Inc.| (default, Dec 20 2016, 23:05:08)
    [GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.57)]
    pandas version: 0.20.1
    matplotlib version: 2.0.2
    numpy version: 1.12.1
    scipy version: 0.19.0
    IPython version: 5.3.0
    sklearn version: 0.18.1
    skimage version: 0.13.0
    Missing protobuf
    Missing numexpr
    Missing sympy
    tensorflow version: 1.0.0
    pymc3 version: 3.0
    theano version: 0.9.0
    h5py version: 2.7.0
    Using TensorFlow backend.
    keras version: 2.0.2
    Possible problem with module: xlrd, 'module' object has no attribute '__version__'
    Missing lasagne
    yaml version: 3.12
    Missing nltk
    

    Next, I retrieved the data you provided and updated the models/particles.yaml file:

    # put the paths to actual data here!
    positron: 'data/eplus.hdf5'
    # pion: 'data/piplus.hdf5'
    # gamma: 'data/gamma.hdf5'
    

    I also opened the file and poked around and everything looks fine. My Keras configuration is like so:

    .keras$ more keras.json
    {
        "epsilon": 1e-07,
        "floatx": "float32",
        "image_data_format": "channels_last",
        "backend": "tensorflow"
    }
    

    python -m models.train -h produces sensible results. However, python -m models.train models/particles.yaml produces output like:

    (py2dl) CaloGAN$ python -m models.train models/particles.yaml
    Using TensorFlow backend.
    2017-05-30 17:32:16,289 - models.train[INFO]: 1 particle types found.
    2017-05-30 17:32:16,526 - models.train[INFO]: Building discriminator
    ...
    Caused by op u'z', defined at:
      File "/Users/perdue/miniconda2/envs/py2dl/lib/python2.7/runpy.py", line 174, in _run_module_as_main
        "__main__", fname, loader, pkg_name)
      File "/Users/perdue/miniconda2/envs/py2dl/lib/python2.7/runpy.py", line 72, in _run_code
        exec code in run_globals
      File "/Users/perdue/Dropbox/ArtificialIntelligence/DSHEP/CaloGAN/models/train.py", line 302, in <module>
        latent = Input(shape=(latent_size, ), name='z')
      File "/Users/perdue/miniconda2/envs/py2dl/lib/python2.7/site-packages/keras/engine/topology.py", line 1388, in Input
        input_tensor=tensor)
      File "/Users/perdue/miniconda2/envs/py2dl/lib/python2.7/site-packages/keras/engine/topology.py", line 1299, in __init__
        name=self.name)
      File "/Users/perdue/miniconda2/envs/py2dl/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 349, in placeholder
        x = tf.placeholder(dtype, shape=shape, name=name)
      File "/Users/perdue/miniconda2/envs/py2dl/lib/python2.7/site-packages/tensorflow/python/ops/array_ops.py", line 1520, in placeholder
        name=name)
      File "/Users/perdue/miniconda2/envs/py2dl/lib/python2.7/site-packages/tensorflow/python/ops/gen_array_ops.py", line 2149, in _placeholder
        name=name)
      File "/Users/perdue/miniconda2/envs/py2dl/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 763, in apply_op
        op_def=op_def)
      File "/Users/perdue/miniconda2/envs/py2dl/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2395, in create_op
        original_op=self._default_original_op, op_def=op_def)
      File "/Users/perdue/miniconda2/envs/py2dl/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1264, in __init__
        self._traceback = _extract_stack()
    
    InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'z' with dtype float
    [[Node: z = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
    

    I walked through the code in the debugger for a while and could not figure out what was causing the crash. It isn't clear to me how the latent input (named 'z', and part of generator_inputs in the generator network definition) enters where it does, as the crash is here, in discriminator.train_on_batch:

    426
    427             real_batch_loss = discriminator.train_on_batch(
    428                 [image_batch_1, image_batch_2, image_batch_3, energy_batch],
    429                 disc_outputs_real,
    430                 loss_weights
    431             )
    432
    

    Any thoughts or suggestions would be deeply appreciated. Thanks!

    pax Gabe

    opened by gnperdue 2
  • load_weights

    The weights folder is missing. The Jupyter notebook tries to load the weights from a file called g_epoch_049.hdf5, which cannot be found anywhere in the repo. Also, most of the PDSF URLs are broken.

    opened by saras022 2
  • ValueError: The truth value of an array with more than one element is ambiguous

    Hi! I meet another problem:

    Traceback (most recent call last):
      File "/cvmfs/mlgpu.ihep.ac.cn/anaconda3/envs/tensorflow-gpu/lib/python3.6/runpy.py", line 193, in _run_module_as_main
        "__main__", mod_spec)
      File "/cvmfs/mlgpu.ihep.ac.cn/anaconda3/envs/tensorflow-gpu/lib/python3.6/runpy.py", line 85, in _run_code
        exec(code, run_globals)
      File "/hpcfs/bes/mlgpu/cuijb/CaloGAN/models/train.py", line 232, in <module>
        sparsity_mbd=True
      File "/hpcfs/bes/mlgpu/cuijb/CaloGAN/models/architectures.py", line 111, in build_discriminator
        empirical_sparsity = sparsity_detector(image)
      File "/cvmfs/mlgpu.ihep.ac.cn/anaconda3/envs/tensorflow-gpu/lib/python3.6/site-packages/keras/engine/topology.py", line 596, in __call__
        output = self.call(inputs, **kwargs)
      File "/cvmfs/mlgpu.ihep.ac.cn/anaconda3/envs/tensorflow-gpu/lib/python3.6/site-packages/keras/layers/core.py", line 652, in call
        return self.function(inputs, **arguments)
      File "/hpcfs/bes/mlgpu/cuijb/CaloGAN/models/ops.py", line 105, in sparsity_level
        K.cast(x > 0.0, K.floatx()), axis=np.arange(1, len(_shape))
      File "/cvmfs/mlgpu.ihep.ac.cn/anaconda3/envs/tensorflow-gpu/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 1189, in sum
        axis = _normalize_axis(axis, ndim(x))
      File "/cvmfs/mlgpu.ihep.ac.cn/anaconda3/envs/tensorflow-gpu/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 1134, in _normalize_axis
        if axis is not None and axis < 0:
    ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

    opened by Drjiabin 0
  • no module named 'ops'

    When I run this project, it fails with "no module named 'ops'". Could you tell me how to solve this problem? I guess this problem is caused by the version of tensorflow?

    opened by Drjiabin 5
Releases(v1.0)
  • v1.0(May 30, 2017)

    This is a stable version of the code used to generate GEANT4 simulation data, train and evaluate the CaloGAN model, and produce analysis plots for publication.

Owner
Deep Learning for HEP
Developing Deep Learning solutions for High Energy Physics