Overview
(Logo: docs/_static/final_logo.png)

A Python package for simulating the quasi-2D pseudospin-1/2 Gross-Pitaevskii equation with NVIDIA GPU acceleration.

Introduction

spinor-gpe is a high-level, object-oriented Python package for numerically solving the quasi-2D, pseudospinor (two-component) Gross-Pitaevskii equation (GPE), both for ground-state solutions and for real-time dynamics. This project grew out of a desire to make high-performance simulations of the GPE more accessible to researchers entering the field.

While this package is primarily built on NumPy, the main computational heavy lifting is performed using PyTorch, a deep neural network library commonly used in machine learning applications. PyTorch has a NumPy-like interface, but a backend that can run either on a conventional processor or on a CUDA-enabled NVIDIA(R) graphics card. Accessing a CUDA device provides significant hardware acceleration of the simulations.
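For example, generic PyTorch code (not specific to spinor-gpe) can select the GPU when one is available and otherwise fall back to the CPU:

>>> import torch
>>> # Use the CUDA device if one is available; otherwise fall back to the CPU
>>> device = 'cuda' if torch.cuda.is_available() else 'cpu'
>>> x = torch.rand(5, 3, device=device)  # tensor allocated on the selected device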

This package has been tested on Windows, Mac, and Linux systems.

View the documentation on ReadTheDocs

Installation

Dependencies

Primary packages:

  1. PyTorch >= 1.8.0
  2. cudatoolkit >= 11.1
  3. NumPy

Other packages:

  1. matplotlib (visualizing results)
  2. tqdm (progress messages)
  3. scikit-image (matrix signal processing)
  4. ffmpeg = 4.3.1 (animation generation)

Installing Dependencies

The dependencies for spinor-gpe can be installed directly into a new conda virtual environment named spinor using the environment.yml file included with the package:

conda env create --file environment.yml

This installation may take a while.

Note

The version of CUDA used in this package does not support macOS. Users on these computers may still install PyTorch and run the examples on their CPU. To install correctly on macOS, remove the - cudatoolkit=11.1 line from the environment.yml file. After installation, you will need to modify the example code to run on the cpu device instead of the cuda device.
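For example, in the propagation calls shown under Basic Operation below, only the device argument changes (a minimal sketch using the variables defined there):

>>> DEVICE = 'cpu'  # use the CPU instead of 'cuda'
>>> res = ps.imaginary(DT, N_STEPS, DEVICE, is_sampling=True, n_samples=50)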

The above dependencies can also be installed manually using conda into a virtual environment:

conda create --name <new_virt_env_name>
conda activate <new_virt_env_name>
conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c conda-forge
conda install numpy matplotlib tqdm scikit-image ffmpeg spyder

Note

For more information on installing PyTorch, see its installation instructions page.

To verify that PyTorch was installed correctly, you should be able to import it and create a random tensor (your values will differ):

>>> import torch
>>> x = torch.rand(5, 3)
>>> print(x)
tensor([[0.2757, 0.3957, 0.9074],
        [0.6304, 0.1279, 0.7565],
        [0.0946, 0.7667, 0.2934],
        [0.9395, 0.4782, 0.9530],
        [0.2400, 0.0020, 0.9569]])

Also, if you have an NVIDIA GPU, you can test that it is available for GPU computing:

>>> torch.cuda.is_available()
True

CUDA Installation

CUDA is the API that interfaces with the computing resources on NVIDIA graphics cards, and it can be accessed through the PyTorch package. If your computer has an NVIDIA graphics card, start by verifying that it is CUDA-compatible. This page lists the compute capability of many NVIDIA devices. (Note: your card may still be CUDA-compatible even if it is not listed there.)
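If PyTorch and a suitable NVIDIA driver are already installed, you can also query the device from Python; this is a generic PyTorch check, not part of spinor-gpe:

>>> import torch
>>> torch.cuda.get_device_name(0)        # name of the first CUDA device
>>> torch.cuda.get_device_capability(0)  # compute capability, e.g. (8, 6)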

Given that your graphics card can run CUDA, the following are the steps to install CUDA on a Windows computer:

  1. Install the NVIDIA CUDA Toolkit. Go to the CUDA download page for the most recent version. Select the operating system options and installer type. Download the installer and install it via the wizard on the screen. This may take a while. For reference, here is the Windows CUDA Toolkit installation guide.

    To test that CUDA is installed, run which nvcc; if installed correctly, it will return the installation path. Also run nvcc --version to verify that the CUDA version matches the PyTorch CUDA toolkit version (>=11.1).

  2. Download the correct drivers for your NVIDIA device. Once the driver is installed, you will have the NVIDIA Control Panel installed on your computer.

Getting Started

  1. Clone the repository.
  2. Navigate to the spinor_gpe/examples/ directory, and start to experiment with the examples there.

Basic Operation

This package has a simple, object-oriented interface for imaginary- and real-time propagation of the pseudospinor GPE. While the package offers other operations and features, all simulations will have the following basic structure:

1. Setup: Data path and PSpinor object

>>> import pspinor as spin
>>> DATA_PATH = '<project_name>/Trial_###'
>>> ps = spin.PSpinor(DATA_PATH)

The program will create a new directory DATA_PATH, in which the data and results from this simulation trial will be saved. If DATA_PATH is a relative path, as shown above, then the trial data will be located in the /data/ folder. When working with multiple simulation projects, it can be helpful to specify a <project_name> directory; the Trial_### form is convenient, but not strictly required.

2. Run: Begin Propagation

The example below demonstrates imaginary-time propagation. The method PSpinor.imaginary performs the propagation loop and returns a PropResult object. This object contains the results, including the final wavefunctions and populations, and provides analysis and plotting methods (described below).

>>> DT = 1/50
>>> N_STEPS = 1000
>>> DEVICE = 'cuda'
>>> res = ps.imaginary(DT, N_STEPS, DEVICE, is_sampling=True, n_samples=50)

For real-time propagation, use the method PSpinor.real.
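A real-time segment is set up in the same way; the sketch below assumes PSpinor.real accepts the same arguments as PSpinor.imaginary (check the API documentation for the exact signature):

>>> res = ps.real(DT, N_STEPS, DEVICE, is_sampling=True, n_samples=50)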

3. Analyze: Plot the results

PropResult provides several methods for viewing and understanding the final results. The code block below demonstrates several of them:

>>> res.plot_spins()  # Plots the spin-dependent densities and phases.
>>> res.plot_total()  # Plots the total densities and phases.
>>> res.plot_pops()   # Plots the spin populations throughout the propagation.
>>> res.make_movie()  # Generates a movie from the sampled wavefunctions.

Note that PSpinor also exposes methods to plot the spin and total densities. These can be used independently of PropResult:

>>> ps.plot_spins()

4. Repeat

You will likely want to repeat or chain together different segments of this structure. Demonstrations of this are shown in the Examples gallery.
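For instance, a common pattern is to relax to a ground state in imaginary time and then evolve that state in real time, reusing the same PSpinor object. The sketch below makes the same assumption about PSpinor.real as above:

>>> res_gs = ps.imaginary(DT, N_STEPS, DEVICE, is_sampling=True, n_samples=50)
>>> # ... optionally modify the system here (e.g. the trapping potential) ...
>>> res_dyn = ps.real(DT, N_STEPS, DEVICE, is_sampling=True, n_samples=50)
>>> res_dyn.plot_spins()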
