Finite basis physics-informed neural networks (FBPINNs)


This repository reproduces the results of the paper Finite Basis Physics-Informed Neural Networks (FBPINNs): a scalable domain decomposition approach for solving differential equations, B. Moseley, T. Nissen-Meyer and A. Markham, arXiv preprint, July 2021.


Key contributions

  • Physics-informed neural networks (PINNs) offer a powerful new paradigm for solving problems relating to differential equations
  • However, a key limitation is that PINNs struggle to scale to problems with large domains and/or multi-scale solutions
  • We present finite basis physics-informed neural networks (FBPINNs), which are able to scale to these problems
  • To do so, FBPINNs use a combination of domain decomposition, subdomain normalisation and flexible training schedules
  • FBPINNs outperform PINNs in terms of accuracy and computational resources required

Workflow

FBPINNs divide the problem domain into many small, overlapping subdomains. A separate neural network is placed in each subdomain: in the center of its subdomain each network learns the full solution, whilst in the overlapping regions the solution is defined as the sum over all overlapping networks.

We use smooth, differentiable window functions to locally confine each network to its subdomain, and the inputs of each network are individually normalised over the subdomain.

In comparison to existing domain decomposition techniques, FBPINNs do not require additional interface terms in their loss function, and they ensure the solution is continuous across subdomain interfaces by the construction of their solution ansatz.
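
To make the ansatz concrete, below is a minimal, illustrative sketch in 1D (written for this README, not taken from the repository code): each subdomain owns a small fully-connected network, a smooth window confines it to its subdomain, inputs are normalised to [-1, 1] per subdomain, and the global solution is the sum of the windowed networks. The helper names (SubdomainNet, window, fbpinn_solution) and the sigmoid-based window shape are assumptions for illustration only; the repository and paper define their own window functions and additionally apply a constraining operator to hard-enforce boundary conditions.

import numpy as np
import torch
import torch.nn as nn

class SubdomainNet(nn.Module):
    # small fully-connected network owned by a single subdomain (hypothetical helper)
    def __init__(self, n_hidden=16, n_layers=2):
        super().__init__()
        layers, n_in = [], 1
        for _ in range(n_layers):
            layers += [nn.Linear(n_in, n_hidden), nn.Tanh()]
            n_in = n_hidden
        layers += [nn.Linear(n_in, 1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

def window(x, xmin, xmax):
    # smooth, differentiable window that is ~1 inside [xmin, xmax] and decays
    # to zero outside it (an illustrative choice, not the paper's definition)
    return torch.sigmoid(5*(x - xmin)) * torch.sigmoid(-5*(x - xmax))

def fbpinn_solution(x, nets, subdomains):
    # global solution = sum of windowed subdomain networks, each evaluated on
    # inputs normalised to [-1, 1] over its own subdomain
    u = torch.zeros_like(x)
    for net, (xmin, xmax) in zip(nets, subdomains):
        xn = 2*(x - xmin)/(xmax - xmin) - 1  # per-subdomain input normalisation
        u = u + window(x, xmin, xmax) * net(xn)
    return u

# illustrative usage: four overlapping subdomains covering [-2*pi, 2*pi]
xs = np.linspace(-2*np.pi, 2*np.pi, 5)
subdomains = [(xs[i] - 1.0, xs[i+1] + 1.0) for i in range(len(xs) - 1)]
nets = [SubdomainNet() for _ in subdomains]
x = torch.linspace(-2*np.pi, 2*np.pi, 200).reshape(-1, 1)
u = fbpinn_solution(x, nets, subdomains)  # differentiable w.r.t. network parameters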

Installation

FBPINNs only requires a handful of standard Python libraries to run.

We recommend setting up a new environment, for example:

conda create -n fbpinns python=3  # Use conda package manager
conda activate fbpinns

and then installing the following libraries:

conda install scipy matplotlib jupyter
conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch
pip install tensorboardX

All of our work was completed using PyTorch version 1.8.1 with CUDA 10.2.
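
To check the environment after installation, you can (for example) verify the PyTorch and CUDA versions from Python:

import torch
print(torch.__version__)           # we used 1.8.1
print(torch.version.cuda)          # we used CUDA 10.2
print(torch.cuda.is_available())   # True if a compatible GPU and driver are found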

Finally, download the source code:

git clone https://github.com/benmoseley/FBPINNs.git

Getting started

The workflow to train and compare FBPINNs and PINNs is very simple to set up, and consists of three steps:

  1. Initialise a problems.Problem class, which defines the differential equation (and boundary condition) you want to solve.
  2. Initialise a constants.Constants object, which defines all of the other training hyperparameters (domain, number of subdomains, training schedule, etc.).
  3. Pass this Constants object to the main.FBPINNTrainer or main.PINNTrainer class and call the .train() method to start training.

For example, to solve the 1D problem du/dx = cos(wx), you can use the following code to train an FBPINN and a PINN:

import numpy as np

# these modules live in the fbpinns/ directory of the repository
import problems
import constants
import active_schedulers
import main

P = problems.Cos1D_1(w=1, A=0)  # initialise problem class

c1 = constants.Constants(
            RUN="FBPINN_%s"%(P.name),  # run name
            P=P,  # problem class
            SUBDOMAIN_XS=[np.linspace(-2*np.pi,2*np.pi,5)],  # defines subdomains
            SUBDOMAIN_WS=[2*np.ones(5)],  # defines width of overlapping regions between subdomains
            BOUNDARY_N=(1/P.w,),  # optional arguments passed to the constraining operator
            Y_N=(0,1/P.w,),  # defines unnormalisation
            ACTIVE_SCHEDULER=active_schedulers.AllActiveSchedulerND,  # training scheduler
            ACTIVE_SCHEDULER_ARGS=(),  # training scheduler arguments
            N_HIDDEN=16,  # number of hidden units in subdomain network
            N_LAYERS=2,  # number of hidden layers in subdomain network
            BATCH_SIZE=(200,),  # number of training points
            N_STEPS=5000,  # number of training steps
            BATCH_SIZE_TEST=(400,),  # number of testing points
            )

run = main.FBPINNTrainer(c1)  # train FBPINN
run.train()

c2 = constants.Constants(
            RUN="PINN_%s"%(P.name),
            P=P,
            SUBDOMAIN_XS=[np.linspace(-2*np.pi,2*np.pi,5)],
            BOUNDARY_N=(1/P.w,),
            Y_N=(0,1/P.w,),
            N_HIDDEN=32,
            N_LAYERS=3,
            BATCH_SIZE=(200,),
            N_STEPS=5000,
            BATCH_SIZE_TEST=(400,),
            )

run = main.PINNTrainer(c2)  # train PINN
run.train()

The training code will automatically start outputting training statistics, plots and tensorboard summaries. The tensorboard summaries can be viewed by installing tensorboard and then running tensorboard --logdir fbpinns/results/summaries/ from the command line.
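
For example:

pip install tensorboard
tensorboard --logdir fbpinns/results/summaries/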

Defining your own problems.Problem class

To learn how to define and solve your own problem, see the Defining your own problem Jupyter notebook included in the repository.
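
As a purely hypothetical illustration of the kind of information a problem class bundles together (the class and method names below are placeholders, not the repository's actual problems.Problem interface; see the notebook and fbpinns/problems.py for the real one):

import torch

class MyCos1DProblem:
    # hypothetical problem du/dx = cos(w*x) with boundary condition u(0) = 0
    def __init__(self, w=1):
        self.w = w
        self.name = "MyCos1D_w%s" % w

    def residual(self, x, u, dudx):
        # placeholder method: residual of the differential equation,
        # which the physics-informed loss drives towards zero
        return dudx - torch.cos(self.w * x)

    def exact_solution(self, x):
        # placeholder method: analytical solution, used only for testing
        return torch.sin(self.w * x) / self.w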

Reproducing our results

The purpose of each folder is as follows:

  • fbpinns : contains the main code which defines and trains FBPINNs.
  • analytical_solutions : contains a copy of the BURGERS_SOLUTION code used to compute the exact solution to the Burgers equation problem.
  • seismic-cpml : contains a Python implementation of the SEISMIC_CPML finite-difference (FD) library, used to solve the wave equation problem.
  • shared_modules : contains generic Python helper functions and classes.

To reproduce the results in the paper, use the following steps:

  1. Run the scripts fbpinns/paper_main_1D.py, fbpinns/paper_main_2D.py and fbpinns/paper_main_3D.py (for example using the commands shown after this list). These train and save all of the FBPINNs and PINNs presented in the paper.
  2. Run the notebook fbpinns/Paper plots.ipynb. This generates all of the plots in the paper.
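
For example, assuming the scripts and notebook are run from inside the fbpinns/ directory:

cd fbpinns
python paper_main_1D.py
python paper_main_2D.py
python paper_main_3D.py
jupyter notebook "Paper plots.ipynb"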

Further questions?

Please raise a GitHub issue or feel free to contact us.
