FluidNet rewritten with the ATen tensor library

Overview

fluidnet_cxx: Accelerating Fluid Simulation with Convolutional Neural Networks. A PyTorch/ATen Implementation.

This repository is based on the paper Accelerating Eulerian Fluid Simulation With Convolutional Networks by Jonathan Tompson, Kristofer Schlachter, Pablo Sprechmann and Ken Perlin, on the acceleration of fluid simulations by embedding a neural network in an existing solver for pressure prediction. The network replaces an expensive pressure projection linked to a Poisson equation on the pressure, which is usually solved with iterative methods (PCG or Jacobi). We implemented our code with PyTorch, effectively replacing the entire original Torch/Lua and C++/CUDA implementation of the inviscid, incompressible fluid solver (based on the open-source fluid simulator Mantaflow, aimed at the Computer Graphics community). Find the original FluidNet repository here.

We have taken the original FluidNet NN architecture and added several features, such as replacing upsampling with deconvolution layers, or replacing the complete architecture with a deeper multi-scale net, which showed more accurate results at the expense of inference speed.

This work allows comparing both the code performance when run on a single GPU and the accuracy of this data-driven method against traditional methods (Jacobi) or other fluid simulation methods such as Lattice Boltzmann Methods.

Results

Simulations of a buoyancy-driven plume flow are performed with different methods for the Poisson equation resolution. An inlet is placed at the bottom of the domain, where a lighter fluid (with density rho0) is injected with velocity v0 into a quiescent heavier fluid. Results show that some work is still needed to predict a correct plume growth rate, probably due to poor modelling of buoyant forces by the trained model.


Resolution with ConvNet | Jacobi Method 28 iter | Jacobi Method 100 iter

Growth Rate of the plume's head for Ri=0.14
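
For reference, the Richardson number Ri quoted in the caption compares buoyancy to inertial forces at the inlet. A common convention (the precise definition used for these runs may differ) is:

```latex
% rho_1: ambient (heavier) density, rho_0: injected density,
% L: inlet width, v_0: injection velocity, g: gravity.
Ri = \frac{g\,(\rho_1 - \rho_0)\,L}{\rho_0\, v_0^{2}}
```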

Functionalities:

  • NOTE: For the moment, only 2D simulations and training are supported. 3D still needs some work.
  • Full Eulerian (incompressible and inviscid) fluid simulator:
    • Momentum equation resolution using a splitting algorithm:
      • Advection of velocity + External forces
      • Enforcement of the divergence-free velocity constraint through Poisson equation resolution, resulting in a pressure gradient that corrects the velocity from the previous step. This step is replaced by a fully convolutional neural network with the velocity divergence as input and pressure as output.
    • Unconditionally stable MacCormack discretization of the velocity advection algorithm.
    • Jacobi method implementation for comparison.
  • Dataset:
    • Generation with FluidNet's own Mantaflow script.
    • Random insertion of objects and velocity emitters, as well as gravity forces.
    • Pre-processed into PyTorch objects
  • Pre-trained models:
  • Training:
    • Several options for loss function:
      • MSE of pressure
      • "Physical" loss: MSE of velocity divergence (unsupervised)
      • MSE of velocity divergence after several timesteps.
    • Short term divergence loss: 8 hours training
    • Short+Long term divergence loss: ~2 days
  • Inference. Two test cases:
    • Buoyant plume.
    • Rayleigh Taylor instability.
    • Launch your simulation with the available pre-trained model.
    • Comparison with Jacobi method resolution + LBM with the open-source C++ library Palabos
  • Results visualization:
    • Matplotlib
    • Paraview post-processing tool (VTK files)
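
The pressure-projection step listed above can be pictured with a minimal pure-Python sketch (illustrative only: collocated grid, zero-pressure boundaries, made-up names; the actual solver works on a staggered grid with geometry flags):

```python
N = 16          # illustrative grid resolution
h = 1.0 / N     # grid spacing

def divergence(u, v):
    """Central-difference divergence of a collocated (u, v) field."""
    div = [[0.0] * N for _ in range(N)]
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            div[i][j] = (u[i + 1][j] - u[i - 1][j]
                         + v[i][j + 1] - v[i][j - 1]) / (2 * h)
    return div

def jacobi_pressure(div, iters=100):
    """Jacobi iterations for lap(p) = div with zero-pressure boundaries.

    This is the step the trained ConvNet replaces: divergence in, pressure out.
    """
    p = [[0.0] * N for _ in range(N)]
    for _ in range(iters):
        nxt = [row[:] for row in p]
        for i in range(1, N - 1):
            for j in range(1, N - 1):
                nxt[i][j] = 0.25 * (p[i + 1][j] + p[i - 1][j]
                                    + p[i][j + 1] + p[i][j - 1]
                                    - h * h * div[i][j])
        p = nxt
    return p

def project(u, v, p):
    """Subtract grad(p) so the corrected velocity is (closer to) divergence-free."""
    for i in range(1, N - 1):
        for j in range(1, N - 1):
            u[i][j] -= (p[i + 1][j] - p[i - 1][j]) / (2 * h)
            v[i][j] -= (p[i][j + 1] - p[i][j - 1]) / (2 * h)
```

More Jacobi iterations give a more divergence-free field at a higher cost, which is exactly the trade-off the ConvNet is trained to short-circuit.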

Models

Requirements

  • Python 3.X
  • C++11
  • PyTorch 0.4 (including the ATen tensor library, which exposes the PyTorch backend in C++)
  • FluidNet's own Mantaflow implementation
  • PyVTK (pip install)
  • (Optional) Paraview
  • (Optional) OpenCV2

ATen allows writing generic code that works on both CPU and GPU devices. More information in the ATen repo. It can be called from PyTorch, using its new C++ extensions (extension-cpp).

Installation

To install this repo:

  1. Clone this repo:

git clone https://github.com/jolibrain/fluidnet_cxx.git

  2. Install PyTorch 0.4. NOTE: Training is done on GPUs.

  3. Install the C++ extensions for the fluid solver: the C++ scripts have been written using PyTorch's backend C++ library ATen. These scripts are used for the advection part of the solver. Follow these instructions from the main directory:

cd pytorch/lib/fluid/cpp
python3 setup.py install # if you want to install it on local user, use --user

Training

Dataset. We use the same 2D dataset as the original FluidNet (Section 1: Generating the data - Generating training data, generated with Mantaflow) for training our ConvNet.

Running the training. To train the model, go to the pytorch folder:

cd pytorch

The dataset should be located in the <dataDir> folder with the following structure:

.
└── dataDir
    └── dataset
        ├── te
        └── tr

Specify the location of the dataset in pytorch/config.yaml by writing the folder location at dataDir (use absolute paths). Also specify dataset (name of the dataset), the output folder modelDir where the trained model and loss logs will be stored, and the model name modelFilename.
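
A minimal pytorch/config.yaml fragment covering these entries might look as follows (values are placeholders; only the key names come from the description above):

```yaml
dataDir: /absolute/path/to/dataDir    # dataset root (absolute path)
dataset: dataset                      # name of the dataset folder
modelDir: /absolute/path/to/modelDir  # output folder for model and loss logs
modelFilename: convModel              # name under which the model is saved
```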

Run the training:

python3 fluid_net_train.py

For a given dataset, a pre-processing operation must be performed to save it as PyTorch objects that can be easily loaded when training. This is done automatically if no pre-processing log is detected. The process can take some time, but it is only necessary once per dataset.

Training can be stopped using Ctrl+C and then resumed by running:

python3 fluid_net_train.py --resume

You can also monitor the loss during training by running, from /pytorch:

python3 plot_loss.py <modelDir> #For total training and validation losses
#or
python3 plot_5loss.py <modelDir> #For each of the losses (e.g: L1(div) and L2(div))

It is also possible to load the saved model and print its output fields and compare it to targets (Pressure, Velocity, Divergence and Errors):

python3 print_output.py <modelDir> <modelFilename>
#example:
python3 print_output.py data/model_pLoss_L2 convModel

Training options

You can set the following options for training from the terminal command line:

  • -h : displays help message
  • --trainingConf : YAML config file for training. Default = config.yaml.
  • --modelDir : Output folder location for trained model. When resuming, reads from this location.
  • --modelFilename : Model name.
  • --dataDir : Dataset location.
  • --resume : Resumes training from checkpoint in modelDir
  • --bsz : Batch size for training.
  • --maxEpochs : Maximum number of training epochs.
  • --noShuffle : Disable dataset shuffling during training.
  • --lr : Learning rate.
  • --numWorkers : Number of parallel workers for dataset loading.
  • --outMode : Training debug options; prints or shows the validation dataset. save = saves plots to disk, show = shows plots in a window during training, none = do nothing.

The rest of the training parameters are set in the trainingConf file, by default config.yaml.

Parameters in the YAML config file are copied into a Python dictionary and saved as two separate dictionaries in modelDir: one conf dictionary for parameters related to training (batch size, maximum number of epochs) and one mconf dictionary for parameters related to the model (inputs, losses, scaling options, etc.).
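
The split can be sketched as follows (key names are illustrative, not the repo's actual YAML schema):

```python
# Hypothetical sketch of splitting the parsed YAML into the two
# dictionaries saved in modelDir. The YAML parsing and the PyTorch
# serialization are done elsewhere; key names here are made up.
def split_config(raw):
    """Separate training parameters (conf) from model parameters (mconf)."""
    model_keys = {"inputChannels", "lossTerms", "normalizeInput"}
    mconf = {k: v for k, v in raw.items() if k in model_keys}
    conf = {k: v for k, v in raw.items() if k not in model_keys}
    return conf, mconf

raw = {"batchSize": 64, "maxEpochs": 1000,
       "inputChannels": 3, "lossTerms": ["L2(p)"], "normalizeInput": True}
conf, mconf = split_config(raw)
```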

Test

Run the buoyant plume test case by running:

cd pytorch
python3 plume.py --modelDir <modelDir> --modelFilename <modelFilename> --outputFolder <outputFolder>

with:

  • <modelDir> : folder with trained model.
  • <modelFilename> : Trained model name.
  • <outputFolder> : Folder for saving simulation results.

You can also stop the simulation (Ctrl+C) and restart it afterwards:

python3 plume.py --restartSim

Test options

  • -h : displays help message
  • --simConf : YAML config file for simulation. Default = plumeConfig.yaml.
  • --trainingConf : YAML config file for training. Default = config.yaml.
  • --modelDir : Trained model location.
  • --modelFilename : Model name.
  • --outputFolder : Location of output results.
  • --restartSim : Restart simulation from checkpoint in <outputFolder>.

Check plumeConfig.yaml to see how the configuration file for the simulation is organized.

Modifying the NN architecture

If you want to try your own architecture, you only have to follow these simple rules:

  • Write your model in a separate script and save it inside pytorch/lib.
  • Open model.py and import your own script as a module. Go to class FluidNet here.
  • Ideally, as with the Multi-Scale Net example, you should just have to specify the number of input channels and add your net's forward pass, as in the Multi-Scale example here

Extending the cpp code:

The cpp code, written with the ATen library, can be compiled, tested and run on its own. You will need OpenCV2 to visualize the output pressure and velocity fields, as matplotlib is unfortunately not available in cpp!

Test

First, generate the test data from FluidNet (Section 3: Limitations of the current system - Unit Testing) and write the location of your folder in:

solver_cpp/test/test_fluid.cpp
#define DATA <path_to_data>

Run the following commands:

cd solver_cpp/
mkdir build_test
cd build_test
cmake .. -DFLUID_TEST=ON # Default is OFF
make
./test/fluidnet_sim

This will test every routine of the solver (advection, divergence calculation, velocity update, adding of gravity and buoyancy, linear system resolution with the Jacobi method). These tests are taken from FluidNet and compare Manta's outputs to ours, except for advection, for which there is no Manta equivalent. In that case, we compare against the original FluidNet advection.

Run

cd solver_cpp/
mkdir build
cd build
cmake .. -DFLUID_TEST=OFF # Default is OFF
make
./simulate/fluidnet_sim

Output images will be written in the build folder, and can be converted into a gif using ImageMagick.

NOTE: For the moment, only 2D simulations and training are supported, as bugs are still being found in the 3D advection.

Owner

JoliBrain