DiSECt: Differentiable Simulator for Robotic Cutting

Related tags

Deep Learning, DiSECt
Overview

DiSECt: Differentiable Simulator for Robotic Cutting

Website | Paper | Dataset | Video | Blog post

(Animation: potato slicing)

DiSECt is a simulator for the cutting of deformable materials. It uses the Finite Element Method (FEM) to simulate the deformation of the material, and leverages a virtual node algorithm to introduce springs between the two halves of the mesh being cut. These cutting springs are weakened in proportion to the knife forces acting on the material, yielding a continuous model of deformation and crack propagation. By leveraging source code transformation, the back-end of DiSECt automatically generates CUDA-accelerated kernels for the forward simulation and the gradients of the simulation inputs. Such gradient information can be used to optimize the simulation parameters to achieve accurate knife force predictions, optimize cutting actions, and more.
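
To make the continuous damage model concrete, here is a minimal PyTorch sketch of the idea (this is not DiSECt's implementation; the spring count, damage-rate parameter, and knife-force values are made-up placeholders): each cutting spring's stiffness is reduced in proportion to the knife force acting on it, and because the update uses only differentiable operations, gradients flow back to the damage parameter.

# Conceptual sketch only; not DiSECt's actual kernels. It shows how a
# per-spring stiffness, weakened in proportion to the knife force, stays
# differentiable with respect to a material parameter.
import torch

damage_rate = torch.tensor(0.02, requires_grad=True)  # assumed material parameter
stiffness = torch.full((224,), 500.0)                  # per-spring stiffness (placeholder)
knife_force = torch.linspace(0.0, 10.0, 224)           # per-spring knife force (placeholder)
dt = 1e-3

for _ in range(50):  # a few pseudo time steps
    stiffness = torch.clamp(stiffness - damage_rate * knife_force * dt, min=0.0)

# Any scalar built from the weakened stiffnesses can be differentiated
# with respect to the damage-rate parameter.
stiffness.sum().backward()
print(damage_rate.grad)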

Prerequisites

  • Python 3.6 or higher
  • PyTorch 1.4.0 or higher
  • Pixar USD lib (for visualization)

Pre-built USD Python libraries can be downloaded from https://developer.nvidia.com/usd; once downloaded, follow the instructions there to add them to your PYTHONPATH environment variable. Besides the provided basic visualizer implemented with pyvista, DiSECt can generate USD files for rendering, e.g. in NVIDIA Omniverse™ or usdview.
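
If you are unsure whether the USD libraries are visible to Python, a quick sanity check (a minimal sketch using the standard pxr API, independent of DiSECt) is to create an empty stage:

# Sanity check that the Pixar USD Python bindings are on PYTHONPATH.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateInMemory()
UsdGeom.Xform.Define(stage, "/Root")
print(stage.GetRootLayer().ExportToString())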

Using the built-in backend

By default, the simulation back-end uses the built-in PyTorch cpp-extensions mechanism to compile auto-generated simulation kernels.

  • Windows users should ensure they have Visual Studio 2019 installed
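
For reference, the underlying mechanism is PyTorch's JIT C++ extension loader. The toy kernel below is purely illustrative (it is not one of DiSECt's generated kernels); it only shows how torch.utils.cpp_extension.load_inline compiles and binds C++ code on the fly, which is the path the auto-generated simulation kernels go through:

# Toy example of the cpp-extensions mechanism; DiSECt's back-end generates
# and compiles its real (CPU and CUDA) simulation kernels through the same API.
import torch
from torch.utils.cpp_extension import load_inline

cpp_source = """
#include <torch/extension.h>
torch::Tensor scale(torch::Tensor x, double s) { return x * s; }
"""

demo = load_inline(name="demo_kernels", cpp_sources=cpp_source, functions=["scale"])
print(demo.scale(torch.ones(3), 2.0))  # tensor([2., 2., 2.])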

Installation

Dataset

To set up our dataset of meshes, simulated knife forces, and nodal motion fields recorded in the ANSYS LS-DYNA simulator, download this zip file (96 MB) and extract it in the project folder, so that the dataset folder is at the top level.

In the dataset folder, we provide a README.md file with more details on the contents of this dataset. The dataset is released under the Creative Commons Attribution-NonCommercial 4.0 International License.

Python dependencies

Next, set up the Python dependencies listed in requirements.txt via

pip install -r requirements.txt

Mesh processing library

See meshing/README.md for instructions on how to install the recommended C++-based mesh cutting library that DiSECt relies on to process meshes.

Mesh discretization

For mesh discretization, we provide an example script in cutting/tetrahedralization.py based on the Wildmeshing Python API. It generates a tetrahedral mesh from a triangle surface mesh so that the geometry can be used in the FEM simulator.
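
A minimal sketch of the underlying Wildmeshing call is shown below (assuming the wildmeshing pip package; the input file name is a placeholder, and the options used by cutting/tetrahedralization.py may differ):

# Minimal Wildmeshing usage sketch; "surface.obj" is a placeholder input mesh.
import wildmeshing as wm

tetra = wm.Tetrahedralizer(stop_quality=500)  # trades off mesh quality vs. runtime
tetra.load_mesh("surface.obj")                # triangle surface mesh
tetra.tetrahedralize()
vertices, tets = tetra.get_tet_mesh()         # arrays of shape (n, 3) and (m, 4)
print(vertices.shape, tets.shape)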

Examples

The following demos are provided and can be executed via python examples/<example_name>.py.

  • basic_cutting: Cutting a prism shape with a knife following a slicing motion, running in the interactive pyvista 3D visualizer
  • render_usd: Demonstrates how to generate a USD file from the simulation
  • optimize_slicing: Constrained optimization via MDMM to find a slicing motion of the knife that minimizes force while adhering to blade-length and knife-height constraints (see the MDMM sketch below)
  • parameter_inference: Optimizes simulation parameters to match a knife force profile from one of the measurements in our dataset
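
For readers unfamiliar with MDMM (the modified differential method of multipliers used in optimize_slicing), the toy problem below shows the basic pattern in plain PyTorch: gradient descent on the decision variables, gradient ascent on a Lagrange multiplier, and a quadratic damping term. It only illustrates the technique on a made-up equality constraint and is not DiSECt's optimizer or knife-motion parametrization.

# Toy MDMM example: minimize x1^2 + x2^2 subject to x1 + x2 = 1.
import torch

x = torch.tensor([2.0, 2.0], requires_grad=True)
lam = torch.tensor(0.0, requires_grad=True)  # Lagrange multiplier
damping = 10.0                               # MDMM quadratic damping coefficient
opt = torch.optim.SGD([x, lam], lr=0.05)

for _ in range(1000):
    cost = (x ** 2).sum()
    g = x.sum() - 1.0                        # equality constraint g(x) = 0
    lagrangian = cost + lam * g + 0.5 * damping * g ** 2
    opt.zero_grad()
    lagrangian.backward()
    lam.grad.neg_()                          # ascend on the multiplier
    opt.step()

print(x.detach())                            # close to [0.5, 0.5]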

Citation

@INPROCEEDINGS{heiden2021disect,
    AUTHOR    = {Eric Heiden AND Miles Macklin AND Yashraj S Narang AND Dieter Fox AND Animesh Garg AND Fabio Ramos},
    TITLE     = {{DiSECt: A Differentiable Simulation Engine for Autonomous Robotic Cutting}},
    BOOKTITLE = {Proceedings of Robotics: Science and Systems},
    YEAR      = {2021},
    ADDRESS   = {Virtual},
    MONTH     = {July},
    DOI       = {10.15607/RSS.2021.XVII.067}
}

License

Copyright © 2021, NVIDIA Corporation. All rights reserved.

This work is made available under the NVIDIA Source Code License.

You might also like...
Get a Grip! - A robotic system for remote clinical environments.

Get a Grip! Within clinical environments, sterilization is an essential procedure for disinfecting surgical and medical instruments. For our engineeri

Look Closer: Bridging Egocentric and Third-Person Views with Transformers for Robotic Manipulation

Look Closer: Bridging Egocentric and Third-Person Views with Transformers for Robotic Manipulation Official PyTorch implementation for the paper Look

Axel - 3D printed robotic hands controlled with a Raspberry Pi and Arduino combo

Axel It's our graduation project about 3D printed robotic hands and they control

A robotic arm that mimics hand movement through MediaPipe tracking.

La-Z-Arm A robotic arm that mimics hand movement through MediaPipe tracking. Hardware NVidia Jetson Nano Sparkfun Pi Servo Shield Micro Servos Webcam

Building Ellee — A GPT-3 and Computer Vision Powered Talking Robotic Teddy Bear With Human Level Conversation Intelligence

Using an object detection and facial recognition system built on MobileNetSSDV2 and Dlib and running on an NVIDIA Jetson Nano, a GPT-3 model, Google Speech Recognition, Amazon Polly and servo motors, I built Ellee - a robotic teddy bear who can move her head and converse naturally.

TACTO: A Fast, Flexible and Open-source Simulator for High-Resolution Vision-based Tactile Sensors

TACTO: A Fast, Flexible and Open-source Simulator for High-Resolution Vision-based Tactile Sensors This package provides a simulator for vision-based

A data-driven maritime port simulator

PySeidon - A Data-Driven Maritime Port Simulator 🌊 Extendable and modular software for maritime port simulation. This software uses entity-component

A TensorFlow implementation of SOFA, the Simulator for OFfline LeArning and evaluation.

SOFA This repository is the implementation of SOFA, the Simulator for OFfline leArning and evaluation. Keeping Dataset Biases out of the Simulation: A

Customizable RecSys Simulator for OpenAI Gym

gym-recsys: Customizable RecSys Simulator for OpenAI Gym Installation | How to use | Examples | Citation This package describes an OpenAI Gym interfac

Comments
  • Error when reproducing the demo on DiSECt

    Hi Heiden, thank you for sharing such awesome work with DiSECt; the codebase is solid and cool. However, I ran into some bugs when trying to run python examples/basic_cutting.py.

    The following is my bug info. It seems that it cannot find kernels.so:

    python examples/basic_cutting.py
    Rebuilding kernels
    /home/anabur/anaconda3/envs/robot/lib/python3.6/site-packages/torch/utils/cpp_extension.py:298: UserWarning:

                                   !! WARNING !!

    !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
    Your compiler (clang++-9) is not compatible with the compiler Pytorch was built with for this platform, which is g++ on linux. Please use g++ to to compile your extension. Alternatively, you may compile PyTorch from source using clang++-9, and then you can also use clang++-9 to compile your extension.

    See https://github.com/pytorch/pytorch/blob/master/CONTRIBUTING.md for help with compiling PyTorch from source.
    !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

                                   !! WARNING !!

      platform=sys.platform))
    Detected CUDA files, patching ldflags
    Emitting ninja build file /media/anabur/E/robot_similation/DiSECt/dflex/kernels/build.ninja...
    Building extension module kernels...
    Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
    1.10.2.git.kitware.jobserver-1
    Loading extension module kernels...
    Traceback (most recent call last):
      File "examples/basic_cutting.py", line 20, in <module>
        from cutting import load_settings, SlicingMotion, CuttingSim
      File "/media/anabur/E/robot_similation/DiSECt/cutting/__init__.py", line 10, in <module>
        from .cutting_sim import *
      File "/media/anabur/E/robot_similation/DiSECt/cutting/cutting_sim.py", line 25, in <module>
        from cutting.urdf_loader import load_urdf
      File "/media/anabur/E/robot_similation/DiSECt/cutting/urdf_loader.py", line 12, in <module>
        import dflex as df
      File "/media/anabur/E/robot_similation/DiSECt/dflex/__init__.py", line 15, in <module>
        kernel_init()
      File "/media/anabur/E/robot_similation/DiSECt/dflex/sim.py", line 47, in kernel_init
        kernels = df.compile()
      File "/media/anabur/E/robot_similation/DiSECt/dflex/adjoint.py", line 1934, in compile
        with_pytorch_error_handling=False)
      File "/home/anabur/anaconda3/envs/robot/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1285, in load_inline
        keep_intermediates=keep_intermediates)
      File "/home/anabur/anaconda3/envs/robot/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1362, in _jit_compile
        return _import_module_from_library(name, build_directory, is_python_module)
      File "/home/anabur/anaconda3/envs/robot/lib/python3.6/site-packages/torch/utils/cpp_extension.py", line 1752, in _import_module_from_library
        module = importlib.util.module_from_spec(spec)
    ImportError: /media/anabur/E/robot_similation/DiSECt/dflex/kernels/kernels.so: cannot open shared object file: No such file or directory

    opened by ANABUR920 2
  • Error on `examples/render_usd.py` with leaf `Variable` and gradients

    I have installed DiSECt on my system:

    • Ubuntu 18.04
    • NVIDIA GeForce RTX 3090 GPU
    • Conda environment with Python 3.7 (find the conda list here)
    • nvcc --version gives me CUDA 11.2.
    • The dataset/ folder is located in the home directory.

    Three of the example scripts seem to be running without errors or warnings.

    python examples/basic_cutting.py
    python examples/optimize_slicing.py
    python examples/parameter_inference.py
    

    The exception is this fourth example:

    (disect) [email protected]:~/DiSECt (main) $ python examples/render_usd.py 
    Using cached kernels
    Using log folder at "/home/seita/DiSECt/log".
    Converted Young's modulus 43000.0 and Poisson's ratio 0.49 to Lame parameters mu = 14429.530201342282 and lambda = 707046.9798657711
    PyANSYS MAPDL Result file object
    Title       : Cutting_v5--Static Structural (B5)
    Units       : User Defined
    Version     : 20.2
    Cyclic      : False
    Result Sets : 1
    Nodes       : 797
    Elements    : 3562
    
    
    Available Results:
    ENS : Nodal stresses
    ENG : Element energies and volume
    EEL : Nodal elastic strains
    EUL : Element euler angles
    EPT : Nodal temperatures
    NSL : Nodal displacements
    RF  : Nodal reaction forces
    
    ANSYS Mesh
      Number of Nodes:              797
      Number of Elements:           3562
      Number of Element Types:      2
      Number of Node Components:    1
      Number of Element Components: 0
    
    Loaded mesh with 797 vertices and 3472 tets.
    Creating free-floating knife
    cut_meshing_cpp took 2.58 ms
    224 cut springs have been inserted.
    /home/seita/DiSECt/dflex/model.py:2223: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at  /opt/conda/conda-bld/pytorch_1634272178570/work/torch/csrc/utils/tensor_new.cpp:201.)
      m.shape_transform = torch.tensor(transform_flatten_list(self.shape_transform), dtype=torch.float32, device=adapter)
    self.cut_edge_indices: (448, 2)
    self.cut_spring_indices: (224, 2)
    self.cut_virtual_tri_indices: (790, 3)
    self.cut_edge_indices: (448, 2)
    self.cut_spring_indices: (224, 2)
    self.cut_virtual_tri_indices: (790, 3)
    render_demo:   0%|                                                                             | 0/40 [00:00<?, ?it/s]
    Traceback (most recent call last):
      File "examples/render_usd.py", line 54, in <module>
        sim.simulate(render=True)
      File "/home/seita/DiSECt/cutting/cutting_sim.py", line 708, in simulate
        self.simulation_step()
      File "/home/seita/DiSECt/cutting/cutting_sim.py", line 650, in simulation_step
        update_mass_matrix=False)
      File "/home/seita/DiSECt/dflex/sim.py", line 2912, in forward
        state_in.joint_qdd.zero_()
    RuntimeError: a leaf Variable that requires grad is being used in an in-place operation.
    (disect) [email protected]:~/DiSECt (main) $ 
    

    This seems to be a PyTorch error (e.g., https://discuss.pytorch.org/t/leaf-variable-was-used-in-an-inplace-operation/308) but are we supposed to have other information stored or loaded?
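
    (For context, this PyTorch error is easy to reproduce in isolation; the snippet below is a generic illustration of the message and a common workaround, not DiSECt's state handling.)

    # Generic reproduction of the error, unrelated to DiSECt itself.
    import torch

    q = torch.zeros(3, requires_grad=True)
    try:
        q.zero_()  # raises: a leaf Variable that requires grad is being used in an in-place operation
    except RuntimeError as e:
        print(e)

    with torch.no_grad():  # common workaround: do the in-place update outside autograd
        q.zero_()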

    opened by DanielTakeshi 1
  • minor installation clarifications / tweaks

    Hi @eric-heiden

    I added a minor installation change. I was running a Python 3.7 conda env on Ubuntu 18 and ran your basic example after the installation steps (including pip install -r requirements.txt):

    (disect) [email protected]:~/DiSECt (main) $ python examples/basic_cutting.py 
    Rebuilding kernels
    Traceback (most recent call last):
      File "examples/basic_cutting.py", line 20, in <module>
        from cutting import load_settings, SlicingMotion, CuttingSim
      File "/home/seita/DiSECt/cutting/__init__.py", line 10, in <module>
        from .cutting_sim import *
      File "/home/seita/DiSECt/cutting/cutting_sim.py", line 25, in <module>
        from cutting.urdf_loader import load_urdf
      File "/home/seita/DiSECt/cutting/urdf_loader.py", line 12, in <module>
        import dflex as df
      File "/home/seita/DiSECt/dflex/__init__.py", line 15, in <module>
        kernel_init()
      File "/home/seita/DiSECt/dflex/sim.py", line 47, in kernel_init
        kernels = df.compile()
      File "/home/seita/DiSECt/dflex/adjoint.py", line 1934, in compile
        with_pytorch_error_handling=False)
      File "/home/seita/miniconda3/envs/disect/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1285, in load_inline
        keep_intermediates=keep_intermediates)
      File "/home/seita/miniconda3/envs/disect/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1347, in _jit_compile
        is_standalone=is_standalone)
      File "/home/seita/miniconda3/envs/disect/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1418, in _write_ninja_file_and_build_library
        verify_ninja_availability()
      File "/home/seita/miniconda3/envs/disect/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1474, in verify_ninja_availability
        raise RuntimeError("Ninja is required to load C++ extensions")
    RuntimeError: Ninja is required to load C++ extensions
    (disect) [email protected]:~/DiSECt (main) $
    

    The fix is to do a simple pip install ninja. I've put this in the requirements.txt. (If it would help, I can also write a more detailed overview of how I installed this to make it reproducible in case you don't experience this on your end.)

    I've also added a slight clarification to README.md about where the instructions for installing the USD libraries are; it might not be clear at first glance.

    opened by DanielTakeshi 1
Releases (v1.1)
Owner
NVIDIA Research Projects