SCALE: Modeling Clothed Humans with a Surface Codec of Articulated Local Elements (CVPR 2021)


This repository contains the official PyTorch implementation of the CVPR 2021 paper:

SCALE: Modeling Clothed Humans with a Surface Codec of Articulated Local Elements
Qianli Ma, Shunsuke Saito, Jinlong Yang, Siyu Tang, and Michael J. Black
Full paper | Video | Project website | Poster

Installation

  • The code has been tested on Ubuntu 18.04, Python 3.6, and CUDA 10.0.

  • First, in the folder of this SCALE repository, run the following commands to create a new virtual environment and install dependencies:

    python3 -m venv $HOME/.virtualenvs/SCALE
    source $HOME/.virtualenvs/SCALE/bin/activate
    pip install -U pip setuptools
    pip install -r requirements.txt
    mkdir checkpoints
  • Install the Chamfer Distance package (MIT license, taken from this implementation). Note: compilation has been verified under CUDA 10.0, but may not be compatible with later CUDA versions.

    cd chamferdist
    python setup.py install
    cd ..
  • You are now good to go with the next steps! All the commands below are assumed to be run from the SCALE repository folder, within the virtual environment created above. A quick environment sanity check is sketched right below.
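
As a quick sanity check (a minimal sketch, not part of the official codebase), you can confirm from within the virtual environment that PyTorch sees your GPU and that the compiled Chamfer Distance extension imports. The exact module layout of chamferdist can vary between versions, so only the import is verified here:

    import torch
    import chamferdist  # raises ImportError if the compilation above failed

    # Report the installed PyTorch version and whether CUDA is usable.
    print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
    # Show where the compiled Chamfer Distance package was installed.
    print("chamferdist imported from", chamferdist.__file__)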

Run SCALE

  • Download our pre-trained model weights and unzip them under the checkpoints folder, so that the checkpoint files are located at checkpoints/SCALE_demo_00000_simuskirt/ inside the repository.

  • Download the packed demo data and unzip it under the data/ folder, so that the packed frames are located under data/packed/00000_simuskirt/ inside the repository.

  • With the data and pre-trained model ready, the following command will generate a sequence of .ply files for the teaser dancing animation in results/saved_samples/SCALE_demo_00000_simuskirt (a quick way to inspect these files in Python is sketched after this list):

    python main.py --config configs/config_demo.yaml
  • To render images of the generated point sets, run the following command:

    python render/o3d_render_pcl.py --model_name SCALE_demo_00000_simuskirt

    The rendered images (with both point-normal coloring and patch coloring) will be saved under results/rendered_imgs/SCALE_demo_00000_simuskirt.
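
To inspect the generated point sets directly in Python (as mentioned above), the following is a minimal sketch that loads one of the saved .ply files with Open3D, the library the bundled renderer (render/o3d_render_pcl.py) relies on. The folder layout is assumed to match the output path above; none of this is part of the official code:

    import glob

    import numpy as np
    import open3d as o3d

    # Collect the point sets written by the demo run (searching subfolders too,
    # in case the outputs are grouped per frame or per sequence).
    ply_files = sorted(glob.glob(
        "results/saved_samples/SCALE_demo_00000_simuskirt/**/*.ply", recursive=True))
    print(f"found {len(ply_files)} generated point sets")

    # Load the first frame and print basic statistics.
    pcd = o3d.io.read_point_cloud(ply_files[0])
    points = np.asarray(pcd.points)
    print(f"first frame: {points.shape[0]} points, "
          f"bounding box {points.min(axis=0)} to {points.max(axis=0)}")

    # Interactive viewer; requires a display, skip on a headless machine.
    o3d.visualization.draw_geometries([pcd])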

Train SCALE

Training demo with our data examples

  • Assuming the demo training data from the previous step is in place under data/packed/, run:

    python main.py --config configs/config_train_demo.yaml

    The training will start!

  • The code also saves the loss curves as TensorBoard logs under tb_logs/ (look for the run containing SCALE_train_demo_00000_simuskirt in its path); see the sketch after this list for reading them programmatically.

  • Examples from the validation set are saved every 10 epochs (the interval is configurable) at results/saved_samples/SCALE_train_demo_00000_simuskirt/val.

  • Note: the training data provided above are only for demonstration purposes. Due to the very limited number of frames, they are unlikely to yield a satisfying model. Please refer to the README files in the data/ and lib_data/ folders for more information on how to process your own custom data.
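
To read the logged loss curves programmatically instead of launching the TensorBoard UI (see the note on tb_logs/ above), the sketch below uses TensorBoard's EventAccumulator. The run-folder layout under tb_logs/ depends on how the training script names its runs, so the directory is located by searching for the experiment name; this snippet is not part of the official codebase:

    import glob
    import os

    from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

    # Locate the run directory for this experiment; adjust the pattern if your
    # tb_logs/ layout differs.
    candidates = glob.glob(
        "tb_logs/**/*SCALE_train_demo_00000_simuskirt*", recursive=True)
    run_dir = next(path for path in candidates if os.path.isdir(path))

    # Load the event files and list the scalar curves that were logged.
    acc = EventAccumulator(run_dir)
    acc.Reload()
    print("available scalar tags:", acc.Tags()["scalars"])
    for tag in acc.Tags()["scalars"]:
        events = acc.Scalars(tag)
        print(f"{tag}: last value {events[-1].value:.4f} at step {events[-1].step}")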

Training with your own data

We provide example code in lib_data/ to help you adapt your own data to the format required by SCALE. Please refer to lib_data/README for more details.
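
Before writing your own packing script, it can also help to inspect how the provided demo frames are structured. The snippet below is only an illustrative sketch: it assumes the packed frames are stored as .npz archives under data/packed/ (adjust the pattern otherwise), prints whatever keys the files actually contain, and is not part of lib_data/:

    import glob

    import numpy as np

    # Find the packed demo frames downloaded earlier (searching subfolders such
    # as train/test splits, if present).
    frame_files = sorted(glob.glob(
        "data/packed/00000_simuskirt/**/*.npz", recursive=True))
    print(f"found {len(frame_files)} packed frames")

    # Print the fields stored in the first frame to see what your own packing
    # script needs to reproduce.
    with np.load(frame_files[0]) as frame:
        for key in frame.files:
            print(f"{key}: shape {frame[key].shape}, dtype {frame[key].dtype}")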

License

Software Copyright License for non-commercial scientific research purposes. Please read carefully the terms and conditions and any accompanying documentation before you download and/or use the SCALE code, including the scripts, animation demos and pre-trained models. By downloading and/or using the Model & Software (including downloading, cloning, installing, and any other use of this GitHub repository), you acknowledge that you have read these terms and conditions, understand them, and agree to be bound by them. If you do not agree with these terms and conditions, you must not download and/or use the Model & Software. Any infringement of the terms of this agreement will automatically terminate your rights under this License.

The SMPL body related files (including assets/{smpl_faces.npy, template_mesh_uv.obj} and the UV masks under assets/uv_masks/) are subject to the license of the SMPL model. The provided demo data (including the body pose and the meshes of clothed human bodies) are subject to the license of the CAPE Dataset. The Chamfer Distance implementation is subject to its original license.

Citations

@inproceedings{Ma:CVPR:2021,
  title = {{SCALE}: Modeling Clothed Humans with a Surface Codec of Articulated Local Elements},
  author = {Ma, Qianli and Saito, Shunsuke and Yang, Jinlong and Tang, Siyu and Black, Michael J.},
  booktitle = {Proceedings IEEE/CVF Conf.~on Computer Vision and Pattern Recognition (CVPR)},
  month = jun,
  year = {2021},
  month_numeric = {6}
}