HSC4D: Human-centered 4D Scene Capture in Large-scale Indoor-outdoor Space Using Wearable IMUs and LiDAR. CVPR 2022

Overview

[Project page | Video]

Getting started

Dataset (Click here to download)

The large indoor and outdoor scenes in our dataset. Left: a climbing gym (1200 m²). Middle: a lab building with an outside courtyard (4000 m²). Right: a loop road scene (4600 m²).

Data structure

Dataset root/
├── [Place_holder]/
|  ├── [Place_holder].bvh     # MoCap data from Noitom Axis Studio (PNStudio)
|  ├── [Place_holder]_pos.csv # Every joint's translation, generated from `*.bvh`
|  ├── [Place_holder]_rot.csv # Every joint's rotation, generated from `*.bvh`
|  ├── [Place_holder].pcap    # Raw data from the LiDAR
|  └── [Place_holder]_lidar_trajectory.txt  # N×9 format file
├── ...
|
└── scenes/
   ├── [Place_holder].pcd
   ├── [Place_holder]_ground.pcd
   ├── ...
   └── ...
  1. Place_holder can be replaced with campus_road, climbing_gym, or lab_building.
  2. *_lidar_trajectory.txt is generated by our mapping method and manually calibrated with the corresponding scene.
  3. *.bvh and *.pcap files are raw sensor data and are not used in the following steps.
  4. You can test your own SLAM algorithm on the *.pcap files, which were captured with an Ouster OS1-64 LiDAR at 1024×20 Hz.
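
As a quick sanity check, the trajectory and scene clouds can be loaded with numpy and open3d (both installed in the steps below). This is a minimal sketch; it assumes the *_lidar_trajectory.txt file is plain text with 9 whitespace-separated values per frame, as described above:

import numpy as np
import open3d as o3d

# Load the N x 9 LiDAR trajectory (one row per frame).
# Use delimiter=',' instead if the file turns out to be comma-separated.
traj = np.loadtxt('/your/path/to/datasets/campus_road/campus_road_lidar_trajectory.txt')
print(traj.shape)  # (N, 9)

# Load the corresponding scene cloud and its ground segment.
scene = o3d.io.read_point_cloud('/your/path/to/datasets/scenes/campus_road.pcd')
ground = o3d.io.read_point_cloud('/your/path/to/datasets/scenes/campus_road_ground.pcd')
print(scene, ground)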

Preparation

  • Download basicModel_neutral_lbs_10_207_0_v1.0.0.pkl and put it in the smpl directory.
  • Download the dataset and modify dataset_root and data_name in configs/sample.cfg:
dataset_root = /your/path/to/datasets
data_name = campus_road # or lab_building, climbing_gym
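
configargparse (installed in the steps below) reads key = value files like this one. The following sketch shows how the two fields could resolve to dataset paths; the option names come from the config above and the path layout from the Data structure section, while the --config flag is illustrative only:

import os
import configargparse

# Minimal config reader; option names mirror configs/sample.cfg.
parser = configargparse.ArgumentParser(default_config_files=['configs/sample.cfg'])
parser.add_argument('--config', is_config_file=True, help='path to an alternative config file')
parser.add_argument('--dataset_root', type=str)
parser.add_argument('--data_name', type=str)
args, _ = parser.parse_known_args()

# e.g. /your/path/to/datasets/campus_road/campus_road_lidar_trajectory.txt
traj_file = os.path.join(args.dataset_root, args.data_name,
                         args.data_name + '_lidar_trajectory.txt')
scene_file = os.path.join(args.dataset_root, 'scenes', args.data_name + '.pcd')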

Requirements

Our code is tested under:

  • Ubuntu: 18.04
  • Python: 3.8
  • CUDA: 11.0
  • PyTorch: 1.7.0

Installation

conda create -n hsc4d python=3.8
conda install pytorch==1.7.0 torchvision==0.8.0 torchaudio==0.7.0 cudatoolkit=11.0 -c pytorch
pip install open3d chumpy scipy configargparse matplotlib pathlib pandas opencv-python torchgeometry tensorboardx
  • Note: torchgeometry's quaternion conversion subtracts boolean masks (1 - mask), which PyTorch 1.7.0 no longer supports. Manually edit its source file and replace the mask lines with the ~ versions shown below:
  $ vi /your/path/to/anaconda3/envs/hsc4d/lib/python3.8/site-packages/torchgeometry/core/conversions.py

  # mask_c1 = mask_d2 * (1 - mask_d0_d1)
  # mask_c2 = (1 - mask_d2) * mask_d0_nd1
  # mask_c3 = (1 - mask_d2) * (1 - mask_d0_nd1)
  mask_c1 = mask_d2 * ~(mask_d0_d1)
  mask_c2 = ~(mask_d2) * mask_d0_nd1
  mask_c3 = ~(mask_d2) * ~(mask_d0_nd1)
  • Note: If nvcc fails with "nvcc fatal: Unsupported gpu architecture", set the target architecture explicitly:
export TORCH_CUDA_ARCH_LIST="8.0"  # set to your GPU's compute capability, e.g. 8.0 for Ampere
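
If you are unsure which value to use, the compute capability of your GPU can be queried with PyTorch (a small helper sketch, not part of the original instructions):

import torch

# Prints e.g. TORCH_CUDA_ARCH_LIST="8.0" for an Ampere GPU (compute capability 8.0).
major, minor = torch.cuda.get_device_capability()
print(f'TORCH_CUDA_ARCH_LIST="{major}.{minor}"')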

Preprocess

  • Transfer Mocap data [Optional, data provided]

    pip install bvhtoolbox # https://github.com/OlafHaag/bvh-toolbox
    bvh2csv /your/path/to/campus_road.bvh
    • Output: campus_road_pos.csv, campus_road_rot.csv
  • LiDAR mapping [Optional, data provided]

    • Process pcap file
      cd initialize
      pip install ouster-sdk 
      python ouster_pcap_to_txt.py -P /your/path/to/campus_road.pcap [-S start_frame] [-E end_frame]
    • Run your Mapping/SLAM algorithm.

    • Coordinate alignment (about 5° of error remains after this step; see the sketch at the end of this section)

      1. The subject stands in an A-pose before capture, and the direction the subject faces is taken as the scene's $Y$-axis.
      2. Rotate the scene cloud so that its $Z$-axis is perpendicular to the ground at the starting position.
      3. Translate the scene so that its origin coincides with the first SMPL model's origin on the ground.
      4. The LiDAR's ego motion $T^W$ and $R^W$ is translated and rotated in the same way as the scene.
    • Output: campus_road_lidar_trajectory.txt, scenes/campus_road.pcd

  • Data preprocessing for optimization.

    python preprocess.py --dataset_root /your/path/to/datasets -fn campus_road -D 0.1
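
For reference, the coordinate alignment above amounts to applying one rigid transform to the scene cloud and the same rotation and translation to the LiDAR ego motion. A minimal sketch, where the 4×4 matrix T_align is a hypothetical placeholder for the transform estimated in steps 1-3 (it is not shipped with the dataset):

import numpy as np
import open3d as o3d

# Hypothetical alignment transform from steps 1-3 (identity as a placeholder).
T_align = np.eye(4)
R_align, t_align = T_align[:3, :3], T_align[:3, 3]

# Rotate and translate the scene cloud.
scene = o3d.io.read_point_cloud('scenes/campus_road.pcd')
scene.transform(T_align)

# Apply the same rotation and translation to the LiDAR positions.
# The column layout of the N x 9 trajectory file is an assumption here
# (columns 1-3 as x, y, z); the per-frame orientations would additionally
# have to be composed with R_align.
traj = np.loadtxt('campus_road/campus_road_lidar_trajectory.txt')
traj[:, 1:4] = traj[:, 1:4] @ R_align.T + t_align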

Data fusion

To be added

Data optimization

python main.py --config configs/sample.cfg

Visualization

To be added

Copyright

The HSC4D dataset is published under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License. You must attribute the work in the manner specified by the authors; you may not use this work for commercial purposes; and if you alter, transform, or build upon this work, you may distribute the resulting work only under the same license. Contact us if you are interested in commercial usage.

Bibtex

@misc{dai2022hsc4d,
    title={HSC4D: Human-centered 4D Scene Capture in Large-scale Indoor-outdoor Space Using Wearable IMUs and LiDAR},
    author={Yudi Dai and Yitai Lin and Chenglu Wen and Siqi Shen and Lan Xu and Jingyi Yu and Yuexin Ma and Cheng Wang},
    year={2022},
    eprint={2203.09215},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}