Learning Time-Critical Responses for Interactive Character Control


Abstract

This code implements the paper Learning Time-Critical Responses for Interactive Character Control. The system uses a teacher-student framework to learn time-critically responsive policies, which guarantee the time-to-completion between a user input and its associated response regardless of the size and composition of the motion database. The code is written in Java and Python, based on TensorFlow.
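
As a rough illustration of the teacher-student idea (a minimal sketch with hypothetical names, not the paper's actual networks or API), the snippet below distills a stand-in teacher policy into a linear student by supervised regression; the paper's student is a neural network trained in the same spirit:

    import numpy as np

    # Hypothetical stand-in for a trained teacher policy: maps a state to an action.
    def teacher_policy(state):
        return -0.5 * state

    # 1) Query the teacher on sampled states to build a supervised dataset.
    states = np.random.randn(1024, 4)
    actions = np.stack([teacher_policy(s) for s in states])

    # 2) Regress a linear student onto the teacher's actions; least squares
    #    keeps the sketch minimal where the paper uses a neural network.
    W, *_ = np.linalg.lstsq(states, actions, rcond=None)
    print("max student error:", np.abs(states @ W - actions).max())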

Publications

Kyungho Lee, Sehee Min, Sunmin Lee, and Jehee Lee. 2021. Learning Time-Critical Responses for Interactive Character Control. ACM Trans. Graph. 40, 4, 147. (SIGGRAPH 2021)

Project page: http://mrl.snu.ac.kr/research/ProjectAgile/Agile.html

Paper: http://mrl.snu.ac.kr/research/ProjectAgile/AGILE_2021_SIGGRAPH_author.pdf

Youtube: https://www.youtube.com/watch?v=rQKuvxg5ZHc

How to install

This code is implemented in Java and Python, and was developed using Eclipse on Windows. A 64-bit Windows environment is required to run the code.

Requirements

Install JDK 1.8

Java SE Development Kit 8 Downloads

Install Eclipse

Install Eclipse IDE for Java Developers

Install Python 3.6

https://www.python.org/downloads/release/python-368/

Install pydev to Eclipse

https://www.pydev.org/download.html

Install CUDA 10.0 and a matching cuDNN version

CUDA Toolkit 10.0 Archive

NVIDIA cuDNN

Install Visual C++ Redistributable for VS2012

Laplacian Motion Editing (PmQmJNI.dll) is implemented in C++, and the Visual C++ 2012 runtime is required to run it.

Visual C++ Redistributable for Visual Studio 2012 Update 4

Install JEP(Java Embedded Python)

Java Embedded Python

This library requires part of a Visual Studio installation. I don't know exactly which components are needed, but likely candidates are the .NET Framework 3.5 and the VC++ 2015.3 v14.00 (v140) toolset. Installing Visual Studio 2017 or later may help.

Install TensorFlow 1.14.0

pip install tensorflow-gpu==1.14.0
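
To verify the installation (a quick sanity check, not part of the original instructions), the following should print the version and report GPU availability, assuming CUDA 10.0 and cuDNN are set up:

    import tensorflow as tf

    print(tf.__version__)              # expect 1.14.0
    print(tf.test.is_gpu_available())  # True if the GPU is visible to TensorFlow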

Install this repository

We recommend downloading it through Git in the Eclipse environment.

  1. Open the Git perspective in Eclipse
  2. Paste the repository URL and clone the repository ( 'https://git.ncsoft.net/scm/private_khlee/private-khlee-test.git' )
  3. Select all projects in the Working Tree
  4. Right-click, select Import Projects, and import the existing Eclipse projects.

Or you can just download the repository as a Zip file, extract it, and import it using File->Import->General->Existing Projects into Workspace in Eclipse.
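
Alternatively, assuming Git is installed, you can clone from the command line and then import the projects as above:

git clone https://git.ncsoft.net/scm/private_khlee/private-khlee-test.git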

Install third party library

This code uses Interactive Character Animation by Learning Multi-Objective Control to train the student policy.

Download the required third-party library files (ThirdPartyDlls.zip) and extract them into the mrl.motion.critical folder.

Dataset

The full dataset used in the paper cannot be published due to copyright issues. This repository contains only a minimal motion dataset for algorithm validation. The SNU Motion Database was used for the martial arts movements, and the CMU Motion Database for locomotion.

How to run

Eclipse

All of the instructions below are assumed to be executed in Eclipse. The executable Java files are grouped in the package mrl.motion.critical.run of the project mrl.motion.critical.

  • You can open a source file directly with Ctrl+Shift+R.
  • You can run the currently open source file with Ctrl+F11.
  • You can configure program arguments in the Run->Run Configurations menu.

Pre-trained student policy

You can see the pre-trained network by running RuntimeMartialArtsControlModule.java. The pre-trained network file is located at mrl.python.neural\train\martial_arts_sp_da.

  • 1, 2 : walk, run
  • 3, 4, 5, 6 : martial arts actions
  • q, w, e, r, t : control the critical response time

How to train

  1. Data Annotation & Configuration
    • You can check the motion data list and annotation information by executing MAnnotationRun.java.
  2. Model Configuration
    • The action list, the critical response time of each action, the user input model, and the error metric are defined in MartialArtsConfig.java
  3. Preprocessing
    • You can precompute the data table for pruning by executing DP_Preprocessing.java
    • The data file will be located at mrl.motion.critical\output\dp_cache
  4. Training teacher policy
    • You can train the teacher policy by executing LearningTeacherPolicy.java
    • The result will be located at mrl.motion.critical\train_rl
  5. Training data for student policy
    • You can generate training data for the student policy by executing StudentPolicyDataGeneration.java
    • The result will be located at mrl.python.neural\train
  6. Training student policy
    • You can train the student policy by executing mrl.python.neural\train_rl.py
    • You need to set the program arguments in the Run->Run Configurations menu (see the command-line sketch after this list).
      • Argument format example: martial_arts_sp new 0.0001
  7. Running student policy
    • You can see the trained student policy by running RuntimeMartialArtsControlModule.java.
    • This class will load the student policy located at mrl.python.neural\train.
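
If you run the training script outside Eclipse, the equivalent command line (assuming it is launched from the mrl.python.neural directory; the argument semantics follow the example in step 6) would be:

python train_rl.py martial_arts_sp new 0.0001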