Some useful Blender add-ons for driving an SMPL skeleton's poses and global translation.

Overview

Blender add-ons for an SMPL skeleton's poses and trans

There are two Blender add-ons for an SMPL skeleton's poses and trans. The first is for making an offline visual demo; the second is for making a live visual demo.

Offline Motion Capture

(demo image)

You need to follow these steps to use the first add-on.

  1. Open a Blender file and import the FBX file of a model with an SMPL skeleton

I downloaded a model from Mixamo that happened to have an SMPL skeleton, and I renamed each bone so that the code can find the bones and act on them. You can find it in the sources folder.

If you want to use your own model, make sure that its skeleton is an SMPL skeleton and that the bones are named the same as in the code.

Also, you need to rename the model's armature to Armature1.
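
If you prefer to do the rename with a script, here is a minimal sketch (it assumes the FBX importer created an object named "Armature"; the actual name may differ in your file):

    import bpy

    # Rename the imported armature so the add-on can find it as "Armature1".
    # "Armature" is only an assumption about what the FBX importer named it.
    arm = bpy.data.objects.get("Armature")
    if arm is not None:
        arm.name = "Armature1"
        # Quick sanity check that the bones carry the SMPL names the code expects.
        print(sorted(bone.name for bone in arm.data.bones))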

  2. Prepare the pkl file

You need to prepare a pkl file whose data can be read with pickle.load(f)['model'], where f is the opened pkl file.

The data is an array of shape [N, 75], where N is the number of frames. The 75 elements per frame represent SMPL poses and global translation: the first 72 elements are the SMPL pose parameters and the last 3 are the global translation.

You then modify the file path in the code so that it can read your pkl file.
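
For reference, a minimal sketch of loading such a file (the path below is a placeholder):

    import pickle

    with open("/path/to/your_motion.pkl", "rb") as f:  # placeholder path
        data = pickle.load(f)["model"]                 # expected shape [N, 75]

    poses, trans = data[0][:72], data[0][72:]          # first frame: 72 pose values + 3 translation values
    print(len(data), len(data[0]))                     # N frames, 75 values each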

  3. Install the add-on
  4. Make the skeleton and model active, then press Ctrl+R to run the add-on

Inserting keyframes takes a long time, so you will have to wait; it is best not to insert more than 10,000 frames at a time.
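
To illustrate what keyframe insertion involves, here is a minimal sketch of the idea, not the add-on's actual code; it only keyframes the root joint and assumes the armature, bone names, and pkl layout described above:

    import pickle
    import numpy as np
    import bpy
    from mathutils import Quaternion, Vector

    arm = bpy.data.objects["Armature1"]        # armature renamed as described above
    pelvis = arm.pose.bones["Pelvis"]          # SMPL root joint
    pelvis.rotation_mode = 'QUATERNION'

    with open("/path/to/your_motion.pkl", "rb") as f:   # placeholder path
        data = np.asarray(pickle.load(f)["model"])      # shape [N, 75]

    for frame_idx, row in enumerate(data):
        rotvec = row[:3]                                 # axis-angle of the root joint
        angle = float(np.linalg.norm(rotvec))
        axis = rotvec / angle if angle > 1e-8 else np.array([1.0, 0.0, 0.0])
        pelvis.rotation_quaternion = Quaternion(Vector(axis), angle)
        pelvis.keyframe_insert(data_path="rotation_quaternion", frame=frame_idx)

        arm.location = Vector(row[72:75])                # global translation
        arm.keyframe_insert(data_path="location", frame=frame_idx)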

Real-Time Motion Capture

ROMP outputs SMPL poses and trans more than 20 times per second. With ROMP and this plugin, ROMP's output can be fed into Blender's skeleton in real time, so a human performer drives the animated character.

Of course, any model that outputs SMPL poses and trans can use this plugin to drive animated characters in Blender.
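
As a rough sketch of how such a model could stream its per-frame output toward Blender (the host, port, and message format below are assumptions for illustration, not the add-on's actual protocol):

    import json
    import socket

    HOST, PORT = "127.0.0.1", 9999           # assumed address of the Blender-side listener

    def send_frame(sock, poses, trans):
        """Send one frame of 72 SMPL pose values and 3 translation values as length-prefixed JSON."""
        msg = json.dumps({"poses": list(poses), "trans": list(trans)}).encode()
        sock.sendall(len(msg).to_bytes(4, "big") + msg)

    with socket.create_connection((HOST, PORT)) as sock:
        poses = [0.0] * 72                    # replace with your model's per-frame output
        trans = [0.0, 0.0, 0.0]
        send_frame(sock, poses, trans)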

Comments
  • How to run .fbx file to control the character

    Hello. I have successfully run the ROMP demo, which exported an .fbx file, and now I want to use the .fbx to control the character. Can you provide steps for the video demo? I can only see the camera one.

    opened by CheungBH 31
  • [simple-romp] How to use it?

    I tried to use it with simple-romp but it did not work. I already created an issue at the ROMP repository and described my problem here and here in detail.

    The author of the ROMP repository answered there:

    About live blender driving, please refer to this repo. https://github.com/yanch2116/CharacterDriven-BlenderAddon My colleague is responsible for maintaining this function now. Best regards.

    and closed the issue.

    @yanch2116 So how to solve it?

    My goal is to send the positions and quaternions of ROMP (or any other SMPL-based solution) over the VMC protocol to another application (not Blender). Any ideas what's the best way to do it?

    opened by vivi90 26
  • Could I ask for a detailed instruction on how to change the 3D Character?

    Also, I would like to know if there is a way to replace the 3D character before I run the script and process the video, so that I don't have to change it manually every time.

    opened by XXZhe 12
  • Can't connect ROMP into addon

    I use ROMP v1.1 on a Windows machine, but CDBA works with version 1.0 of ROMP. I read the installation and usage documentation for ROMP v1.0 but I couldn't figure it all out. How can I use this add-on properly? I'm not good at programming; I'm an animator and interested in your project. Can you write the steps for how to install and use ROMP with this add-on, please?

    (screenshot)

    opened by hasanleiva 11
  • Hi, looks like changing to another Mixamo character does not work

    I have ROMP driving the SMPL model and it looks OK, but when using CDBA (running the script locally) to drive a Mixamo model, the result looks wrong:

    (screenshot)

    I am using this bone mapper in your repo:

    bones_mixamo_smpl_mapper = {
        "Hips": "Pelvis",
        "LeftUpLeg": "L_Hip",
        "RightUpLeg": "R_Hip",
        "Spine2": "Spine3",
        "Spine1": "Spine2",
        "Spine": "Spine1",
        "LeftLeg": "L_Knee",
        "RightLeg": "R_Knee",
        "LeftFoot": "L_Ankle",
        "RightFoot": "R_Ankle",
        "LeftToeBase": "L_Foot",
        "RightToeBase": "R_Foot",
        "Neck": "Neck",
        "LeftShoulder": "L_Collar",
        "RightShoulder": "R_Collar",
        "Head": "Head",
        "LeftArm": "L_Shoulder",
        "RightArm": "R_Shoulder",
        "LeftForeArm": "L_Elbow",
        "RightForeArm": "R_Elbow",
        "LeftHand": "L_Wrist",
        "RightHand": "R_Wrist",
        "LeftHandIndex1": "L_Hand",
        "LeftHandMiddle1": "L_Hand",
        "RightHandMiddle1": "R_Hand",
        "RightHandIndex1": "R_Hand",
    }
    bones_smpl_mixamo_mapper = {v: k for k, v in bones_mixamo_smpl_mapper.items()}
    bone_name_from_index_character = {
        k: bones_smpl_mixamo_mapper[v] for k, v in bone_name_from_index.items()
    }
    
    

    Do you know why?

    Also, the hands don't look right.

    (screenshot)

    opened by jinfagang 11
  • Detection variable: outputs = {'poses': poses, 'trans': trans[0]}

    Hey, Yanchxx,

    May I ask what the output variable is? I think "trans" is the translation between the camera and the object.

    What are the poses? Are they locations (x, y, z) or rotations (x, y, z, in degrees)?

    Thank you 👍

    opened by zhangby2085 10
  • The bone orientations get messed up when importing the FBX

    @yanch2116 Hello, very cool work! I have basically got the whole project running, but I have two questions I would like to ask you. 1. When importing the fbx, the bone orientations get messed up; see: https://www.bilibili.com/read/cv2520452 . However, the method in that article does not fully solve the orientation problem, so I would like to ask how you solved it. 2. I would also like to ask how to edit a skeleton so that it matches the SMPL skeleton. Any advice would be greatly appreciated. Many thanks!

    opened by syguan96 8
  • Use keyframes to prevent pose shaking

    I found that the avatar shakes drastically when running the webcam demo, so I set mode = 1 to insert keyframes and record the webcam results for playback.

    Now, if we add one more condition for inserting keyframes, inserting only when frame_idx % 3 == 0 or frame_idx % 5 == 0, the avatar moves along these keyframes much more smoothly.

    However, is there a way to make the webcam demo run in real time using the keyframe strategy I described? This keyframe-skipping strategy only seems to work with recorded playback; the character still moves on every frame when we are actually running in real time.
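
    For reference, a minimal sketch of the skip condition described above (a hypothetical helper, not the add-on's actual code; it assumes the armature is named Armature1 and that the caller supplies the current frame index):

        import bpy

        def insert_keyframes_every_n(frame_idx, n=3, armature_name="Armature1"):
            """Record keyframes only on every n-th frame (n = 3 or 5, as suggested above)."""
            if frame_idx % n != 0:
                return
            arm = bpy.data.objects[armature_name]
            for pbone in arm.pose.bones:
                pbone.keyframe_insert(data_path="rotation_quaternion", frame=frame_idx)
            arm.keyframe_insert(data_path="location", frame=frame_idx)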

    opened by ZhengdiYu 4
  • Can I rotate the scene while running scripts?

    Hi, I'm able to run the demo with Blender now, but it turns out that the view is locked while the script is running.

    Is there a way to enable rotation?

    opened by anzisheng 4
  • Which version of Blender are you using?

    I met the following error while running Beta.blend. I'm using Blender 2.83.9. What's the expected version?

    Read blend: E:\Workspace\blender_test\addons\CharacterDriven-BlenderAddon-master\blender\Beta.blend
    0 meshes freed
    Error: File written by newer Blender binary (290.0), expect loss of data!

    opened by sylyt62 4
  • multiple people

    Hey, yanch2116, very impressive job visualizing 3D characters. I tested it and it works. One question: romp_server.py has the setting.show_largest=True. I am wondering about the case with multiple people in the webcam view; when there are many people, the Blender character keeps shifting. Are there ways to solve this? 1. Keep the object tracker on a single object_ID. 2. Visualize multiple 3D characters as input from the camera.

    opened by zhangby2085 3