Free like Freedom

Overview

This is all very much a work in progress! More to come!

(We're working on it though! Stay tuned!)

Installation

  • Open an Anaconda Prompt (on Windows; any terminal on Mac/Linux) and enter the following commands:

conda create -n freemocap-env python=3.7

conda activate freemocap-env

pip install freemocap -v

ipython

import freemocap as fmc
fmc.RunMe() #this is where the magic happens.
[Demo video: 2021-06-12_FreeMoCap_Clips_16MB.mp4]

Prerequisites

Required

  • A Python 3.7 environment: We recommend installing Anaconda from here (https://www.anaconda.com/products/individual#Downloads) to create your Python environment.

  • Two or more USB webcams attached to viable USB ports

    • (USB hubs typically don't work)
  • Each recording must (for now) start with an unobstructed view of a Charuco board generated with the following Python commands (or equivalent):

     import cv2
     
     aruco_dict = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_250) #note `cv2.aruco` can be installed via `pip install opencv-contrib-python`
     
     board = cv2.aruco.CharucoBoard_create(7, 5, 1, .8, aruco_dict)
     
     charuco_board_image = board.draw((2000,2000)) #`2000` is the resolution of the resulting image. Increase this number if printing a large board (bigger is better, especially for large spaces!)
     
     cv2.imwrite('charuco_board_image.png',charuco_board_image)
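
    Note: OpenCV 4.7 moved the aruco API, so the snippet above fails with attribute errors on newer installs. A rough equivalent under the newer API (a sketch; the board dimensions and output size are copied from above):

     import cv2 # still requires opencv-contrib-python

     aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_250)

     board = cv2.aruco.CharucoBoard((7, 5), 1, .8, aruco_dict) # (squares_x, squares_y), square length, marker length

     charuco_board_image = board.generateImage((2000, 2000)) # same resolution note as above applies

     cv2.imwrite('charuco_board_image.png', charuco_board_image)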
     
    

Optional: If you would like to use OpenPose for body tracking, install CUDA and the Windows Portable Demo of OpenPose.

Follow the GitHub Repository and/or Join the Discord (https://discord.gg/HX7MTprYsK) for updates!

Stay Tuned for more soon!

Comments
  • Ubuntu Support?

    The following diffs were needed to make it work under Linux:

    +++ b/freemocap/webcam/camsetup.py
    @@ -21,7 +21,7 @@ class VideoSetup(threading.Thread):
             camName = "Camera" + str(self.camID)
     
             cv2.namedWindow(camName)
    -        cap = cv2.VideoCapture(self.camID, cv2.CAP_DSHOW)
    +        cap = cv2.VideoCapture(self.camID, cv2.CAP_ANY)
             cap.set(cv2.CAP_PROP_FRAME_WIDTH, resWidth)
             cap.set(cv2.CAP_PROP_FRAME_HEIGHT, resHeight)
             cap.set(cv2.CAP_PROP_EXPOSURE, exposure)
    diff --git a/freemocap/webcam/checkcams.py b/freemocap/webcam/checkcams.py
    index dda1348..fb4a6a0 100644
    --- a/freemocap/webcam/checkcams.py
    +++ b/freemocap/webcam/checkcams.py
    @@ -3,7 +3,7 @@ import cv2
     
     
     def TestDevice(source):
    -    cap = cv2.VideoCapture(source, cv2.CAP_DSHOW)
    +    cap = cv2.VideoCapture(source, cv2.CAP_ANY)
         # if cap is None or not cap.isOpened():
         # print('Warning: unable to open video source: ', source)
     
    diff --git a/freemocap/webcam/startcamrecording.py b/freemocap/webcam/startcamrecording.py
    index a0cdec1..011d853 100644
    --- a/freemocap/webcam/startcamrecording.py
    +++ b/freemocap/webcam/startcamrecording.py
    @@ -44,7 +44,7 @@ def CamRecording(
         flag = False
     
         cv2.namedWindow(camID)  # name the preview window for the camera its showing
    -    cam = cv2.VideoCapture(camInput, cv2.CAP_DSHOW)  # create the video capture object
    +    cam = cv2.VideoCapture(camInput, cv2.CAP_ANY)  # create the video capture object
         # if not cam.isOpened():
         #         raise RuntimeError('No camera found at input '+ str(camID))
         # pulling out all the dictionary paramters
    
    

    But while everything works with one camera at a time, it seems to block with two identical cameras.

    The error manifests itself outside of your code.

    I get:

    [  806.191512] uvcvideo: Failed to query (SET_CUR) UVC control 4 on unit 1: -32 (exp. 4).
    

    Here is lsusb:

    user@host:~/Desktop$ lsusb 
    Bus 001 Device 005: ID 046d:09a4 Logitech, Inc. QuickCam E 3500
    Bus 001 Device 004: ID 046d:09a4 Logitech, Inc. QuickCam E 3500
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 004 Device 002: ID 046d:c542 Logitech, Inc. Wireless Receiver
    Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 002 Device 004: ID 060b:7a16 Solid Year MD800
    Bus 002 Device 003: ID 0627:0001 Adomax Technology Co., Ltd QEMU USB Hub
    Bus 002 Device 002: ID 0409:55aa NEC Corp. Hub
    Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    

    Do the cameras each need to be on a separate bus?

    enhancement question 
    opened by kognat-docs 27
  • Multi Camera Calibration Failure

    My team and I are attempting to use FreeMoCap for a system that records eight views in synchrony. We are currently having issues calibrating all eight cameras, which face opposing directions in our hallway setup, making it difficult for them to see one Charuco board at the same time. To resolve this, we printed a double-sided Charuco board, allowing us to create a 3D skeleton output; however, the skeleton seems to be both inverted and translated compared to the subject from certain camera angles. We then tried to calibrate our system using the hallway cameras on one side, which all face each other and can observe a Charuco board simultaneously. Still, we saw the output skeleton as translated and inverted, ruling out the double-sided board as the reason for the translation. Any suggestions on how to calibrate a system set up like this, or why the skeletons seem to be inverted/displaced?

    Attached is a link with the videos and data array/calibration files for the 8 camera set-up. https://drive.google.com/drive/folders/1wv4RHzPFDLeIXr9xONC72fhbW3MKQT_q?usp=sharing

    opened by DestroytheCity 18
  • Using Blender from Wrong Session

    I started a new FMC session, yet when running Stage 5, FMC keeps using the .blend from my original project instead of the new one I made for it in the right session folder. It then fails, saying that the file doesn't exist. Any thoughts? (Login replaced with X.)

    Running sesh 2 from /home/X/bac_mj_ar/FreeMocap_Data
    Using blender executable located at: /home/X/bac_mj_ar/FreeMocap_Data/sesh/Session_ID.blend
    Skipping Video Recording
    Skipping Video Syncing
    Skipping Calibration
    Skipping 2d point tracking
    EXPORTING FILES...
    Hijacking Blender's file format converters to export FreeMoCap data as vari…
    Traceback (most recent call last):
      File "/home/X/.conda/envs/freemocap-env/lib/python3.7/site-packages/freemocap/fmc_runme.py", line 335, in RunMe
        stderr=subprocess.PIPE
      File "/home/X/.conda/envs/freemocap-env/lib/python3.7/subprocess.py", line 800, in __init__
        restore_signals, start_new_session)
      File "/home/X/.conda/envs/freemocap-env/lib/python3.7/subprocess.py", line 1551, in _execute_child
        raise child_exception_type(errno_num, err_msg, err_filename)
    FileNotFoundError: [Errno 2] No such file or directory: '/home/X/bac_mj_ar/FreeMocap_Data/sesh/Session_ID.blend --background --python /home/X/.conda/envs/freemocap-env/lib/python3.7/site-packages/freemocap/freemocap_blender_megascript.py -- /home/X/bac_mj_ar/FreeMocap_Data/sesh 2 0'
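
    The FileNotFoundError quotes the entire Blender command line as one missing file, which is what subprocess.Popen does when it receives a single string with shell=False; a minimal sketch of the distinction (the command and paths are illustrative, not freemocap's actual call):

    import subprocess

    # Passed as ONE string with shell=False, the whole line is treated as the
    # executable's path (the failure mode above):
    # subprocess.Popen("blender --background --python megascript.py", shell=False)

    # Passed as an argument list, each token is handled separately:
    proc = subprocess.Popen(
        ["blender", "--background", "--python", "megascript.py"],
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )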

    opened by DestroytheCity 13
  • macOS BigSur Support

    Seems like the threading is the cause of the issue here.

    (freemocap) MacBook-Pro:freemocap sam$ python runme_FreeMoCap.py 
    Starting initialization for stage 1
      0%|                                                    | 0/20 [00:00<?, ?it/s]Oct  2 15:40:31  ThetaUVC_blender[33664] <Notice>: ------------ ThetaUVC_blender plugin start (version:2.0.1 built:Fri Aug 19 15:54:46 JST 2016 pid=33664 RELEASE). ------------ #thetauvc
      5%|██▏                                         | 1/20 [00:01<00:37,  1.98s/it]OpenCV: out device of bound (0-0): 1
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 2
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 3
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 4
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 5
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 6
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 7
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 8
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 9
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 10
    OpenCV: camera failed to properly initialize!
     55%|███████████████████████▋                   | 11/20 [00:02<00:01,  7.18it/s]OpenCV: out device of bound (0-0): 11
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 12
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 13
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 14
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 15
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 16
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 17
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 18
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 19
    OpenCV: camera failed to properly initialize!
    100%|███████████████████████████████████████████| 20/20 [00:02<00:00,  9.17it/s]
    2021-10-02 15:40:41.571 python[33664:261625353] WARNING: NSWindow drag regions should only be invalidated on the Main Thread! This will throw an exception in the future. Called from (
    	0   AppKit                              0x00007fff22b6ded1 -[NSWindow(NSWindow_Theme) _postWindowNeedsToResetDragMarginsUnlessPostingDisabled] + 352
    	1   AppKit                              0x00007fff22b58aa2 -[NSWindow _initContent:styleMask:backing:defer:contentView:] + 1296
    	2   AppKit                              0x00007fff22b5858b -[NSWindow initWithContentRect:styleMask:backing:defer:] + 42
    	3   AppKit                              0x00007fff22e6283c -[NSWindow initWithContentRect:styleMask:backing:defer:screen:] + 52
    	4   cv2.cpython-37m-darwin.so           0x000000010ead0c94 cvNamedWindow + 564
    	5   cv2.cpython-37m-darwin.so           0x000000010eacdd1a _ZN2cv11namedWindowERKNS_6StringEi + 58
    	6   cv2.cpython-37m-darwin.so           0x000000010dc4a487 _ZL23pyopencv_cv_namedWindowP7_objectS0_S0_ + 231
    	7   python                              0x0000000104cd3f2b _PyMethodDef_RawFastCallKeywords + 395
    	8   python                              0x0000000104e0b9bb call_function + 251
    	9   python                              0x0000000104e034eb _PyEval_EvalFrameDefault + 20171
    	10  python                              0x0000000104cd3b15 _PyFunction_FastCallKeywords + 229
    	11  python                              0x0000000104e0b977 call_function + 183
    	12  python                              0x0000000104e03462 _PyEval_EvalFrameDefault + 20034
    	13  python                              0x0000000104cd3b15 _PyFunction_FastCallKeywords + 229
    	14  python                              0x0000000104e0b977 call_function + 183
    	15  python                              0x0000000104e03462 _PyEval_EvalFrameDefault + 20034
    	16  python                              0x0000000104cd3b15 _PyFunction_FastCallKeywords + 229
    	17  python                              0x0000000104e0b977 call_function + 183
    	18  python                              0x0000000104e03462 _PyEval_EvalFrameDefault + 20034
    	19  python                              0x0000000104cd1fea _PyFunction_FastCallDict + 234
    	20  python                              0x0000000104cd66ba method_call + 122
    	21  python                              0x0000000104cd445f PyObject_Call + 127
    	22  python                              0x0000000104ef7e3a t_bootstrap + 122
    	23  python                              0x0000000104e7c764 pythread_wrapper + 36
    	24  libsystem_pthread.dylib             0x00007fff2031a8fc _pthread_start + 224
    	25  libsystem_pthread.dylib             0x00007fff20316443 thread_start + 15
    )
    __________________________________________
    cv2::videocapture properties for Camera# 0
    CV_CAP_PROP_FRAME_WIDTH: '1280.0'
    CV_CAP_PROP_FRAME_HEIGHT : '720.0'
    CAP_PROP_FPS : '30.0'
    CAP_PROP_EXPOSURE : '0.0'
    CAP_PROP_POS_MSEC : '0.0'
    CAP_PROP_FRAME_COUNT  : '0.0'
    CAP_PROP_BRIGHTNESS : '0.0'
    CAP_PROP_CONTRAST : '0.0'
    CAP_PROP_SATURATION : '0.0'
    CAP_PROP_HUE : '0.0'
    CAP_PROP_GAIN  : '0.0'
    CAP_PROP_CONVERT_RGB : '0.0'
    __________________________________________
    2021-10-02 15:40:42.898 python[33664:261625353] WARNING: nextEventMatchingMask should only be called from the Main Thread! This will throw an exception in the future.
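
    The NSWindow warnings above point at AppKit's main-thread rule: on macOS, window calls such as cv2.namedWindow must not run inside worker threads. A minimal sketch of one workaround pattern (not freemocap's code): threads only grab frames, and the main thread does all the display.

    import queue
    import threading

    import cv2

    frames = queue.Queue(maxsize=2)

    def grab(cam_id):
        # Worker thread: capture only, no GUI calls.
        cap = cv2.VideoCapture(cam_id)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            frames.put((cam_id, frame))

    threading.Thread(target=grab, args=(0,), daemon=True).start()
    while True:
        cam_id, frame = frames.get()
        cv2.imshow("Camera %d" % cam_id, frame)  # GUI work stays on the main thread
        if cv2.waitKey(1) == 27:  # Esc quits
            break
    cv2.destroyAllWindows()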
    
    opened by kognat-docs 13
  • installation requirements or instructions

    I have some undergrad students using this for a class project, and they ran into two installation issues that were easily solved and could be prevented by either modifying the installation requirements or just editing the instructions.

    All on Windows.

    The first is that ipython is not automatically installed, so the start instructions fail. For those with new conda installations, after activating the env, conda install ipython is a simple fix.

    The second error comes after recording, with the end of the Traceback being:

    ~\Miniconda3\envs\freemocap\lib\site-packages\moviepy\video\io\ffmpeg_writer.py in __init__(self, filename, size, fps, codec, audiofile, preset, bitrate, withmask, logfile, threads, ffmpeg_params)
         86             '-s', '%dx%d' % (size[0], size[1]),
         87             '-pix_fmt', 'rgba' if withmask else 'rgb24',
    ---> 88             '-r', '%.02f' % fps,
         89             '-an', '-i', '-'
         90         ]
    
    TypeError: must be real number, not NoneType
    

    This is a problem with moviepy not finding ffmpeg. It's possible to install ffmpeg on Windows and add it to the PATH independently, but I prefer to install it in conda with conda install ffmpeg. However, moviepy can't find it, so after installing ffmpeg, running

    pip uninstall moviepy
    pip install moviepy
    

    works. I didn't have time to try to install ffmpeg first, but I think just adding it to the env creation line should fix the problem.
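
    A quick way to confirm the fix took (a sketch; it assumes moviepy 1.x, which resolves ffmpeg through the imageio-ffmpeg package):

    import imageio_ffmpeg

    # Prints the resolved binary path; raises RuntimeError if no usable
    # ffmpeg can be found.
    print(imageio_ffmpeg.get_ffmpeg_exe())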

    opened by backyardbiomech 8
  • not enough values to unpack (expected 2, got 0)

    Hello,

    Could someone help me solve this problem, please! Thank you in advance for your help.

    Here is what is displayed, noting that I used two webcam-type cameras. [Screenshots attached to the original issue.]

    opened by Ramdane-HACHOUR 7
  • FMC on Greyscale Videos

    Hello! We are running into an issue in which FMC seems to be swapping the skeletons generated from one camera onto the view of another camera (see video). Is this a potential result of using greyscale cameras? The calibration seems to work fine, as the generated video reflects accurate skeletons, just placed on the wrong camera (i.e. camera 1 generates skeleton 1, but skeleton 1 is placed onto the view of camera 3). Has anyone encountered similar issues or have any suggestions on how to resolve this? Thank you in advance!

    https://user-images.githubusercontent.com/114196168/201419033-18be83ad-e852-4a82-b116-9fbfe470bec2.mp4

    opened by DestroytheCity 6
  • Need to raise a more informative Exception/Error when Charuco points not detected (and allow to continue if only 1 camera is selected)

    Currently, if no charuco points are detected, the code fails in this way -

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "C:\Users\jonma\Dropbox\GitKrakenRepos\freemocap\freemocap\__init__.py", line 124, in RunMe
        sesh, board
      File "C:\Users\jonma\Dropbox\GitKrakenRepos\freemocap\freemocap\calibrate.py", line 84, in CalibrateCaptureVolume
        error,merged = cgroup.calibrate_videos(vidnames, board)
      File "C:\Users\jonma\Dropbox\GitKrakenRepos\freemocap\freemocap\fmc_anipose.py", line 1740, in calibrate_videos
        **kwargs
      File "C:\Users\jonma\Dropbox\GitKrakenRepos\freemocap\freemocap\fmc_anipose.py", line 1672, in calibrate_rows
        objp, imgp = zip(*mixed)
    ValueError: not enough values to unpack (expected 2, got 0)
    

    ...which should instead -

    • [ ] Raise an informative Error/Exception/Whatever thing (see the sketch after this list)
    • [ ] (Optional) If only one camera is selected, this is a warning. If more than one selected, this is an Error
    • [ ] (Optional) maybe in general there should be a 'single-camera' mode? so that people can just use this wrapper on their single webcam to play with OpenPose, MediaPipe, DLC, etc?
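
    A minimal sketch of that first checkbox (variable names mirror the traceback above; the function name and message wording are illustrative):

    def unpack_detections(mixed):
        """Split (objp, imgp) pairs, failing loudly when nothing was detected."""
        if not mixed:
            raise ValueError(
                "No Charuco board points were detected in any calibration video. "
                "Make sure each recording starts with an unobstructed view of the board."
            )
        return zip(*mixed)  # usage: objp, imgp = unpack_detections(mixed)
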
    opened by jonmatthis 6
  • Can't use multiple cameras on Ubuntu 20.04

    Hi, thanks for all your work on this project. I'm excited to see where it goes.

    I'm having an issue where I can't seem to get this working with multiple cameras. I have three webcams plugged directly into my laptop (I also tried plugging into a hub, but didn't change anything).

    When I go through the camera setup process and click "Submit", only the first camera lights up and only a blank rectangular window appears.

    This is the output I see:

    __________________________________________
    cv2::videocapture properties for Camera# 0
    CV_CAP_PROP_FRAME_WIDTH: '640.0'
    CV_CAP_PROP_FRAME_HEIGHT : '480.0'
    CAP_PROP_FPS : '30.0'
    CAP_PROP_EXPOSURE : '0.008200820082008202'
    CAP_PROP_POS_MSEC : '0.0'
    CAP_PROP_FRAME_COUNT  : '-1.0'
    CAP_PROP_BRIGHTNESS : '0.5019607843137255'
    CAP_PROP_CONTRAST : '0.12549019607843137'
    CAP_PROP_SATURATION : '0.12549019607843137'
    CAP_PROP_HUE : '-1.0'
    CAP_PROP_GAIN  : '0.20392156862745098'
    CAP_PROP_CONVERT_RGB : '1.0'
    __________________________________________
    

    It looks like it's getting stuck on the first camera and never manages to start up the other cameras?

    It works fine if I select just one camera.

    opened by frnsys 4
  • SPIKE: Look into HackMD / MkDocs Integration for Knowledge Base

    Workflow:

    1. Use mkdocs to create our knowledge base skin
    2. Use HackMd with MkDocs configuration protocols to write KB articles

    Notes: https://www.mkdocs.org/getting-started/

    PR #212

    opened by endurance 4
  • list index out of range on stage 3

    Hey, I've followed the setup but I'm getting the following error on Stage 3 - Calibrate Capture Volume: list index out of range.

    [Screenshots of the error were attached to the original issue.]

    Thanks a lot for any help.

    opened by tomazsaraiva 4
  • Pre-recorded MP4s are not recognized by alpha GUI on Mac/Linux

    Following the process in the documentation for processing previously recorded videos, videos with the file extension MP4 are not recognized on Mac/Linux. The non-capitalized variant mp4 is recognized by Mac/Linux, and both cases should be recognized by windows (although I can't test this personally). The case sensitivity issue is due to the changing behavior of glob.glob() across operating systems, as explained here.

    To resolve this issue, the file search should either be made case insensitive on Mac/Linux to match the Windows behavior, or case sensitive on Windows to match the Mac/Linux behavior (the linked article above describes a function to make glob case sensitive on Windows).
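
    One way to implement the case-insensitive option (a sketch that sidesteps glob entirely; the function name and folder argument are illustrative):

    from pathlib import Path

    def find_mp4s(folder):
        """Case-insensitive .mp4 search that behaves the same on every OS."""
        return sorted(p for p in Path(folder).iterdir() if p.suffix.lower() == ".mp4")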

    opened by philipqueen 0
  • Question Regarding Output Files

    Have successfully gotten FMC off the ground for our project, but we had some questions regarding the output files produced.

    1. In the Mediapipe_body+3d+xyz.csv file produced via the Alpha GUI, what is the unit for time? We see that our videos are roughly 19 seconds long and contain 2063 frames, yet we have 1753 measurements of each key point. Is this the number when all cameras are active and tracking?
    2. How are the x,y,z coordinates established? Are they consistent between videos using the same calibration or different calibrations within the same hallway?

    EDIT: For the Mediapipe_body+3d+xyz.csv, we are also wondering what the coordinates represent and what the unit of each measurement is.

    opened by DestroytheCity 4
  • Set timer to start and stop recording

    Life improvement:

    What? Make an option to start and stop a recording after a set duration.

    e.g. press button, recording starts after 5 seconds and ends 30 seconds after that.

    Why? Because it makes recording alone easier (press button, walk to the recording volume, record), and because it can help standardize patient recordings (e.g. a 30-second sit-to-stand test).

    BONUS POINTS if FreeMoCap plays a sound when the recordings start and stop.
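
    A minimal sketch of the requested flow (the 5 s / 30 s values come from the example above; start_recording and stop_recording are hypothetical callables):

    import time

    def timed_recording(start_recording, stop_recording, delay_s=5, duration_s=30):
        time.sleep(delay_s)      # walk into the recording volume
        print("\a")              # audible cue: recording starts
        start_recording()
        time.sleep(duration_s)
        stop_recording()
        print("\a")              # audible cue: recording stopped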

    opened by steenharsted 0
  • Choose camera to use for orientation and start of global coordinate system

    Down the line, users should be able to control the start and orientation of the global coordinate system using the Charuco board (https://github.com/freemocap/freemocap/issues/282).

    Until that is implemented, a workable fix could be to allow users to select which camera is used for the origin and orientation of the global coordinate system.

    This will allow users to have some control over the global coordinate system by having one camera set at a specific height and with a level orientation.

    It's not perfect, but it would be a great improvement.

    opened by steenharsted 0
  • Set Floor with Charuco board (global coordinate system)

    Add a final option during calibration: "Set Floor".

    Place the charuco board on the floor and use the middle (or a set corner) as (0, 0, 0) in the global coordinate system. Use the orientation of the charuco board to assign the x and z axes (the y-axis will point straight up from the board).

    This would greatly improve FreeMoCap's use in biomechanical settings.
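
    A sketch of the underlying math, assuming the board's pose in camera coordinates (rvec, tvec) has already been estimated (e.g. with cv2.aruco.estimatePoseCharucoBoard); the function name is illustrative:

    import cv2

    def to_board_frame(points_cam, rvec, tvec):
        """Re-express Nx3 camera-frame points in the board's frame, so the
        board plane becomes the floor and its origin becomes (0, 0, 0)."""
        R, _ = cv2.Rodrigues(rvec)                    # rotation: board -> camera
        return (points_cam - tvec.reshape(1, 3)) @ R  # inverse map: camera -> board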

    opened by steenharsted 1
  • Add sample `.blend` file output to repo

    Adding a sample of the blender output somewhere in the repository (or in the documentation...) would be great for helping folks see what freemocap produces!

    Minor 
    opened by trentwirth 0
Releases (v0.0.54)
  • v0.0.54(Jul 16, 2022)

    This is the FreeMoCap pre-alpha release, which creates raw 3D skeletons from USB-connected webcams.

    Here is the relevant README for this version of freemocap v0.0.54: https://github.com/freemocap/freemocap/blob/main/OLD_README.md

    Source code(tar.gz)
    Source code(zip)