Free like Freedom

Overview

This is all very much a work in progress! More to come!

(We're working on it though! Stay tuned!)

Installation

  • Open an Anaconda Prompt (on Windows; any terminal on Mac/Linux) and enter the following commands:

conda create -n freemocap-env python=3.7

conda activate freemocap-env

pip install freemocap -v

ipython

import freemocap as fmc
fmc.RunMe() #this is where the magic happens.

Prerequisites

Required

  • A Python 3.7 environment: We recommend installing Anaconda from here (https://www.anaconda.com/products/individual#Downloads) to create your Python environment.

  • Two or more USB webcams attached to viable USB ports

    • (USB hubs typically don't work)
  • Each recording must (for now) start with an unobstructed view of a Charuco board, generated with the following Python commands (or equivalent):

     import cv2
     
     aruco_dict = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_250) #note `cv2.aruco` can be installed via `pip install opencv-contrib-python`
     
     board = cv2.aruco.CharucoBoard_create(7, 5, 1, .8, aruco_dict)
     
     charuco_board_image = board.draw((2000,2000)) #`2000` is the pixel resolution of the resulting image. Increase this number if printing a large board (bigger is better, especially for large spaces!)
     
     cv2.imwrite('charuco_board_image.png',charuco_board_image)
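
    Before recording, it can help to sanity-check that your printed board is detectable. A minimal sketch using the same contrib aruco API (`board_photo.png` is a hypothetical photo of your printed board):

     import cv2
     
     aruco_dict = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_250)
     img = cv2.imread('board_photo.png') #hypothetical photo of the printed board
     gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
     corners, ids, rejected = cv2.aruco.detectMarkers(gray, aruco_dict)
     print('Detected', 0 if ids is None else len(ids), 'aruco markers') #expect well above zero for a clear view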
     
    

Optional: If you would like to use OpenPose for body tracking, install CUDA and the Windows Portable Demo of OpenPose.

Follow the GitHub Repository and/or Join the Discord (https://discord.gg/HX7MTprYsK) for updates!

Stay Tuned for more soon!

Comments
  • Ubuntu Support?

    The following diffs were needed to make it work under Linux

    +++ b/freemocap/webcam/camsetup.py
    @@ -21,7 +21,7 @@ class VideoSetup(threading.Thread):
             camName = "Camera" + str(self.camID)
     
             cv2.namedWindow(camName)
    -        cap = cv2.VideoCapture(self.camID, cv2.CAP_DSHOW)
    +        cap = cv2.VideoCapture(self.camID, cv2.CAP_ANY)
             cap.set(cv2.CAP_PROP_FRAME_WIDTH, resWidth)
             cap.set(cv2.CAP_PROP_FRAME_HEIGHT, resHeight)
             cap.set(cv2.CAP_PROP_EXPOSURE, exposure)
    diff --git a/freemocap/webcam/checkcams.py b/freemocap/webcam/checkcams.py
    index dda1348..fb4a6a0 100644
    --- a/freemocap/webcam/checkcams.py
    +++ b/freemocap/webcam/checkcams.py
    @@ -3,7 +3,7 @@ import cv2
     
     
     def TestDevice(source):
    -    cap = cv2.VideoCapture(source, cv2.CAP_DSHOW)
    +    cap = cv2.VideoCapture(source, cv2.CAP_ANY)
         # if cap is None or not cap.isOpened():
         # print('Warning: unable to open video source: ', source)
     
    diff --git a/freemocap/webcam/startcamrecording.py b/freemocap/webcam/startcamrecording.py
    index a0cdec1..011d853 100644
    --- a/freemocap/webcam/startcamrecording.py
    +++ b/freemocap/webcam/startcamrecording.py
    @@ -44,7 +44,7 @@ def CamRecording(
         flag = False
     
         cv2.namedWindow(camID)  # name the preview window for the camera its showing
    -    cam = cv2.VideoCapture(camInput, cv2.CAP_DSHOW)  # create the video capture object
    +    cam = cv2.VideoCapture(camInput, cv2.CAP_ANY)  # create the video capture object
         # if not cam.isOpened():
         #         raise RuntimeError('No camera found at input '+ str(camID))
         # pulling out all the dictionary paramters
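
    A cross-platform variant of that change could pick the backend at runtime instead of swapping constants by hand. A sketch (the `open_capture` helper is hypothetical, not part of freemocap):

    import platform
    import cv2

    def open_capture(index):
        # DirectShow on Windows; let OpenCV pick a backend everywhere else
        backend = cv2.CAP_DSHOW if platform.system() == "Windows" else cv2.CAP_ANY
        return cv2.VideoCapture(index, backend)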
    
    

    But while everything works with one camera at a time, it seems to block with two identical cameras.

    The error manifests itself outside of your code.

    I get

    [  806.191512] uvcvideo: Failed to query (SET_CUR) UVC control 4 on unit 1: -32 (exp. 4).
    

    Here is lsusb

    user@host:~/Desktop$ lsusb 
    Bus 001 Device 005: ID 046d:09a4 Logitech, Inc. QuickCam E 3500
    Bus 001 Device 004: ID 046d:09a4 Logitech, Inc. QuickCam E 3500
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 004 Device 002: ID 046d:c542 Logitech, Inc. Wireless Receiver
    Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 002 Device 004: ID 060b:7a16 Solid Year MD800
    Bus 002 Device 003: ID 0627:0001 Adomax Technology Co., Ltd QEMU USB Hub
    Bus 002 Device 002: ID 0409:55aa NEC Corp. Hub
    Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    

    Do the cameras each need to be on a separate bus?

    enhancement question 
    opened by kognat-docs 27
  • Multi Camera Calibration Failure

    My team and I are attempting to use FreeMoCap for a system that records eight views in synchrony. We are currently having issues calibrating all eight cameras, which face opposing directions in our hallway set-up, making it difficult for them all to see one Charuco board at once. To work around this, we printed a double-sided Charuco board, which allowed us to create a 3D skeleton output; however, the skeleton appears both inverted and translated relative to the subject from certain camera angles. We then tried calibrating using only the cameras on one side of the hallway, which all face each other and can observe a Charuco board simultaneously. Still, the output skeleton was translated and inverted, ruling out the double-sided board as the cause. Any suggestions on how to calibrate a system set up like this, or why the skeletons come out inverted/displaced?

    Attached is a link with the videos and data array/calibration files for the 8 camera set-up. https://drive.google.com/drive/folders/1wv4RHzPFDLeIXr9xONC72fhbW3MKQT_q?usp=sharing

    opened by DestroytheCity 18
  • Using Blender from Wrong Session

    I started a new FMC session, yet when running Stage 5, FMC keeps using the .blend from my original project instead of the new one I made for it in the right session folder. It then fails, saying that the file doesn't exist. Any thoughts? (Login replaced with X.)

    Running sesh 2 from /home/X/bac_mj_ar/FreeMocap_Data
    Using blender executable located at: /home/X/bac_mj_ar/FreeMocap_Data/sesh/Session_ID.blend
    Skipping Video Recording
    Skipping Video Syncing
    Skipping Calibration
    Skipping 2d point tracking
    EXPORTING FILES...
    Hijacking Blender's file format converters to export FreeMoCap data as vari…
    Traceback (most recent call last):
      File "/home/X/.conda/envs/freemocap-env/lib/python3.7/site-packages/freemocap/fmc_runme.py", line 335, in RunMe
        stderr=subprocess.PIPE
      File "/home/X/.conda/envs/freemocap-env/lib/python3.7/subprocess.py", line 800, in __init__
        restore_signals, start_new_session)
      File "/home/X/.conda/envs/freemocap-env/lib/python3.7/subprocess.py", line 1551, in _execute_child
        raise child_exception_type(errno_num, err_msg, err_filename)
    FileNotFoundError: [Errno 2] No such file or directory: '/home/X/bac_mj_ar/FreeMocap_Data/sesh/Session_ID.blend --background --python /home/X/.conda/envs/freemocap-env/lib/python3.7/site-packages/freemocap/freemocap_blender_megascript.py -- /home/X/bac_mj_ar/FreeMocap_Data/sesh 2 0': '/home/X/bac_mj_ar/FreeMocap_Data/sesh/Session_ID.blend --background --python /home/X/.conda/envs/freemocap-env/lib/python3.7/site-packages/freemocap/freemocap_blender_megascript.py -- /home/X/bac_mj_ar/FreeMocap_Data/sesh 2 0'
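
    The FileNotFoundError at the bottom suggests the entire Blender command line is being passed to subprocess.Popen as a single string while shell=False, so the OS looks for a file whose name is literally the whole command. A sketch of the usual fix, with hypothetical paths (pass the command as a list of arguments):

    import subprocess

    blender_exe = "/path/to/blender"                          # hypothetical path to the Blender binary
    blend_file = "/path/to/Session_ID.blend"                  # hypothetical session .blend
    megascript = "/path/to/freemocap_blender_megascript.py"   # hypothetical script path

    # With shell=False, each argument must be its own list element;
    # a single string is treated as the executable's filename.
    blender_process = subprocess.Popen(
        [blender_exe, blend_file, "--background", "--python", megascript, "--", "/path/to/session", "2", "0"],
        shell=False,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )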

    opened by DestroytheCity 13
  • macOS BigSur Support

    Seems like the threading is the cause of the issue here.

    (freemocap) MacBook-Pro:freemocap sam$ python runme_FreeMoCap.py 
    Starting initialization for stage 1
      0%|                                                    | 0/20 [00:00<?, ?it/s]Oct  2 15:40:31  ThetaUVC_blender[33664] <Notice>: ------------ ThetaUVC_blender plugin start (version:2.0.1 built:Fri Aug 19 15:54:46 JST 2016 pid=33664 RELEASE). ------------ #thetauvc
      5%|██▏                                         | 1/20 [00:01<00:37,  1.98s/it]OpenCV: out device of bound (0-0): 1
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 2
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 3
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 4
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 5
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 6
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 7
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 8
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 9
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 10
    OpenCV: camera failed to properly initialize!
     55%|███████████████████████▋                   | 11/20 [00:02<00:01,  7.18it/s]OpenCV: out device of bound (0-0): 11
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 12
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 13
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 14
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 15
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 16
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 17
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 18
    OpenCV: camera failed to properly initialize!
    OpenCV: out device of bound (0-0): 19
    OpenCV: camera failed to properly initialize!
    100%|███████████████████████████████████████████| 20/20 [00:02<00:00,  9.17it/s]
    2021-10-02 15:40:41.571 python[33664:261625353] WARNING: NSWindow drag regions should only be invalidated on the Main Thread! This will throw an exception in the future. Called from (
    	0   AppKit                              0x00007fff22b6ded1 -[NSWindow(NSWindow_Theme) _postWindowNeedsToResetDragMarginsUnlessPostingDisabled] + 352
    	1   AppKit                              0x00007fff22b58aa2 -[NSWindow _initContent:styleMask:backing:defer:contentView:] + 1296
    	2   AppKit                              0x00007fff22b5858b -[NSWindow initWithContentRect:styleMask:backing:defer:] + 42
    	3   AppKit                              0x00007fff22e6283c -[NSWindow initWithContentRect:styleMask:backing:defer:screen:] + 52
    	4   cv2.cpython-37m-darwin.so           0x000000010ead0c94 cvNamedWindow + 564
    	5   cv2.cpython-37m-darwin.so           0x000000010eacdd1a _ZN2cv11namedWindowERKNS_6StringEi + 58
    	6   cv2.cpython-37m-darwin.so           0x000000010dc4a487 _ZL23pyopencv_cv_namedWindowP7_objectS0_S0_ + 231
    	7   python                              0x0000000104cd3f2b _PyMethodDef_RawFastCallKeywords + 395
    	8   python                              0x0000000104e0b9bb call_function + 251
    	9   python                              0x0000000104e034eb _PyEval_EvalFrameDefault + 20171
    	10  python                              0x0000000104cd3b15 _PyFunction_FastCallKeywords + 229
    	11  python                              0x0000000104e0b977 call_function + 183
    	12  python                              0x0000000104e03462 _PyEval_EvalFrameDefault + 20034
    	13  python                              0x0000000104cd3b15 _PyFunction_FastCallKeywords + 229
    	14  python                              0x0000000104e0b977 call_function + 183
    	15  python                              0x0000000104e03462 _PyEval_EvalFrameDefault + 20034
    	16  python                              0x0000000104cd3b15 _PyFunction_FastCallKeywords + 229
    	17  python                              0x0000000104e0b977 call_function + 183
    	18  python                              0x0000000104e03462 _PyEval_EvalFrameDefault + 20034
    	19  python                              0x0000000104cd1fea _PyFunction_FastCallDict + 234
    	20  python                              0x0000000104cd66ba method_call + 122
    	21  python                              0x0000000104cd445f PyObject_Call + 127
    	22  python                              0x0000000104ef7e3a t_bootstrap + 122
    	23  python                              0x0000000104e7c764 pythread_wrapper + 36
    	24  libsystem_pthread.dylib             0x00007fff2031a8fc _pthread_start + 224
    	25  libsystem_pthread.dylib             0x00007fff20316443 thread_start + 15
    )
    __________________________________________
    cv2::videocapture properties for Camera# 0
    CV_CAP_PROP_FRAME_WIDTH: '1280.0'
    CV_CAP_PROP_FRAME_HEIGHT : '720.0'
    CAP_PROP_FPS : '30.0'
    CAP_PROP_EXPOSURE : '0.0'
    CAP_PROP_POS_MSEC : '0.0'
    CAP_PROP_FRAME_COUNT  : '0.0'
    CAP_PROP_BRIGHTNESS : '0.0'
    CAP_PROP_CONTRAST : '0.0'
    CAP_PROP_SATURATION : '0.0'
    CAP_PROP_HUE : '0.0'
    CAP_PROP_GAIN  : '0.0'
    CAP_PROP_CONVERT_RGB : '0.0'
    __________________________________________
    2021-10-02 15:40:42.898 python[33664:261625353] WARNING: nextEventMatchingMask should only be called from the Main Thread! This will throw an exception in the future.
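
    The NSWindow warning above matches a macOS constraint: AppKit only allows window calls on the main thread. One possible arrangement (a sketch, not freemocap code) is to let worker threads only grab frames and keep every cv2 window call on the main thread:

    import queue
    import threading
    import cv2

    frame_q = queue.Queue(maxsize=2)

    def grab_frames(cam_index):
        # capture thread: no GUI calls in here
        cap = cv2.VideoCapture(cam_index)
        while cap.isOpened():
            ok, frame = cap.read()
            if ok:
                try:
                    frame_q.put((cam_index, frame), block=False)
                except queue.Full:
                    pass  # drop a frame rather than stall the camera

    threading.Thread(target=grab_frames, args=(0,), daemon=True).start()

    # main thread: all cv2.imshow/waitKey calls happen here
    while True:
        cam_index, frame = frame_q.get()
        cv2.imshow("Camera " + str(cam_index), frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cv2.destroyAllWindows()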
    
    opened by kognat-docs 13
  • installation requirements or instructions

    I have some undergrad students using this for a class project, and they ran into two installation issues that were easily solved, and should be solvable by either modifying the installation requirements, or just editing the instructions.

    All on windows.

    The first is that ipython is not automatically installed, so the start instructions fail. For those with new conda installations, after activating the env, `conda install ipython` is a simple fix.

    The second error comes after recording, with the end of the Traceback being:

    ~\Miniconda3\envs\freemocap\lib\site-packages\moviepy\video\io\ffmpeg_writer.py in __init__(self, filename, size, fps, codec, audiofile, preset, bitrate, withmask, logfile, threads, ffmpeg_params)
         86             '-s', '%dx%d' % (size[0], size[1]),
         87             '-pix_fmt', 'rgba' if withmask else 'rgb24',
    ---> 88             '-r', '%.02f' % fps,
         89             '-an', '-i', '-'
         90         ]
    
    TypeError: must be real number, not NoneType
    

    This is a problem with moviepy not finding ffmpeg. It's possible to install ffmpeg on Windows and add it to the path independently, but I prefer to install it in conda with `conda install ffmpeg`. However, moviepy can't find it, so after installing ffmpeg, running

    pip uninstall moviepy
    pip install moviepy
    

    works. I didn't have time to try to install ffmpeg first, but I think just adding it to the env creation line should fix the problem.
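
    For example, the environment-creation line from the installation instructions could be extended like this (an untested sketch):

    conda create -n freemocap-env python=3.7 ipython ffmpeg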

    opened by backyardbiomech 8
  • not enough values to unpack (expected 2, got 0)

    Hello,

    Could someone help me solve this problem, please! Thank you in advance for your help.

    Here is what is displayed for me, using two webcam-type cameras. (Screenshots were attached to the original issue.)

    opened by Ramdane-HACHOUR 7
  • FMC on Greyscale Videos

    Hello! We are running into an issue in which FMC seems to be swapping the skeletons generated from one camera onto the view of another camera (see video). Is this a potential result of using greyscale cameras? The calibration seems to work fine, as the generated video reflects accurate skeletons, just placed on the wrong camera (i.e. camera 1 generates skeleton 1, but skeleton 1 is placed onto the view of camera 3). Has anyone encountered similar issues or have any suggestions on how to resolve this? Thank you in advance!

    https://user-images.githubusercontent.com/114196168/201419033-18be83ad-e852-4a82-b116-9fbfe470bec2.mp4

    opened by DestroytheCity 6
  • Need to raise a more informative Exception/Error when Charuco points not detected (and allow to continue if only 1 camera is selected)

    Currently, if no charuco points are detected, the code fails in this way -

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "C:\Users\jonma\Dropbox\GitKrakenRepos\freemocap\freemocap\__init__.py", line 124, in RunMe
        sesh, board
      File "C:\Users\jonma\Dropbox\GitKrakenRepos\freemocap\freemocap\calibrate.py", line 84, in CalibrateCaptureVolume
        error,merged = cgroup.calibrate_videos(vidnames, board)
      File "C:\Users\jonma\Dropbox\GitKrakenRepos\freemocap\freemocap\fmc_anipose.py", line 1740, in calibrate_videos
        **kwargs
      File "C:\Users\jonma\Dropbox\GitKrakenRepos\freemocap\freemocap\fmc_anipose.py", line 1672, in calibrate_rows
        objp, imgp = zip(*mixed)
    ValueError: not enough values to unpack (expected 2, got 0)
    

    ...which should instead -

    • [ ] Raise an informative Error/Exception/Whatever thing (see the sketch after this list)
    • [ ] (Optional) If only one camera is selected, this is a warning. If more than one is selected, this is an Error
    • [ ] (Optional) maybe in general there should be a 'single-camera' mode? so that people can just use this wrapper on their single webcam to play with OpenPose, MediaPipe, DLC, etc?
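
    A sketch of the kind of guard the first checkbox describes (hypothetical; in freemocap it would sit just before the `zip(*mixed)` call in fmc_anipose.py):

    # before unpacking the detected board points
    if not mixed:
        raise RuntimeError(
            "No Charuco board points were detected in the calibration videos. "
            "Make sure each recording starts with an unobstructed view of the board."
        )
    objp, imgp = zip(*mixed)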
    opened by jonmatthis 6
  • Can't use multiple cameras on Ubuntu 20.04

    Hi, thanks for all your work on this project. I'm excited to see where it goes.

    I'm having an issue where I can't seem to get this working with multiple cameras. I have three webcams plugged directly into my laptop (I also tried plugging into a hub, but didn't change anything).

    When I go through the camera setup process and click "Submit", only the first camera lights up and only a blank rectangular window appears.

    This is the output I see:

    __________________________________________
    cv2::videocapture properties for Camera# 0
    CV_CAP_PROP_FRAME_WIDTH: '640.0'
    CV_CAP_PROP_FRAME_HEIGHT : '480.0'
    CAP_PROP_FPS : '30.0'
    CAP_PROP_EXPOSURE : '0.008200820082008202'
    CAP_PROP_POS_MSEC : '0.0'
    CAP_PROP_FRAME_COUNT  : '-1.0'
    CAP_PROP_BRIGHTNESS : '0.5019607843137255'
    CAP_PROP_CONTRAST : '0.12549019607843137'
    CAP_PROP_SATURATION : '0.12549019607843137'
    CAP_PROP_HUE : '-1.0'
    CAP_PROP_GAIN  : '0.20392156862745098'
    CAP_PROP_CONVERT_RGB : '1.0'
    __________________________________________
    

    It looks like it's getting stuck on the first camera and never manages to start up the other cameras?

    It works fine if I select just one camera.

    opened by frnsys 4
  • SPIKE: Look into HackMD / MkDocs Integration for Knowledge Base

    Workflow:

    1. Use MkDocs to create our knowledge base skin
    2. Use HackMD with MkDocs configuration protocols to write KB articles

    Notes: https://www.mkdocs.org/getting-started/

    PR #212

    opened by endurance 4
  • list index out of range on stage 3

    Hey, I've followed the setup but I'm getting the following error on Stage 3 - Calibrate Capture Volume: list index out of range.

    (Screenshots of the error were attached to the original issue.)

    Thanks a lot for any help.

    opened by tomazsaraiva 4
  • Pre-recorded MP4s are not recognized by alpha GUI on Mac/Linux

    Following the process in the documentation for processing previously recorded videos, videos with the file extension MP4 are not recognized on Mac/Linux. The non-capitalized variant mp4 is recognized on Mac/Linux, and both cases should be recognized on Windows (although I can't test this personally). The case sensitivity issue is due to the differing behavior of glob.glob() across operating systems, as explained here.

    To resolve this issue, the file search should either be made case insensitive on Mac/Linux to match the Windows behavior, or case sensitive on Windows to match the Mac/Linux behavior (the linked article above describes a function to make glob case sensitive on Windows).
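
    One way to make the search case-insensitive on every operating system (a sketch; `video_dir` stands in for the session's video folder):

    from pathlib import Path

    video_dir = "path/to/synced_videos"  # hypothetical session folder
    # match .mp4 in any casing (.mp4, .MP4, ...) regardless of OS
    video_paths = sorted(p for p in Path(video_dir).iterdir() if p.suffix.lower() == ".mp4")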

    opened by philipqueen 0
  • Question Regarding Output Files

    Have successfully gotten FMC off the ground for our project, but we had some questions regarding the output files produced.

    1. In the Mediapipe_body+3d+xyz.csv file produced via the Alpha GUI, what is the unit for time? We see that our videos are roughly 19 seconds long and contain 2063 frames, yet we have 1753 measurements of each key point. Is this the number when all cameras are active and tracking?
    2. How are the x,y,z coordinates established? Are they consistent between videos using the same calibration or different calibrations within the same hallway?

    EDIT: For the Mediapipe_body+3d+xyz.csv, also wondering what the coordinates represent/what the unit for each measurement is.

    opened by DestroytheCity 4
  • Set timer to start and stop recording

    Life improvement:

    What? Make an option to start and stop a recording after a set duration.

    e.g. press a button; recording starts after 5 seconds and ends 30 seconds after that.

    Why? Because it makes recording alone easier (press a button, walk into the recording volume, record), and because it can help standardize patient recordings (e.g. a 30-second sit-to-stand test).

    BONUS POINTS if FreeMoCap plays a sound when the recordings start and stop.
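
    A minimal sketch of what that could look like (hypothetical helper, not freemocap API; `record_frame` and `beep` are caller-supplied callables):

    import time

    def timed_recording(record_frame, start_delay_s=5, duration_s=30, beep=None):
        time.sleep(start_delay_s)         # walk into the capture volume
        if beep:
            beep()                        # optional start sound
        stop_at = time.monotonic() + duration_s
        while time.monotonic() < stop_at:
            record_frame()                # grab-and-write one frame per camera
        if beep:
            beep()                        # optional stop sound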

    opened by steenharsted 0
  • Choose camera to use for orientation and origin of global coordinate system

    Down the line, users should be able to control the start and orientation of the global coordinate system using the Charuco board (https://github.com/freemocap/freemocap/issues/282).

    Until that is implemented, a workable fix could be to allow users to select which camera is used for the origin and orientation of the global coordinate system.

    This will allow users to have some control over the global coordinate system by having one camera set at a specific height and with a level orientation.

    It's not perfect, but it would be a great improvement.

    opened by steenharsted 0
  • Set Floor with Charuco board (global coordinate system)

    Add a final option during calibration, "Set Floor"

    Place the Charuco board on the floor and use the middle (or a set corner) as 0,0,0 in the global coordinate system. Use the orientation of the board to assign the x and z axes (the y-axis will point straight up from the board).

    This would greatly improve FreeMoCap's use in biomechanical settings.
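
    A sketch of how the board pose could seed such a transform, reusing the contrib aruco API from the Prerequisites section (the floor image and camera intrinsics are placeholder inputs):

    import cv2
    import numpy as np

    aruco_dict = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_250)
    board = cv2.aruco.CharucoBoard_create(7, 5, 1, .8, aruco_dict)

    gray = cv2.imread('floor_frame.png', cv2.IMREAD_GRAYSCALE)  # hypothetical frame of the board on the floor
    camera_matrix = np.eye(3)  # placeholder: use this camera's calibrated intrinsics
    dist_coeffs = np.zeros(5)  # placeholder: use this camera's distortion coefficients

    corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict)
    _, ch_corners, ch_ids = cv2.aruco.interpolateCornersCharuco(corners, ids, gray, board)
    ok, rvec, tvec = cv2.aruco.estimatePoseCharucoBoard(
        ch_corners, ch_ids, board, camera_matrix, dist_coeffs, None, None)

    # rvec/tvec give the board's pose in camera coordinates; inverting that
    # transform maps camera-frame points into a board-centered (floor) frame.
    R, _ = cv2.Rodrigues(rvec)
    point_cam = np.array([0.0, 0.0, 2.0])             # example point in camera coordinates
    point_floor = R.T @ (point_cam - tvec.reshape(3))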

    opened by steenharsted 1
  • Add sample `.blend` file output to repo

    Adding a sample of the blender output somewhere in the repository (or in the documentation...) would be great for helping folks see what freemocap produces!

    Minor 
    opened by trentwirth 0
Releases (v0.0.54)
  • v0.0.54 (Jul 16, 2022)

    This is the FreeMoCap Pre-Alpha release for creating raw 3D skeletons from USB-connected webcams.

    Here is the relevant README for this version of freemocap v0.0.54: https://github.com/freemocap/freemocap/blob/main/OLD_README.md
