TrackTech: Real-time tracking of subjects and objects on multiple cameras

Overview


This project is part of the 2021 spring bachelor final project of the Bachelor of Computer Science at Utrecht University. The team that worked on the project consists of eleven students from the Bachelor of Computer Science and Bachelor of Game Technology. This project has been done for educational purposes. All code is open-source, and proper credit is given to respective parties.

GPU support

Updating/Installing drivers

Update the GPU drivers and restart the system for the changes to take effect. Optionally, use a different driver from the list shown by ubuntu-drivers devices:

sudo apt install nvidia-driver-460
sudo reboot

Installing the container toolkit

Add the distribution, update the package manager, install NVIDIA for Docker, and restart Docker for the changes to take effect. For more information, see the install guide.

distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
   && curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add - \
   && curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt update
sudo apt install -y nvidia-docker2
sudo systemctl restart docker

Acquire the GPU ID

Read the GPU UUID, e.g. GPU-a1b2c3d (just the first part), from the output of:

nvidia-smi -a
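Picking the UUID prefix out of the nvidia-smi output can also be scripted. A minimal Python sketch, not part of the project's code; the sample output and regex are illustrative:

```python
import re

def gpu_uuid_prefix(smi_output: str) -> str:
    """Extract the first segment of the GPU UUID (e.g. GPU-a1b2c3d4)
    from the output of `nvidia-smi -a`."""
    match = re.search(r"GPU UUID\s*:\s*(GPU-[0-9a-fA-F]+)", smi_output)
    if match is None:
        raise ValueError("no GPU UUID found in nvidia-smi output")
    return match.group(1)

# Illustrative excerpt of `nvidia-smi -a` output:
sample = "    GPU UUID                        : GPU-a1b2c3d4-e5f6-7890-abcd-ef1234567890"
print(gpu_uuid_prefix(sample))  # GPU-a1b2c3d4
```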

Add the resource

Add the GPU UUID from the last step to the Docker engine configuration file typically at /etc/docker/daemon.json. Create the file if it does not exist yet.

{
  "runtimes": {
    "nvidia": {
      "path": "/usr/bin/nvidia-container-runtime",
      "runtimeArgs": []
    }
  },
  "default-runtime": "nvidia",
  "node-generic-resources": ["gpu=GPU-a1b2c3d"]
}
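Editing /etc/docker/daemon.json by hand works, but merging the keys programmatically avoids clobbering an existing configuration. A hedged Python sketch (the function name is ours; run it against the real path with root privileges):

```python
import json
from pathlib import Path

def add_gpu_resource(daemon_json: Path, gpu_uuid: str) -> None:
    """Merge the nvidia runtime and a GPU generic resource into the Docker
    engine configuration, creating the file if it does not exist yet."""
    config = json.loads(daemon_json.read_text()) if daemon_json.exists() else {}
    config.setdefault("runtimes", {})["nvidia"] = {
        "path": "/usr/bin/nvidia-container-runtime",
        "runtimeArgs": [],
    }
    config["default-runtime"] = "nvidia"
    config["node-generic-resources"] = [f"gpu={gpu_uuid}"]
    daemon_json.write_text(json.dumps(config, indent=2))

# Example against a scratch file instead of /etc/docker/daemon.json:
add_gpu_resource(Path("daemon.json"), "GPU-a1b2c3d")
```

Restart Docker afterwards (sudo systemctl restart docker) so the engine picks up the new resource.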

Pylint

We use Pylint for Python code quality assurance.

Installation

Run the following command in a terminal:

pip install pylint

Run

To run linting on the entire repository, run the following command from the root: pylint CameraProcessor docs Interface ProcessorOrchestrator utility VideoForwarder --rcfile=.pylintrc --reports=n

Explanation

pylint --rcfile=.pylintrc --reports=n

pylint is the Python module being run.

--rcfile is the linting specification used by Pylint.

--reports sets whether the full report should be displayed. We recommend n, since this only displays linting errors/warnings and the final score.

Constraints

Pylint needs an __init__.py file in the subsystem root to parse all folders to lint. The run must therefore target a subsystem, since the repository root does not contain an __init__.py file.

Ignoring folders from linting

Some folders should be excluded from linting, for example the symlinked algorithms in the CameraProcessor folder or the Python virtual environment folder. Add the folder name to ignore= in .pylintrc.
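For example, an ignore= entry in .pylintrc could look like this (the folder names venv and algorithms are illustrative, not the project's actual exclusions):

```ini
[MASTER]
# Comma-separated folder names skipped entirely during linting.
ignore=venv,algorithms
```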

Comments
  • FFT: Spc 414 reconnect when stream suddenly stops


It works, but there are a lot of caveats.

If the forwarder comes back online and the processor reconnects and starts sending boxes again before the interface reloads, the sync should be fine.

    If the forwarder comes back and the interface reloads before the processor starts sending boxes again, sync seems inconsistent. Sometimes it's fine, other times there is a small desync. Desync can be fixed fairly reliably by manually pausing the stream for a few seconds. I think this could be fixed with the 'hack' that makes the video jump a little bit after loading. But this was removed earlier because the jump was considered annoying and not a good fix.

If the forwarder is not back yet when the interface reloads, there is a chance of ending up with a video.js error, which requires a full page reload to fix.

TL;DR: as long as the processor is sending boxes before the interface reloads, sync should remain acceptable.

    IMO it's at least better than nothing.

    opened by BrianVanB 2
  • FFT: Remove camera id from the configs.ini


The camera ID should not be in the CameraProcessor configuration, since it is required to be specified in the environment. Otherwise it is possible to mistakenly start a camera processor without being guaranteed to have thought about the ID.

    opened by GerardvSchie 1
  • Spc 801 pylint enforce class and file name equal


class ITracker requires the file name i_tracker.py; class Tracker requires the file name tracker.py.

Stricter linting was implemented, and the impacted files were renamed according to the enforced standards.

    opened by GerardvSchie 1
  • SPC-728 implement reidentification as a scheduler component


Extended the scheduler to also use globals (objects that do not change during a scheduling iteration, i.e. one graph traversal).

Allowed multiple inputs to the initial node (previously exactly one was required).

Added the re-id stage and the frame buffer (which uses the output of the re-id stage) to the scheduler.

The scheduler only schedules the start node if it is immediately ready. This may or may not be favourable and has the following consequences:

    • Only nodes connected to the start node are executed, but only one start node is allowed.
    • If a node is included in the plan only via globals, it will not get executed.
    opened by tim-van-kemenade 1
  • FIX: SPC 662 fix warning when stream buffers


If a stream buffers on first play, it would spam the console with a warning that something is undefined. Fixed by adding checks that everything is defined before accessing it.

    opened by BrianVanB 1
Releases(v1.0.0)
  • v1.0.0(Jun 29, 2021)

    Release v1.0.0

The following release notes contain a brief overview of each component and its features. The currently known bugs can be found underneath.

    Features

    Processor

    The Camera Processor handles the core processing, using detection, tracking, and re-identification algorithms on an image or video feed. It can swap algorithms via new implementations of subclasses of the relevant superclass. Currently implemented are YOLOv5 and YOLOR for detection, SORT for tracking, and TorchReid and FastReid for re-identification.

    Multiple input methods

The processor processes OpenCV frames, so it can process any source that can be turned into a sequence of frames. The supported sources are implemented via a capture interface. The available captures are HLS, video stream, webcam, and an image folder. HLS is how a video feed is received over the internet; this capture performs extra work to add proper timestamps to the feed.

    Plug and play for main pipeline components

    The main pipeline contains a detection, tracking, and re-identification phase. All these phases are implemented and adhere to the interface belonging to the phase. Implementing another algorithm that conforms to this interface would allow for the algorithm to be loaded in via the configuration. This way, many different algorithms can be defined and swapped when needed.
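The interface-per-phase pattern described above could be sketched as follows. This is a minimal illustration, not the project's actual API; the names IDetector, StubDetector, DETECTORS, and load_detector are assumptions:

```python
from abc import ABC, abstractmethod

class IDetector(ABC):
    """Interface every detection-stage implementation must adhere to."""
    @abstractmethod
    def detect(self, frame):
        """Return a list of bounding boxes for the given frame."""

class StubDetector(IDetector):
    """Trivial implementation that detects nothing."""
    def detect(self, frame):
        return []

# Hypothetical registry mapping configuration names to implementations;
# a new algorithm only needs a subclass and a registry entry.
DETECTORS = {"stub": StubDetector}

def load_detector(config: dict) -> IDetector:
    """Instantiate the detector named in the configuration."""
    return DETECTORS[config["detector"]]()

detector = load_detector({"detector": "stub"})
print(detector.detect(frame=None))  # []
```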

    Scheduler

Create a node structure representing a graph, and the scheduler will handle the scheduling of all nodes in each graph iteration. This avoids rewriting components such as the pipeline when the shape of the pipeline changes significantly. These graphs are called plans, and multiple self-contained plans can be created and swapped in place.
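A minimal sketch of such plan execution, assuming each node runs once all of its inputs are ready; the plan shape, node names, and run_plan helper are illustrative, not the project's scheduler:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

def run_plan(plan: dict, funcs: dict, inputs: dict) -> dict:
    """Execute a plan (node -> list of predecessor nodes): each node is
    scheduled once all of its inputs are ready, one graph traversal per
    iteration."""
    results = dict(inputs)
    for node in TopologicalSorter(plan).static_order():
        if node in results:          # start node: value supplied directly
            continue
        args = [results[dep] for dep in plan[node]]
        results[node] = funcs[node](*args)
    return results

# Illustrative plan: detect -> track -> reid.
plan = {"detect": [], "track": ["detect"], "reid": ["track"]}
funcs = {"track": lambda d: d + ["track"], "reid": lambda t: t + ["reid"]}
out = run_plan(plan, funcs, inputs={"detect": ["detect"]})
print(out["reid"])  # ['detect', 'track', 'reid']
```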

Multiple output methods: deploy, opencv, tornado

    The processor has three output methods: deploy, opencv, and tornado. Deploy sends information about the processed frame to the orchestrator, which sends it to other processors or the interface. OpenCV displays the processed frames in an OpenCV window. Tornado displays the same OpenCV output but does so in a dedicated webpage. It is discouraged to use the tornado mode for anything other than development since it takes a heavy toll on performance.

    Training of algorithms

Both the detection and the re-identification algorithms can be trained with custom datasets. Instructions on how to train these individual components can be found here. The tracking is not a neural-network-based implementation and can therefore not be trained.

    Accuracy measurement and metrics

Several metrics were implemented for determining the accuracy of the detection, the tracking, and the re-identification. The detection uses the Mean Average Precision metric. The tracking uses the MOT metric. The re-identification uses the Mean Average Precision and Rank-1 metrics. An extensive explanation of the used accuracy metrics can be found here.

    Interface

    A tornado-based webpage interface is used to view the video feeds as well as the detected bounding boxes. It features automated syncing for different camera feeds and their bounding boxes. It has options to select classification types to detect and swap camera focus. The user can click on a bounding box to start tracking an object. The interface features a timeline that keeps track of when and for how long a subject has appeared on each camera for a clear overview.

    Automated bounding box syncing

When the interface receives bounding boxes from the orchestrator and a video stream from the forwarder, it tries to match each box to the frame it belongs to, internally using frame ids. This saves the user from manually setting a box/video delay to synchronize them.
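The frame-id matching described above can be sketched in a few lines. The field names frame_id and boxes are assumptions for illustration, not the project's actual message format:

```python
def match_boxes_to_frames(frames, box_messages):
    """Pair each bounding-box message with the buffered frame that carries
    the same frame id, so no manual box/video delay is needed."""
    by_id = {frame["frame_id"]: frame for frame in frames}
    matched = []
    for msg in box_messages:
        frame = by_id.get(msg["frame_id"])
        if frame is not None:        # drop boxes for frames not buffered
            matched.append((frame, msg["boxes"]))
    return matched

frames = [{"frame_id": 1}, {"frame_id": 2}]
boxes = [{"frame_id": 2, "boxes": [(10, 20, 30, 40)]},
         {"frame_id": 9, "boxes": []}]
print(match_boxes_to_frames(frames, boxes))
# [({'frame_id': 2}, [(10, 20, 30, 40)])]
```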

    Timelines

Timelines is a page where the history of all tracked objects can be found. This is useful to see where an object was during the time it was tracked. While an object is still being tracked, its cutout is visible next to the object id.

    Forwarder

    Adaptive bitrate

    The forwarder can convert a single incoming stream (like RTMP or RTSP) to multiple bitrate output streams. This way, the stream bitrate can be adapted according to available bandwidth.

    Other

    Security

    OAuth2 is used to make sure only authorized people can access services they should be able to access. Using authentication is optional and can be ignored when developing or testing.

    Docker Images

    Each component contains a Dockerfile used to build images. These images are publicly available on Dockerhub. This allows for easy downloading and deployment.

    Known bugs

    Syncing

    The synchronization of the bounding boxes and the video stream on the interface sometimes mismatch, causing the bounding boxes to have an offset compared to the expected location. Sometimes this can be fixed by pausing the video for a few seconds, but not always.

    Authentication between processor and forwarder

The OpenCV library used to pull in the video from the forwarder does not allow any header to be added to the requests. This means that authentication needs to be disabled for local requests. Luckily, most orchestration tools (like Docker Swarm) allow for selectively opening ports to the outside. We allowed unauthenticated forwarder access over port 80 on HTTP (as auth should not be done over an unencrypted connection), which can be used by the processors.

    Processor does not properly handle memory paging on some computers

This issue only occurred on one computer, which had too little memory to handle the processor. The team could not reproduce the bug on other memory-constrained computers. On this computer, the paging file size keeps increasing until there is no more disk space left, eventually resulting in a processor crash. The processor's memory profile does not grow over time, so a system that has enough memory to run for 10 minutes should be able to run for 24 hours or longer. The only memory consumption that increases over time is the feature maps of tracked objects, but these vectors take up little space, and it is generally expected that there are not that many tracked objects.

    Source code(tar.gz)
    Source code(zip)