Deep learned, hardware-accelerated 3D object pose estimation

Isaac ROS Pose Estimation

Overview

This repository provides NVIDIA GPU-accelerated packages for 3D object pose estimation. Using a deep learned pose estimation model and a monocular camera, the isaac_ros_dope and isaac_ros_centerpose packages can estimate the 6 DOF pose of a target object.

Packages in this repository rely on accelerated DNN model inference using Triton or TensorRT from Isaac ROS DNN Inference.

System Requirements

This Isaac ROS package is designed and tested to be compatible with ROS2 Foxy on Jetson hardware, as well as on x86 systems with an NVIDIA GPU. On x86 systems, packages are only supported when run in the provided Isaac ROS Dev Docker container.

Jetson

  • AGX Xavier or Xavier NX
  • JetPack 4.6

x86_64 (in Isaac ROS Dev Docker Container)

  • CUDA 11.1+ supported discrete GPU
  • VPI 1.1.11
  • Ubuntu 20.04+

Note: For best performance on Jetson, ensure that power settings are configured appropriately (Power Management for Jetson).
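
For example, a common way to select a maximum-performance power mode and lock the clocks on Jetson is shown below; the exact mode IDs vary by module, so treat these commands as illustrative rather than required values.

    sudo nvpmodel -m 0      # select a maximum-performance power mode (mode IDs vary by module)
    sudo jetson_clocks      # pin clocks to their maximum for the selected mode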

Docker

You need to use the Isaac ROS development Docker image from Isaac ROS Common, based on the version 21.08 image from Deep Learning Frameworks Containers.

You must first install the NVIDIA Container Toolkit to make use of the Docker container development/runtime environment.

Configure nvidia-container-runtime as the default runtime for Docker by editing /etc/docker/daemon.json to include the following:

    {
        "runtimes": {
            "nvidia": {
                "path": "nvidia-container-runtime",
                "runtimeArgs": []
            }
        },
        "default-runtime": "nvidia"
    }

and then restart Docker:

    sudo systemctl daemon-reload && sudo systemctl restart docker

Run the following script in isaac_ros_common to build the image and launch the container on x86_64 or Jetson:

$ scripts/run_dev.sh <optional_path>

Dependencies

Setup

  1. Create a ROS2 workspace if one is not already prepared:

    mkdir -p your_ws/src
    

    Note: The workspace can have any name; this guide assumes you name it your_ws.

  2. Clone the Isaac ROS Pose Estimation, Isaac ROS DNN Inference, and Isaac ROS Common package repositories to your_ws/src. Check that you have Git LFS installed before cloning to pull down all large files:

    sudo apt-get install git-lfs
    
    cd your_ws/src
    git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_pose_estimation
    git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_dnn_inference
    git clone https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_common
    
  3. Start the Docker interactive workspace:

    isaac_ros_common/scripts/run_dev.sh your_ws
    

    After this command, you will be inside the container at /workspaces/isaac_ros-dev. Running this command in different terminals will attach them to the same container.

    Note: The rest of this README assumes that you are inside this container.

  4. Build and source the workspace:

    cd /workspaces/isaac_ros-dev
    colcon build && . install/setup.bash
    

    Note: We recommend rebuilding the workspace each time source files are edited. To rebuild, first clean the workspace by running rm -r build install log; a combined example is shown after step 5.

  5. (Optional) Run tests to verify complete and correct installation:

    colcon test --executor sequential
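
    Putting steps 4 and 5 together, a full clean rebuild and test run inside the container looks like the following; colcon test-result is a standard colcon command for summarizing test outcomes.

    cd /workspaces/isaac_ros-dev
    rm -r build install log
    colcon build && . install/setup.bash
    colcon test --executor sequential
    colcon test-result --verbose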
    

Package Reference

isaac_ros_dope

Overview

The isaac_ros_dope package offers functionality for detecting objects of a specific object type in images and estimating those objects' 6 DOF (degrees of freedom) poses using a trained DOPE (Deep Object Pose Estimation) model. This package sets up pre-processing using the DNN Image Encoder node, performs inference on images by leveraging the TensorRT node, and provides a decoder that converts the DOPE network's output into an array of 6 DOF poses.

The model provided is taken from the official DOPE GitHub repository published by NVIDIA Research. To get a model, visit the PyTorch DOPE model collection here, and use the script under isaac_ros_dope/scripts to convert the PyTorch model to ONNX, which can be ingested by the TensorRT node (this script can only be executed on an x86 machine). However, the package should also work if you train your own DOPE model with an input image size of [480, 640]. For instructions on training your own DOPE model, see the README in the official DOPE GitHub repository.

Package Dependencies

Available Components

DopeDecoderNode

Topics Subscribed
  • belief_map_array: The tensor that represents the belief maps, which are outputs from the DOPE network.

Topics Published
  • dope/pose_array: An array of poses of the objects detected by the DOPE network and interpreted by the DOPE decoder node.

Parameters
  • queue_size: The length of the subscription queues, which is rmw_qos_profile_default.depth by default.
  • frame_id: The frame ID that the DOPE decoder node will write to the header of its output messages.
  • configuration_file: The name of the configuration file to parse. Note: the node will look for that file name under isaac_ros_dope/config. By default there is a configuration file under that directory named dope_config.yaml.
  • object_name: The object class the DOPE network is detecting and the DOPE decoder is interpreting. This name should be listed in the configuration file along with its corresponding cuboid dimensions.

Configuration

You will need to specify an object type in the DopeDecoderNode that is listed in the dope_config.yaml file, so the DOPE decoder node will pick the right parameters to transform the belief maps from the inference node into object poses. The dope_config.yaml file uses the camera intrinsics of a RealSense camera by default; if you are using a different camera, you will need to modify the camera_matrix field with the new, scaled (640x480) camera intrinsics.
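
As an illustration only (check the layout of the shipped dope_config.yaml under isaac_ros_dope/config before editing), the camera_matrix entry holds a 3x3 intrinsics matrix, shown here flattened in row-major order; the focal lengths and principal point below are placeholder values for a 640x480 image:

    # Hypothetical excerpt of dope_config.yaml; match the exact layout of the shipped file
    camera_matrix: [616.0, 0.0, 320.0, 0.0, 616.0, 240.0, 0.0, 0.0, 1.0]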

isaac_ros_centerpose

Overview

The isaac_ros_centerpose package offers functionality for detecting objects of a specific class in images and estimating those objects' 6 DOF (degrees of freedom) poses using a trained CenterPose model. Just like DOPE, this package sets up pre-processing using the DNN Image Encoder node, performs inference on images by leveraging an inference node (either the TensorRT or Triton node), and provides a decoder that converts the CenterPose network's output into an array of 6 DOF poses.

The model provided is taken from the official CenterPose GitHub repository published by NVIDIA Research. To get a model, visit the PyTorch CenterPose model collection here, and use the script under isaac_ros_centerpose/scripts to convert the PyTorch model to ONNX, which can be ingested by the TensorRT node. However, the package should also work if you train your own CenterPose model with an input image size of [512, 512]. For instructions on training your own CenterPose model, see the README in the official CenterPose GitHub repository.

Package Dependencies

Available Components

CenterPoseDecoderNode

Topics Subscribed
  • tensor_sub: The TensorList that contains the outputs of the CenterPose network.

Topics Published
  • object_poses: A MarkerArray representing the poses of objects detected by the CenterPose network and interpreted by the CenterPose decoder node.

Parameters
  • camera_matrix: A row-major array of 9 floats that represents the camera intrinsics matrix K.
  • original_image_size: An array of two floats that represents the size of the original image passed into the image encoder. The first element is the width and the second is the height.
  • output_field_size: An array of two integers that represents the size of the 2D keypoint decoding from the network output. The default value is [128, 128].
  • height: Scales the cuboid used for calculating the size of the detected objects.
  • frame_id: The frame ID that the CenterPose decoder node will write to the header of its output messages. The default value is centerpose.
  • marker_color: An array of 4 floats representing the RGBA color that RViz uses to visualize the marker. Each value should be between 0.0 and 1.0. The default value is (1.0, 0.0, 0.0, 1.0), which is red.

Configuration

The default parameters for the CenterPoseDecoderNode are defined in the decoders_param.yaml file under isaac_ros_centerpose/config. The decoders_param.yaml file uses the camera intrinsics of a RealSense camera by default; if you are using a different camera, you will need to modify the camera_matrix field.
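
A hedged sketch of the relevant entries follows; the parameter names come from the table above, but the surrounding file structure and the numeric values (other than the documented defaults) are illustrative only:

    # Illustrative values; replace camera_matrix with your camera's intrinsics
    camera_matrix: [616.0, 0.0, 320.0, 0.0, 616.0, 240.0, 0.0, 0.0, 1.0]
    original_image_size: [640.0, 480.0]   # width, height of the image fed to the encoder
    output_field_size: [128, 128]         # default 2D keypoint decoding size
    height: 0.1                           # scales the cuboid used for object size (illustrative)
    frame_id: 'centerpose'                # default frame ID
    marker_color: [1.0, 0.0, 0.0, 1.0]    # RGBA, red by default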

Network Outputs

The CenterPose network has 7 different outputs:

  • hm: Object center heatmap
  • wh: 2D bounding box size
  • hps: Keypoint displacements
  • reg: Sub-pixel offset
  • hm_hp: Keypoint heatmaps
  • hp_offset: Sub-pixel offsets for keypoints
  • scale: Relative cuboid dimensions

For more context and explanation, refer to Figure 2 of the CenterPose paper, which illustrates these outputs.

Walkthroughs

Inference on DOPE using TensorRT

  1. Select a DOPE model by visiting the DOPE model collection available on the official DOPE GitHub repository here. For example, download Ketchup.pth into /tmp/models.

  2. In order to run PyTorch models with TensorRT, one option is to export the model into an ONNX file using the script provided under /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/scripts/dope_converter.py:

    python3 /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/scripts/dope_converter.py --format onnx --input /tmp/models/Ketchup.pth
    

    The output ONNX file will be located at /tmp/models/Ketchup.onnx.

    Note: The DOPE decoder currently works with the output of a DOPE network that has a fixed input size of 640 x 480, which are the default dimensions set in the script. In order to use input images of other sizes, make sure to crop/resize using ROS2 nodes from Isaac ROS Image Pipeline or similar packages.

  3. Modify the following values in the launch file /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/launch/isaac_ros_dope_tensor_rt.launch.py:

    'model_file_path': '/tmp/models/Ketchup.onnx'
    'object_name': 'Ketchup'
    

    Note: Modify the parameters object_name and model_file_path in the launch file if you are using another model. object_name should correspond to one of the objects listed in the DOPE configuration file, and the specified model should be a DOPE model that is trained for that specific object.

  4. Rebuild and source isaac_ros_dope:

    cd /workspaces/isaac_ros-dev
    colcon build --packages-up-to isaac_ros_dope && . install/setup.bash
    
  5. Start isaac_ros_dope using the launch file:

    ros2 launch /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/launch/isaac_ros_dope_tensor_rt.launch.py
    
  6. Set up the image_publisher package if it is not already installed:

    cd /workspaces/isaac_ros-dev/src 
    git clone --single-branch -b ros2 https://github.com/ros-perception/image_pipeline.git
    cd /workspaces/isaac_ros-dev
    colcon build --packages-up-to image_publisher && . install/setup.bash
    
  7. Start publishing images to the /image topic (the topic the encoder subscribes to) using image_publisher:

    ros2 run image_publisher image_publisher_node /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/resources/0002_rgb.jpg --ros-args -r image_raw:=image
    
  8. Open another terminal window. You should be able to get the poses of the objects in the images through ros2 topic echo:

    source /workspaces/isaac_ros-dev/install/setup.bash
    ros2 topic echo /poses
    

    We are echoing the topic /poses because we remapped the original topic name /dope/pose_array to /poses in our launch file.

  9. Launch rviz2. Click the Add button, select "By topic", and choose PoseArray under /poses. Update the "Displays" parameters to see the axes of the object displayed.

Note: For best results, crop/resize input images to the same dimensions your DNN model is expecting.
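
For reference, /poses carries geometry_msgs/msg/PoseArray messages, so the echoed output has roughly the following shape, with frame_id set to whatever the decoder's frame_id parameter is; the numeric values here are purely illustrative:

    header:
      stamp:
        sec: 0
        nanosec: 0
      frame_id: <frame_id parameter>
    poses:
    - position:
        x: 0.1
        y: -0.02
        z: 0.65
      orientation:
        x: 0.0
        y: 0.0
        z: 0.0
        w: 1.0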

Inference on DOPE using Triton

  1. Select a DOPE model by visiting the DOPE model collection available on the official DOPE GitHub repository here. For example, download Ketchup.pth into /tmp/models/Ketchup.

  2. Set up the model repository.

    Create a models repository with version 1:

    mkdir -p /tmp/models/Ketchup/1
    

    Create a configuration file for this model at path /tmp/models/Ketchup/config.pbtxt. Note that name has to be the same as the name of the model repository directory.

    name: "Ketchup"
    platform: "onnxruntime_onnx"
    max_batch_size: 0
    input [
      {
        name: "INPUT__0"
        data_type: TYPE_FP32
        dims: [ 1, 3, 480, 640 ]
      }
    ]
    output [
      {
        name: "OUTPUT__0"
        data_type: TYPE_FP32
        dims: [ 1, 25, 60, 80 ]
      }
    ]
    version_policy: {
      specific {
        versions: [ 1 ]
      }
    }
    
    • To run ONNX models with Triton, export the model into an ONNX file using the script provided under /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/scripts/dope_converter.py:

      python3 /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/scripts/dope_converter.py --format onnx --input /tmp/models/Ketchup/Ketchup.pth --output /tmp/models/Ketchup/1/model.onnx --input_name INPUT__0 --output_name OUTPUT__0
      

      Note: The DOPE decoder currently works with the output of a DOPE network that has a fixed input size of 640 x 480, which are the default dimensions set in the script. In order to use input images of other sizes, make sure to crop/resize using ROS2 nodes from Isaac ROS Image Pipeline or similar packages. The model name has to be model.onnx.

    • To run a TensorRT engine plan file with Triton, export the ONNX model into a TensorRT engine plan file using the built-in TensorRT converter trtexec:

      /usr/src/tensorrt/bin/trtexec --onnx=/tmp/models/Ketchup/1/model.onnx --saveEngine=/tmp/models/Ketchup/1/model.plan
      

      Modify the following value in /tmp/models/Ketchup/config.pbtxt:

      platform: "tensorrt_plan"
      
    • To run a PyTorch model with Triton (PyTorch model inference is supported only on the x86_64 platform), the model needs to be saved using torch.jit.save(). The downloaded DOPE model is saved with torch.save(), so export the DOPE model using the script provided under /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/scripts/dope_converter.py:

      python3 /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/scripts/dope_converter.py --format pytorch --input /tmp/models/Ketchup/Ketchup.pth --output /tmp/models/Ketchup/1/model.pt
      

      Modify the following value in /tmp/models/Ketchup/config.pbtxt:

      platform: "pytorch_libtorch"
      
  3. Modify the following values in the launch file /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/launch/isaac_ros_dope_triton.launch.py:

    'model_name': 'Ketchup'
    'model_repository_paths': ['/tmp/models']
    'input_binding_names': ['INPUT__0']
    'output_binding_names': ['OUTPUT__0']
    'object_name': 'Ketchup'
    

    Note: object_name should correspond to one of the objects listed in the DOPE configuration file, and the specified model should be a DOPE model that is trained for that specific object.

  4. Rebuild and source isaac_ros_dope:

    cd /workspaces/isaac_ros-dev
    colcon build --packages-up-to isaac_ros_dope && . install/setup.bash
    
  5. Start isaac_ros_dope using the launch file:

    ros2 launch /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/launch/isaac_ros_dope_triton.launch.py
    
  6. Set up the image_publisher package if it is not already installed:

    cd /workspaces/isaac_ros-dev/src
    git clone --single-branch -b ros2 https://github.com/ros-perception/image_pipeline.git
    cd /workspaces/isaac_ros-dev
    colcon build --packages-up-to image_publisher && . install/setup.bash
    
  7. Start publishing images to the /image topic (the topic the encoder subscribes to) using image_publisher:

    ros2 run image_publisher image_publisher_node /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/resources/0002_rgb.jpg --ros-args -r image_raw:=image
    
  8. Open another terminal window. You should be able to get the poses of the objects in the images through ros2 topic echo:

    source /workspaces/isaac_ros-dev/install/setup.bash
    ros2 topic echo /poses
    

    We are echoing the topic /poses because we remapped the original topic name /dope/pose_array to /poses in our launch file.

  9. Launch rviz2. Click the Add button, select "By topic", and choose PoseArray under /poses. Update the "Displays" parameters to see the axes of the object displayed.

Note: For best results, crop/resize input images to the same dimensions your DNN model is expecting.
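
If you are unsure which binding names your exported ONNX file actually exposes (they must match input_binding_names, output_binding_names, and the names in config.pbtxt), a quick check with the onnx Python package, assuming it is available in the container, looks like this:

    # Quick sketch: print the graph input/output names of an exported ONNX model
    import onnx

    model = onnx.load('/tmp/models/Ketchup/1/model.onnx')
    print('inputs: ', [i.name for i in model.graph.input])
    print('outputs:', [o.name for o in model.graph.output])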

Inference on CenterPose using Triton

  1. Select a CenterPose model by visiting the CenterPose model collection available on the official CenterPose GitHub repository here. For example, download shoe_resnet_140.pth into /tmp/models/centerpose_shoe.

Note: The models in the root directory of the model collection listed above will NOT WORK with our inference nodes, because they have custom layers that are not supported by either TensorRT or Triton. Make sure to use the PyTorch weights that have the string resnet in their file names.

  2. Set up the model repository.

    Create a models repository with version 1:

    mkdir -p /tmp/models/centerpose_shoe/1
    
  3. Create a configuration file for this model at path /tmp/models/centerpose_shoe/config.pbtxt. Note that name has to be the same as the model repository name. Take a look at the example at isaac_ros_centerpose/test/models/centerpose_shoe/config.pbtxt and copy that file to /tmp/models/centerpose_shoe/config.pbtxt.

    cp /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_centerpose/test/models/centerpose_shoe/config.pbtxt /tmp/models/centerpose_shoe/config.pbtxt
    
  4. To run the TensorRT engine plan, convert the PyTorch model to ONNX first. Export the model into an ONNX file using the script provided under /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_centerpose/scripts/centerpose_pytorch2onnx.py:

    python3 /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_centerpose/scripts/centerpose_pytorch2onnx.py --input /tmp/models/centerpose_shoe/shoe_resnet_140.pth --output /tmp/models/centerpose_shoe/1/model.onnx
    
  5. To get a TensorRT engine plan file for Triton, export the ONNX model into a TensorRT engine plan file using the built-in TensorRT converter trtexec:

    /usr/src/tensorrt/bin/trtexec --onnx=/tmp/models/centerpose_shoe/1/model.onnx --saveEngine=/tmp/models/centerpose_shoe/1/model.plan
    
  6. Modify the isaac_ros_centerpose launch file located at /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_centerpose/launch/isaac_ros_centerpose.launch.py. You will need to update the following lines:

    'model_name': 'centerpose_shoe',
    'model_repository_paths': ['/tmp/models'],
    

    Rebuild and source isaac_ros_centerpose:

    cd /workspaces/isaac_ros-dev
    colcon build --packages-up-to isaac_ros_centerpose && . install/setup.bash
    

    Start isaac_ros_centerpose using the launch file:

    ros2 launch isaac_ros_centerpose isaac_ros_centerpose.launch.py
    
  7. Set up the image_publisher package if it is not already installed:

    cd /workspaces/isaac_ros-dev/src
    git clone --single-branch -b ros2 https://github.com/ros-perception/image_pipeline.git
    cd /workspaces/isaac_ros-dev
    colcon build --packages-up-to image_publisher && . install/setup.bash
    
  8. Start publishing images to the /image topic (the topic the encoder subscribes to) using image_publisher:

    ros2 run image_publisher image_publisher_node /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/resources/shoe.jpg --ros-args -r image_raw:=image
    
  9. Open another terminal window and attach to the same container. You should be able to get the poses of the objects in the images through ros2 topic echo:

    source /workspaces/isaac_ros-dev/install/setup.bash
    ros2 topic echo /object_poses
    
  10. Launch rviz2. Click the Add button, select "By topic", and choose MarkerArray under /object_poses. Set the fixed frame to centerpose. You should see a cuboid marker representing the detected object's pose.
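
If no markers show up, it can help to confirm that the pipeline is running and publishing before debugging further. These are standard ros2 CLI commands, and the topic names are the ones used in this walkthrough:

    source /workspaces/isaac_ros-dev/install/setup.bash
    ros2 node list                 # the encoder, inference, and decoder nodes should be listed
    ros2 topic hz /image           # confirms image_publisher is publishing
    ros2 topic hz /object_poses    # confirms the decoder is producing markers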

Troubleshooting

Nodes crashed on initial launch reporting shared libraries have a file format not recognized

Many dependent shared library binary files are stored in git-lfs. These files need to be fetched in order for Isaac ROS nodes to function correctly.

Symptoms

/usr/bin/ld:/workspaces/isaac_ros-dev/ros_ws/src/isaac_ros_common/isaac_ros_nvengine/gxf/lib/gxf_jetpack46/core/libgxf_core.so: file format not recognized; treating as linker script
/usr/bin/ld:/workspaces/isaac_ros-dev/ros_ws/src/isaac_ros_common/isaac_ros_nvengine/gxf/lib/gxf_jetpack46/core/libgxf_core.so:1: syntax error
collect2: error: ld returned 1 exit status
make[2]: *** [libgxe_node.so] Error 1
make[1]: *** [CMakeFiles/gxe_node.dir/all] Error 2
make: *** [all] Error 2

Solution

Run git lfs pull in each Isaac ROS repository you have checked out, especially isaac_ros_common, to ensure all of the large binary files have been downloaded.
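
For example, to fetch the LFS objects for all three repositories cloned in the setup section:

    cd /workspaces/isaac_ros-dev/src
    for repo in isaac_ros_common isaac_ros_dnn_inference isaac_ros_pose_estimation; do
        (cd "$repo" && git lfs pull)
    done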

Updates

  • 2021-10-20: Initial release