The Hailo Model Zoo includes pre-trained models and a full building and evaluation environment

Overview

Hailo Model Zoo

The Hailo Model Zoo provides pre-trained models for high-performance deep learning applications. Using the Hailo Model Zoo, you can measure each model's full-precision accuracy, its quantized accuracy in the Hailo Emulator, and its accuracy on the Hailo-8 device. Finally, you can generate the Hailo Executable Format (HEF) binary file to speed up development and build high-quality applications accelerated by Hailo-8. The models are optimized for high accuracy on public datasets and can be used to benchmark the Hailo quantization scheme.

Usage

Quick Start Guide

  • Install the Hailo Dataflow Compiler and enter its virtualenv. If you are not a Hailo customer, please contact hailo.ai
  • Clone the Hailo Model Zoo
git clone https://github.com/hailo-ai/hailo_model_zoo.git
  • Run the setup script
cd hailo_model_zoo; pip install -e .
  • Run the Hailo Model Zoo. For example, to parse the YOLOv3 model:
python hailo_model_zoo/main.py parse yolov3

Getting Started

For further functionality, please see the GETTING_STARTED page (full installation instructions and usage examples). The Hailo Model Zoo uses the Hailo Dataflow Compiler for parsing, quantization, emulation, and compilation of the deep learning models. Full functionality includes the stages listed below; a short command sketch follows the list:

  • Parse: translation of the input model into Hailo's internal representation.
  • Profiler: generate a profiler report of the model. The report contains information about your model and its expected performance on the Hailo hardware.
  • Quantize: numeric translation of the input model into a compressed integer representation.
  • Compile: run the Hailo compiler to generate the Hailo Executable Format file (HEF), which can be executed on the Hailo hardware.
  • Evaluate: infer the model using the Hailo Emulator or the Hailo hardware and report the model's accuracy.
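
For example, assuming the subcommand names mirror the stages above (exact command names and flags may differ between Model Zoo and Dataflow Compiler releases, so treat this as a sketch rather than a reference), a full flow for one of the pre-trained models could look like:

python hailo_model_zoo/main.py parse yolov3
python hailo_model_zoo/main.py profile yolov3
python hailo_model_zoo/main.py quantize yolov3
python hailo_model_zoo/main.py compile yolov3
python hailo_model_zoo/main.py eval yolov3

Newer releases also install a hailomz console entry point that wraps the same commands (it appears in several of the issues below).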

For further information about the Hailo Dataflow Compiler please contact hailo.ai.

Models

The full list of pre-trained models can be found here.

License

The Hailo Model Zoo is released under the MIT license. Please see the LICENSE file for more information.

Contact

Please visit hailo.ai for support / requests / issues.

Comments
  • yolov7.hef vs yolov5m_wo_spp_60p.hef

    Hi,

    As far as I know, yolov7 is faster and more accurate than yolov5.

    In our tests:

    gst-launch-1.0 rtspsrc location=rtsp://xxxxx/ISAPI/Streaming/Channels/101 name=src_0 ! decodebin ! \
        videoscale ! video/x-raw, pixel-aspect-ratio=1/1 ! videoconvert ! \
        queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
        hailonet hef-path=/local/shared_with_docker/yolov7.hef is-active=true batch-size=1 ! \
        queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
        hailofilter function-name=yolov5 so-path=/local/workspace/tappas/apps/gstreamer/libs/post_processes//libyolo_post.so config-path=/local/workspace/tappas/apps/gstreamer/general/detection/resources/configs/yolov5.json qos=false ! \
        queue leaky=no max-size-buffers=30 max-size-bytes=0 max-size-time=0 ! \
        hailooverlay ! videoconvert ! \
        fpsdisplaysink video-sink=xvimagesink name=hailo_display sync=false text-overlay=false -v | grep -e hailo_display -e hailodevicestats

    yolov7.hef is almost 7 times slower than the yolov5m_wo_spp_60p.hef version.

    opened by MyraBaba 19
  • Error: Model uses too many reources: 136 Layer-Controllers

    onnx: 1.11.0
    torch: 1.12.1
    torchvision: 0.13.1
    

    Hi, I have fine-tuned the yolov5m_wo_spp.pt model in the yolov5 v6.2 framework. Then I exported the model to ONNX (with opset 11), also in the yolov5 v6.2 framework. When I compile this ONNX model with hailomz compile, the model optimization is done correctly, but then it throws the following error:

    1000/1000 [==============================] - 477s 477ms/step - total_distill_loss: 0.1018 - _distill_loss_yolov5m_wo_spp/conv93: 0.0353 - _distill_loss_yolov5m_wo_spp/conv84: 0.0367 - _distill_loss_yolov5m_wo_spp/conv74: 0.0297
    [info] Fine Tune is done (completion time is 00:38:18.70)
    Calibration: 64entries [00:48,  1.32entries/s]
    
    [info] Model Optimization is done
    [info] Loading model script on yolov5m_wo_spp
    [info] Loading network parameters
    [info] Starting Hailo allocation and compilation flow
    [info] Using Single-context flow
    [info] Resources optimization guidelines: Strategy -> GREEDY Objective -> REQUIRED_FPS
    [info] Resources optimization params: max_control_utilization=120%, max_compute_utilization=100%, max_memory_utilization (weights)=100%, max_input_aligner_utilization=100%, max_apu_utilization=100%
    [info] Running Auto-Merger
    [info] Auto-Merger is done
    [info] Adding a portal between conv27( index=19 604, name=conv27, ) and concat7, type: L4
    [info] Starting context partition
    [info] Context partition is done (0s 2ms)
    [info] Adding format conversion layer 'auto_reshape_from_input_layer1_to_merged_layer_normalization1_space_to_depth1' after input_layer1
    [info] Adding format conversion layer 'auto_reshape_from_conv74_to_output_layer1' after conv74
    [info] Adding format conversion layer 'auto_reshape_from_conv84_to_output_layer2' after conv84
    [info] Adding format conversion layer 'auto_reshape_from_conv93_to_output_layer3' after conv93
    Model uses too many reources: 136 Layer-Controllers
    [critical] Model uses too many reources: 136 Layer-Controllers
    [error] Failed to produce compiled graph
    [error] Tried to deserialize allocator result on failure, but got another exception: No output graph, deserialization failed.
    Traceback (most recent call last):
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/bin/hailomz", line 33, in <module>
        sys.exit(load_entry_point('hailo-model-zoo', 'console_scripts', 'hailomz')())
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/sources/model_zoo/hailo_model_zoo/main.py", line 181, in main
        run(args)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/sources/model_zoo/hailo_model_zoo/main.py", line 170, in run
        return handlers[args.command](args)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/sources/model_zoo/hailo_model_zoo/main_driver.py", line 132, in compile
        compile_model(runner, network_info, args.results_dir, model_script_path=args.model_script_path)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/sources/model_zoo/hailo_model_zoo/core/main_utils.py", line 298, in compile_model
        hef = runner.compile()
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_common/states/states.py", line 16, in wrapped_func
        return func(self, *args, **kwargs)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_client/runner/client_runner.py", line 661, in compile
        return self._get_hef_hw_representation()
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_common/states/states.py", line 16, in wrapped_func
        return func(self, *args, **kwargs)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_client/runner/client_runner.py", line 707, in _get_hef_hw_representation
        serialized_hef = self._sdk_backend.get_hef_hw_representation(fps, allocator_script, mapping_timeout)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 1156, in get_hef_hw_representation
        hef, mapped_graph_file = self._get_hef_hw_representation(fps, allocator_script, mapping_timeout)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 1151, in _get_hef_hw_representation
        hef, mapped_graph_file, auto_alls = self.hef_full_build(fps, mapping_timeout, model_params, allocator_script)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_client/sdk_backend/sdk_backend.py", line 1128, in hef_full_build
        auto_alls, self._mapped_graph, self._integrated_graph = allocator.create_mapping_and_full_build_hef(
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_client/allocator/hailo_tools_runner.py", line 568, in create_mapping_and_full_build_hef
        self.call_builder(network_graph_path, output_path, compilation_output_proto=compilation_output_proto,
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_client/allocator/hailo_tools_runner.py", line 527, in call_builder
        self.run_builder(network_graph_path, output_path, **kwargs)
      File "/home/user/Documents/programs/hailo/2022_09/hailo_sw_suite/hailo_venv/lib/python3.8/site-packages/hailo_sdk_client/allocator/hailo_tools_runner.py", line 394, in run_builder
        raise e.internal_exception("Hailo tools builder failed:", hailo_tools_error=e.hailo_tools_error) from None
    hailo_sdk_client.sdk_backend.sdk_backend_exceptions.BackendAllocatorException: Hailo tools builder failed: Model uses too many reources: 136 Layer-Controllers
    

    If I train the model in the hailo yolov5 retraining docker container, the compilation works fine. Any idea what this error means?

    opened by frenky-strasak 3
  • Is there a specific implementation limit? (multitasking models, cascaded models, or large models)

    Hi, I have not applied for a Developer Zone account yet (will it be difficult to apply for full access?). I wonder whether the Hailo-8 chip can run several large models at the same time. Or can you tell me what the implementation limits are? E.g.:

    1. Operator compatibility (what is the highest ONNX opset version supported, and is coverage better than OpenVINO in general?)
    2. What is the memory size of the chip for storing/computing tensors? Can it run super-resolution models with higher output resolutions? Can the chip run very wide fully connected layers?
    3. Hailo-8 can multitask; if I run keypoint detection, ReID and depth estimation at the same time with frame skipping, is the chip's compute or memory capacity overloaded? How can I spot where the overload is, or estimate it?
    4. Has your team considered having multiple Hailo-8 chips "chained" to run some difficult tasks? This should be super cool.
    opened by BICHENG 2
  • Dataflow Compiler v3.17.0 not available in Developer Zone

    Hi, in the latest changelog update you mentioned that the repository was updated to use the Dataflow Compiler v3.17.0. However, only version 3.16.0 is available in the Developer Zone. How can we get the latest Dataflow Compiler v3.17.0? Can you please add it to the Developer Zone?

    opened by kmaerkl 2
  • Old version of yolov5 in retraining docker container.

    Hi, for what reason does the retraining Docker container contain the old yolov5 v2.0? Is it possible to use a newer version of yolov5 such as v6.2? Are these newer versions of yolov5 compatible with the Dataflow Compiler for optimizing and compiling models to Hailo HEF files? Thanks!

    opened by frenky-strasak 1
  • YoloV7-tiny with on-chip NMS

    Dear Hailo, the output structure of Yolov5 and Yolov7 is the same IIRC, so it should be possible to run the NMS on-chip. I wanted to test this, so I took the yolov5xs_wo_spp_nms model from this zoo as a reference. When downloading, I get this NMS config JSON:

    {
      "nms_scores_th": 0.01,
      "nms_iou_th": 1.0,
      "image_dims": [512, 512],
      "max_proposals_per_class": 80,
      "background_removal": false,
      "input_division_factor": 8,
      "classes": 80,
      "bbox_decoders": [
          {
              "name": "bbox_decoder53",
              "w": [
                  10,
                  16,
                  33
              ],
              "h": [
                  13,
                  30,
                  23
              ],
              "stride": 8,
              "encoded_layer": "conv53"
          },
          {
              "name": "bbox_decoder61",
              "w": [
                  30,
                  62,
                  59
              ],
              "h": [
                  61,
                  45,
                  119
              ],
              "stride": 16,
              "encoded_layer": "conv61"
          },
          {
              "name": "bbox_decoder69",
              "w": [
                  116,
                  156,
                  373 
              ],
              "h": [
                  90,
                  198,
                  326
              ],
              "stride": 32,
              "encoded_layer": "conv69"
          }
      ]
    }
    

    I cannot find a description of these parameters anywhere in the documentation. While I understand some parameters, like the names and anchors (w, h, stride, etc.), I don't understand these ones:

      "background_removal": false,
      "input_division_factor": 8,
    

    Can you help me with these parameters? And did you ever test yolov7 with on-chip decode/NMS? Further, in the yolov5xs alls file, the following settings are made:

    buffers(proposal_generator0, proposal_generator0_concat, 2, FULL_ROW)
    buffers(proposal_generator1, proposal_generator0_concat, 2, FULL_ROW)
    buffers(proposal_generator2, proposal_generator0_concat, 2, FULL_ROW)
    buffers(proposal_generator0_concat, nms1, 2)
    

    which I don't really understand. Could you explain the usage of those as well? Thanks!

    Cheers

    opened by dnns92 1
  • CVE-2007-4559 Patch

    Patching CVE-2007-4559

    Hi, we are security researchers from the Advanced Research Center at Trellix. We have begun a campaign to patch a widespread bug named CVE-2007-4559. CVE-2007-4559 is a 15-year-old bug in the Python tarfile package. By using extract() or extractall() on a tarfile object without sanitizing input, a maliciously crafted .tar file could perform a directory path traversal attack. We found at least one unsanitized extractall() in your codebase and are providing a patch for you via pull request. The patch essentially checks whether all tarfile members will be extracted safely and throws an exception otherwise. We encourage you to use this patch or your own solution to secure against CVE-2007-4559. Further technical information about the vulnerability can be found in this blog.

    If you have further questions, you may contact us through this project's lead researcher, Kasimir Schulz.
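
    For context, a minimal sketch of the kind of check described above (an illustrative example under assumed file names, not the actual pull request):

    import os
    import tarfile

    def is_within_directory(directory, target):
        # Resolve both paths and check that the target stays inside the extraction directory.
        abs_directory = os.path.abspath(directory)
        abs_target = os.path.abspath(target)
        return os.path.commonpath([abs_directory, abs_target]) == abs_directory

    def safe_extract(tar, path="."):
        # Reject any member whose resolved path escapes the extraction directory.
        for member in tar.getmembers():
            member_path = os.path.join(path, member.name)
            if not is_within_directory(path, member_path):
                raise Exception("Attempted path traversal in tar file")
        tar.extractall(path)

    # Hypothetical usage: extract a downloaded archive into a known directory.
    with tarfile.open("pretrained_model.tar.gz") as tar:
        safe_extract(tar, path="pretrained_model")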

    opened by TrellixVulnTeam 0
  • Rectangular HEF model does not work

    Hi, in the yolov5 retraining container (v.2) I have exported yolov5m_wo_spp.pt to ONNX with a rectangular shape: python models/export.py --weights model.pt --img 352 640 --batch 1 (--img H W)

    Then I compiled this ONNX model with hailomz compile --ckpt model.onnx --calib-path calib_dataset --yaml yolov5m_wo_spp.yaml, where I changed yolov5m_wo_spp.yaml like this:

    • I added the preprocessing part with the corresponding shape:
    preprocessing:
      network_type: detection
      input_shape:
      - 352
      - 640
      - 3
    
    • and I changed the info section (the output shapes were found with the Netron tool)
    info:
      input_shape: 352x640x3
      output_shape: 11x20x18, 22x40x18, 44x80x18
    

    The complete file is here: yolov5m_wo_spp.zip

    The compilation looks fine. When I deploy the HEF file in my pipeline, I see wrong bounding boxes, which are doubled and shifted, but they move in the same way as the object being detected.

    What am I missing? Should I also modify the yolov5m_wo_spp.alls file? Could you point me in the right direction, please? Thank you!

    opened by frenky-strasak 3
  • Fix yolo postprocessing when batches are used

    Hi,

    Currently the yolo post-processing does not work when batches are provided. I fixed the bug in this branch: https://github.com/DavidBecht/hailo_model_zoo

    BR

    opened by DavidBecht 0
  • Illegal instruction (core dumped)

    Hi,

    hailomz gives the error below all the time (in Docker).

    All other commands and TAPPAS are working without any problem.

    hailomz -h
    Illegal instruction (core dumped)

    opened by MyraBaba 1
  • When I run the HEF, there is something wrong. How do I compile a model to HEF on the ARM architecture?

    [HailoRT] [error] CHECK failed - Failed opening non-compatible HEF with the following unsupported extensions: KO Run ASAP (KO_RUN_ASAP)
    [HailoRT] [error] CHECK_SUCCESS failed with status=26
    [HailoRT] [error] Failed parsing HEF file
    [HailoRT] [error] Failed creating HEF
    [HailoRT] [error] CHECK_EXPECTED failed with status=26

    opened by riverfrank 2
  • Post-processing yolov5s personface output

    Hi,

    As a result of inference with the yolov5s_personface model, I get three tensors of dimensions [1, 40, 40, 21], [1, 20, 20, 21], [1, 80, 80, 21]; what is the correct/fastest procedure to decode them in order to get a list of detections (such as [x_min, y_min, x_max, y_max, score, class])?

    Thanks
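
    For reference, a minimal NumPy decode sketch, assuming a 640x640 input, the standard YOLOv5 anchors and strides, a channel layout of 3 anchors x (x, y, w, h, objectness, classes), and raw (pre-sigmoid) outputs; all of these are assumptions that should be checked against the model's actual configuration (for example its NMS/anchor JSON), and the threshold value is illustrative:

    import numpy as np

    # Assumed standard YOLOv5 anchors (w, h) per stride; verify against the model's config.
    ANCHORS = {
        8:  [(10, 13), (16, 30), (33, 23)],
        16: [(30, 61), (62, 45), (59, 119)],
        32: [(116, 90), (156, 198), (373, 326)],
    }
    NUM_CLASSES = 2  # e.g. person, face

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def decode_level(out, stride, score_th=0.3):
        """Decode one [1, H, W, 3*(5+NC)] output into [x_min, y_min, x_max, y_max, score, class] rows."""
        _, h, w, _ = out.shape
        p = sigmoid(out.reshape(h, w, 3, 5 + NUM_CLASSES))
        grid_y, grid_x = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
        rows = []
        for a, (aw, ah) in enumerate(ANCHORS[stride]):
            q = p[:, :, a, :]
            cx = (q[..., 0] * 2.0 - 0.5 + grid_x) * stride
            cy = (q[..., 1] * 2.0 - 0.5 + grid_y) * stride
            bw = (q[..., 2] * 2.0) ** 2 * aw
            bh = (q[..., 3] * 2.0) ** 2 * ah
            scores = q[..., 4:5] * q[..., 5:]  # objectness * class probabilities
            cls = scores.argmax(-1)
            conf = scores.max(-1)
            keep = conf > score_th
            rows.append(np.stack([cx - bw / 2, cy - bh / 2, cx + bw / 2, cy + bh / 2,
                                  conf, cls.astype(np.float32)], axis=-1)[keep])
        return np.concatenate(rows, axis=0)

    # Hypothetical usage: the largest feature map uses the smallest stride.
    # detections = np.concatenate([decode_level(o, s) for o, s in
    #                              zip([out_80x80, out_40x40, out_20x20], [8, 16, 32])])
    # ...followed by per-class NMS on `detections`.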

    opened by aux82716 6