Semi-automated OpenVINO benchmark_app with variable parameters

Overview

Semi-automated OpenVINO benchmark_app with variable parameters

Description

This program lets users specify variable parameters for the OpenVINO benchmark_app and automatically runs the benchmark with every combination of the given parameters.
The program generates a report file in CSV format with a date-and-time-coded file name ('result_DDmm-HHMMSS.csv'). You can analyze or visualize the benchmark results with MS Excel or another spreadsheet application.

The program is just a front-end for the official OpenVINO benchmark_app.
It uses the benchmark_app as the benchmark core logic, so the performance results measured by this program are consistent with those measured by the benchmark_app directly.
The command line parameters and their meanings are also compatible with the benchmark_app.
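
Since the program is only a wrapper, each measurement boils down to a single benchmark_app invocation. The sketch below illustrates that step under stated assumptions: the subprocess call, the 'Throughput:' output parsing, and the function name are illustrative, not the program's actual code.

import re
import subprocess
from typing import Optional

def run_one_benchmark(params: dict) -> Optional[float]:
    # Build a benchmark_app command line from a single parameter combination,
    # e.g. {'-m': 'resnet.xml', '-nthreads': 2, '-nstreams': 1, '-d': 'CPU'}.
    cmd = ['benchmark_app']
    for flag, value in params.items():
        cmd += [flag, str(value)]
    result = subprocess.run(cmd, capture_output=True, text=True)
    # Assumption: benchmark_app prints a line such as 'Throughput: 123.45 FPS';
    # the exact wording may vary between OpenVINO versions.
    match = re.search(r'Throughput:\s*([0-9.]+)', result.stdout)
    return float(match.group(1)) if match else None

fps = run_one_benchmark({'-m': 'resnet.xml', '-niter': 100,
                         '-nthreads': 2, '-nstreams': 1, '-d': 'CPU'})
print(fps)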

Requirements

  • OpenVINO 2022.1 or higher
    This program is not compatible with OpenVINO 2021.

How to run

  1. Install required Python modules.
python -m pip install --upgrade pip setuptools
python -m pip install -r requirements.txt
  2. Run the auto benchmark (command line example)
python auto_benchmark_app.py -m resnet.xml -niter 100 -nthreads %1,2,4,8 -nstreams %1,2 -d %CPU,GPU -cdir cache

With this command line, -nthreads has 4 options (1,2,4,8), -nstreams has 2 options (1,2), and -d has 2 options (CPU,GPU). As a result, 16 (4x2x2) benchmark runs are performed in total.
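
For reference, the same expansion can be reproduced with itertools.product; the variable names below are illustrative only.

from itertools import product

nthreads = [1, 2, 4, 8]    # from -nthreads %1,2,4,8
nstreams = [1, 2]          # from -nstreams %1,2
devices = ['CPU', 'GPU']   # from -d %CPU,GPU

combinations = list(product(nthreads, nstreams, devices))
print(len(combinations))   # 16 benchmark runs in total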

Parameter options

You can specify variable parameters by adding one of the following prefixes to a parameter value (a parsing sketch follows the table).

Prefix  Type       Description / Example
$       range      $1,8,2 == range(1,8,2) => [1,3,5,7]
                   Any range()-compatible expression is possible, e.g. $1,5 or $5,1,-1
%       list       %CPU,GPU => ['CPU', 'GPU'], %1,2,4,8 => [1,2,4,8]
@       ir-models  @models == IR models in the './models' dir => ['resnet.xml', 'googlenet.xml', ...]
                   This option recursively searches for '.xml' files in the specified directory.
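
A minimal sketch of how such prefixes could be expanded is shown below; the function name and details are illustrative assumptions, not the program's actual implementation.

import glob
import os

def expand_parameter(value: str):
    # '$' -> range() expansion, '%' -> literal list, '@' -> recursive '.xml' search.
    # List values are kept as strings because they are passed back to the command line.
    if value.startswith('$'):
        args = [int(v) for v in value[1:].split(',')]
        return list(range(*args))
    if value.startswith('%'):
        return value[1:].split(',')
    if value.startswith('@'):
        return glob.glob(os.path.join(value[1:], '**', '*.xml'), recursive=True)
    return [value]   # a plain value is a single-element sweep

print(expand_parameter('$1,8,2'))    # [1, 3, 5, 7]
print(expand_parameter('%CPU,GPU'))  # ['CPU', 'GPU']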

Examples of command line

python auto_benchmark_app.py -cdir cache -m resnet.xml -nthreads $1,5 -nstreams %1,2,4,8 -d %CPU,GPU

  • Runs the benchmark with -nthreads=range(1,5)=[1,2,3,4], -nstreams=[1,2,4,8], and -d=['CPU','GPU']: 32 (4x4x2) combinations in total.

python auto_benchmark_app.py -m @models -niter 100 -nthreads %1,2,4,8 -nstreams %1,2 -d CPU -cdir cache

  • Runs the benchmark with -m=[all '.xml' files in the './models' directory], -nthreads=[1,2,4,8], -nstreams=[1,2], and -d=CPU (fixed): 8 combinations per model.

Example of a result file

The last 4 items in each line are the performance data, in the order 'count', 'duration (ms)', 'latency AVG (ms)', and 'throughput (fps)'. Lines beginning with '#' record the test environment.

#CPU: Intel(R) Core(TM) i7-10700K CPU @ 3.80GHz
#MEM: 33947893760
#OS: Windows-10-10.0.22000-SP0
#OpenVINO: 2022.1.0-7019-cdb9bec7210-releases/2022/1
#Last 4 items in the lines : test count, duration (ms), latency AVG (ms), and throughput (fps)
benchmark_app.py,-m,models\FP16\googlenet-v1.xml,-niter,100,-nthreads,1,-nstreams,1,-d,CPU,-cdir,cache,100,772.55,30.20,129.44
benchmark_app.py,-m,models\FP16\resnet-50-tf.xml,-niter,100,-nthreads,1,-nstreams,1,-d,CPU,-cdir,cache,100,1917.62,75.06,52.15
benchmark_app.py,-m,models\FP16\squeezenet1.1.xml,-niter,100,-nthreads,1,-nstreams,1,-d,CPU,-cdir,cache,100,195.28,7.80,512.10
benchmark_app.py,-m,models\FP16-INT8\googlenet-v1.xml,-niter,100,-nthreads,1,-nstreams,1,-d,CPU,-cdir,cache,104,337.09,24.75,308.53
benchmark_app.py,-m,models\FP16-INT8\resnet-50-tf.xml,-niter,100,-nthreads,1,-nstreams,1,-d,CPU,-cdir,cache,100,1000.39,38.85,99.96
benchmark_app.py,-m,models\FP16-INT8\squeezenet1.1.xml,-niter,100,-nthreads,1,-nstreams,1,-d,CPU,-cdir,cache,104,64.22,4.69,1619.38
benchmark_app.py,-m,models\FP32\googlenet-v1.xml,-niter,100,-nthreads,1,-nstreams,1,-d,CPU,-cdir,cache,100,778.90,30.64,128.39
benchmark_app.py,-m,models\FP32\resnet-50-tf.xml,-niter,100,-nthreads,1,-nstreams,1,-d,CPU,-cdir,cache,100,1949.73,76.91,51.29
benchmark_app.py,-m,models\FP32\squeezenet1.1.xml,-niter,100,-nthreads,1,-nstreams,1,-d,CPU,-cdir,cache,100,182.59,7.58,547.69
benchmark_app.py,-m,models\FP32-INT8\googlenet-v1.xml,-niter,100,-nthreads,1,-nstreams,1,-d,CPU,-cdir,cache,104,331.73,24.90,313.51
benchmark_app.py,-m,models\FP32-INT8\resnet-50-tf.xml,-niter,100,-nthreads,1,-nstreams,1,-d,CPU,-cdir,cache,100,968.38,38.45,103.27
benchmark_app.py,-m,models\FP32-INT8\squeezenet1.1.xml,-niter,100,-nthreads,1,-nstreams,1,-d,CPU,-cdir,cache,104,67.70,5.04,1536.23
benchmark_app.py,-m,models\FP16\googlenet-v1.xml,-niter,100,-nthreads,2,-nstreams,1,-d,CPU,-cdir,cache,100,1536.14,15.30,65.10
benchmark_app.py,-m,models\FP16\resnet-50-tf.xml,-niter,100,-nthreads,2,-nstreams,1,-d,CPU,-cdir,cache,100,3655.59,36.50,27.36
benchmark_app.py,-m,models\FP16\squeezenet1.1.xml,-niter,100,-nthreads,2,-nstreams,1,-d,CPU,-cdir,cache,100,366.73,3.68,272.68
benchmark_app.py,-m,models\FP16-INT8\googlenet-v1.xml,-niter,100,-nthreads,2,-nstreams,1,-d,CPU,-cdir,cache,100,872.87,8.66,114.56
benchmark_app.py,-m,models\FP16-INT8\resnet-50-tf.xml,-niter,100,-nthreads,2,-nstreams,1,-d,CPU,-cdir,cache,100,1963.67,19.54,50.93
benchmark_app.py,-m,models\FP16-INT8\squeezenet1.1.xml,-niter,100,-nthreads,2,-nstreams,1,-d,CPU,-cdir,cache,100,242.28,2.34,412.74
benchmark_app.py,-m,models\FP32\googlenet-v1.xml,-niter,100,-nthreads,2,-nstreams,1,-d,CPU,-cdir,cache,100,1506.14,14.96,66.39
benchmark_app.py,-m,models\FP32\resnet-50-tf.xml,-niter,100,-nthreads,2,-nstreams,1,-d,CPU,-cdir,cache,100,3593.88,35.88,27.83
benchmark_app.py,-m,models\FP32\squeezenet1.1.xml,-niter,100,-nthreads,2,-nstreams,1,-d,CPU,-cdir,cache,100,366.28,3.56,273.01
benchmark_app.py,-m,models\FP32-INT8\googlenet-v1.xml,-niter,100,-nthreads,2,-nstreams,1,-d,CPU,-cdir,cache,100,876.52,8.69,114.09
benchmark_app.py,-m,models\FP32-INT8\resnet-50-tf.xml,-niter,100,-nthreads,2,-nstreams,1,-d,CPU,-cdir,cache,100,1934.72,19.25,51.69
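
Because the report is plain CSV with no header row, it is easy to post-process. The snippet below is a hedged example, assuming pandas is installed and that every row in one report has the same number of fields (which holds when the same flags are swept for all runs).

import glob
import os

import pandas as pd

# Pick the most recently generated report ('result_DDmm-HHMMSS.csv').
report = max(glob.glob('result_*.csv'), key=os.path.getmtime)

# '#' lines are the environment header; data rows have no column names.
df = pd.read_csv(report, comment='#', header=None)

# The last four columns are: count, duration (ms), latency AVG (ms), throughput (fps).
last = df.shape[1]
df = df.rename(columns={last - 4: 'count', last - 3: 'duration_ms',
                        last - 2: 'latency_avg_ms', last - 1: 'throughput_fps'})
print(df[['count', 'duration_ms', 'latency_avg_ms', 'throughput_fps']].describe())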


Owner
Yasunori Shimura