Overview

scc4onnx

A very simple NCHW and NHWC conversion tool for ONNX. It changes each input OP to the specified input order and can also swap the channel order between RGB and BGR. Simple Channel Converter for ONNX.

https://github.com/PINTO0309/simple-onnx-processing-tools


Key concept

  • Allows the user to specify the name of the input OP whose input order should be changed.
  • Any number of dimensions can be handled, not only 4-dimensional layouts such as NCHW and NHWC.
  • Simply rewrites the input order of the input OP to the specified order and inserts a Transpose immediately after the input OP so that subsequent OPs are not affected (see the sketch after this list).
  • Allows the user to swap the channel order between RGB and BGR by specifying options.
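
A minimal NumPy sketch of this idea (illustration only, not scc4onnx code): if an NCHW input is rewritten to NHWC with order [0,2,3,1], the inserted Transpose applies the inverse permutation, so downstream OPs still receive NCHW data.

import numpy as np

# Hypothetical NHWC tensor fed to the converted model.
nhwc = np.random.rand(1, 240, 320, 3).astype(np.float32)

# Order requested for the input OP: NCHW -> NHWC.
order = [0, 2, 3, 1]

# The inserted Transpose uses the inverse permutation, restoring NCHW
# before the data reaches the original downstream OPs.
inverse_order = np.argsort(order)           # [0, 3, 1, 2]
restored_nchw = np.transpose(nhwc, inverse_order)

print(restored_nchw.shape)                  # (1, 3, 240, 320)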

1. Setup

1-1. HostPC

### option
$ echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc \
&& source ~/.bashrc

### run
$ pip install -U onnx \
&& python3 -m pip install -U onnx_graphsurgeon --index-url https://pypi.ngc.nvidia.com \
&& pip install -U scc4onnx

1-2. Docker

### docker pull
$ docker pull pinto0309/scc4onnx:latest

### docker build
$ docker build -t pinto0309/scc4onnx:latest .

### docker run
$ docker run --rm -it -v `pwd`:/workdir pinto0309/scc4onnx:latest
$ cd /workdir

2. CLI Usage

$ scc4onnx -h

usage:
  scc4onnx [-h]
  --input_onnx_file_path INPUT_ONNX_FILE_PATH
  --output_onnx_file_path OUTPUT_ONNX_FILE_PATH
  [--input_op_names_and_order_dims INPUT_OP_NAME ORDER_DIM]
  [--channel_change_inputs INPUT_OP_NAME DIM]
  [--non_verbose]

optional arguments:
  -h, --help
      show this help message and exit

  --input_onnx_file_path INPUT_ONNX_FILE_PATH
      Input onnx file path.

  --output_onnx_file_path OUTPUT_ONNX_FILE_PATH
      Output onnx file path.

  --input_op_names_and_order_dims INPUT_OP_NAME ORDER_DIM
      Specify the name of the input_op to be dimensionally changed and the order of the
      dimensions after the change.
      The name of the input_op to be dimensionally changed can be specified multiple times.

      e.g.
      --input_op_names_and_order_dims aaa [0,3,1,2] \
      --input_op_names_and_order_dims bbb [0,2,3,1] \
      --input_op_names_and_order_dims ccc [0,3,1,2,4,5]

  --channel_change_inputs INPUT_OP_NAME DIM
      Change the channel order of RGB and BGR.
      If the original model is RGB, it is transposed to BGR.
      If the original model is BGR, it is transposed to RGB.
      It can be selectively specified from among the OP names specified
      in --input_op_names_and_order_dims.
      OP names not specified in --input_op_names_and_order_dims are ignored.
      Can be specified as many times as the number of OP names specified
      in --input_op_names_and_order_dims.
      --channel_change_inputs op_name dimension_number_representing_the_channel
      dimension_number_representing_the_channel must specify the dimension position before
      the change in input_op_names_and_order_dims.
      For example, dimension_number_representing_the_channel is 1 for NCHW and 3 for NHWC.

      e.g.
      --channel_change_inputs aaa 3 \
      --channel_change_inputs bbb 1 \
      --channel_change_inputs ccc 5

  --non_verbose
      Do not show all information logs. Only error logs are displayed.

3. In-script Usage

$ python
>>> from scc4onnx import order_conversion
>>> help(order_conversion)
Help on function order_conversion in module scc4onnx.onnx_input_order_converter:

order_conversion(
  input_op_names_and_order_dims: Union[dict, NoneType] = None,
  channel_change_inputs: Union[dict, NoneType] = None,
  input_onnx_file_path: Union[str, NoneType] = '',
  output_onnx_file_path: Union[str, NoneType] = '',
  onnx_graph: Union[onnx.onnx_ml_pb2.ModelProto, NoneType] = None,
  non_verbose: Union[bool, NoneType] = False
) -> onnx.onnx_ml_pb2.ModelProto

    Parameters
    ----------
    input_onnx_file_path: Optional[str]
        Input onnx file path.
        Either input_onnx_file_path or onnx_graph must be specified.
    
    output_onnx_file_path: Optional[str]
        Output onnx file path.
        If output_onnx_file_path is not specified, no .onnx file is output.
    
    onnx_graph: Optional[onnx.ModelProto]
        onnx.ModelProto.
        Either input_onnx_file_path or onnx_graph must be specified.
        If onnx_graph is specified, input_onnx_file_path is ignored and onnx_graph is processed.
    
    input_op_names_and_order_dims: Optional[dict]
        Specify the name of the input_op to be dimensionally changed and
        the order of the dimensions after the change.
        The name of the input_op to be dimensionally changed
        can be specified multiple times.
    
        e.g.
        input_op_names_and_order_dims = {
            "input_op_name1": [0,3,1,2],
            "input_op_name2": [0,2,3,1],
            "input_op_name3": [0,3,1,2,4,5],
        }
    
    channel_change_inputs: Optional[dict]
        Change the channel order of RGB and BGR.
        If the original model is RGB, it is transposed to BGR.
        If the original model is BGR, it is transposed to RGB.
        It can be selectively specified from among the OP names
        specified in input_op_names_and_order_dims.
        OP names not specified in input_op_names_and_order_dims are ignored.
        Can be specified as many times as the number of OP names
        specified in input_op_names_and_order_dims.
        channel_change_inputs = {"op_name": dimension_number_representing_the_channel}
        dimension_number_representing_the_channel must specify
        the dimension position before the change in input_op_names_and_order_dims.
        For example, dimension_number_representing_the_channel is 1 for NCHW and 3 for NHWC.
    
        e.g.
        channel_change_inputs = {
            "aaa": 1,
            "bbb": 3,
            "ccc": 2,
        }
    
    non_verbose: Optional[bool]
        Do not show all information logs. Only error logs are displayed.
        Default: False
    
    Returns
    -------
    order_converted_graph: onnx.ModelProto
        Order converted onnx ModelProto
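
A minimal file-based sketch built from the parameters above (the file names and the input OP name "input0" are placeholders):

from scc4onnx import order_conversion

# Read model.onnx, rewrite the input OP "input0" from NCHW to NHWC,
# swap RGB/BGR on the original channel dimension 1, and save the result.
order_converted_graph = order_conversion(
    input_onnx_file_path="model.onnx",
    output_onnx_file_path="model_ord.onnx",
    input_op_names_and_order_dims={"input0": [0, 2, 3, 1]},
    channel_change_inputs={"input0": 1},
    non_verbose=True,
)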

4. CLI Execution

$ scc4onnx \
--input_onnx_file_path crestereo_next_iter2_240x320.onnx \
--output_onnx_file_path crestereo_next_iter2_240x320_ord.onnx \
--input_op_names_and_order_dims left [0,2,3,1] \
--input_op_names_and_order_dims right [0,2,3,1] \
--channel_change_inputs left 1 \
--channel_change_inputs right 1
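
To confirm the conversion, the input shapes of the generated model can be inspected with the onnx package (a quick check; the expected NHWC shape below assumes the 240x320 model above):

import onnx

model = onnx.load("crestereo_next_iter2_240x320_ord.onnx")
for graph_input in model.graph.input:
    dims = [d.dim_value for d in graph_input.type.tensor_type.shape.dim]
    print(graph_input.name, dims)   # e.g. left [1, 240, 320, 3]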

5. In-script Execution

from scc4onnx import order_conversion

order_converted_graph = order_conversion(
    onnx_graph=graph,
    input_op_names_and_order_dims={"left": [0,2,3,1], "right": [0,2,3,1]},
    channel_change_inputs={"left": 1, "right": 1},
    non_verbose=True,
)
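
The graph passed in above is an onnx.ModelProto, typically loaded from disk, and the returned ModelProto can be written back with onnx.save (a minimal sketch around the same call):

import onnx
from scc4onnx import order_conversion

graph = onnx.load("crestereo_next_iter2_240x320.onnx")
order_converted_graph = order_conversion(
    onnx_graph=graph,
    input_op_names_and_order_dims={"left": [0,2,3,1], "right": [0,2,3,1]},
    channel_change_inputs={"left": 1, "right": 1},
    non_verbose=True,
)
onnx.save(order_converted_graph, "crestereo_next_iter2_240x320_ord.onnx")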

6. Sample

6-1. Transpose only


$ scc4onnx \
--input_onnx_file_path crestereo_next_iter2_240x320.onnx \
--output_onnx_file_path crestereo_next_iter2_240x320_ord.onnx \
--input_op_names_and_order_dims left [0,2,3,1] \
--input_op_names_and_order_dims right [0,2,3,1]


6-2. Transpose + RGB<->BGR


$ scc4onnx \
--input_onnx_file_path crestereo_next_iter2_240x320.onnx \
--output_onnx_file_path crestereo_next_iter2_240x320_ord.onnx \
--input_op_names_and_order_dims left [0,2,3,1] \
--input_op_names_and_order_dims right [0,2,3,1] \
--channel_change_inputs left 1 \
--channel_change_inputs right 1


6-3. RGB<->BGR only


$ scc4onnx \
--input_onnx_file_path crestereo_next_iter2_240x320.onnx \
--output_onnx_file_path crestereo_next_iter2_240x320_ord.onnx \
--channel_change_inputs left 1 \
--channel_change_inputs right 1
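
For reference, swapping RGB and BGR on channel dimension 1 of an NCHW tensor is equivalent to reversing that axis; a hypothetical NumPy illustration of the same effect (not scc4onnx code):

import numpy as np

# Hypothetical RGB batch in NCHW layout.
rgb_nchw = np.random.rand(1, 3, 240, 320).astype(np.float32)

# Reversing dimension 1 turns RGB into BGR (and vice versa).
bgr_nchw = rgb_nchw[:, ::-1, :, :]

print(np.allclose(bgr_nchw[:, 0], rgb_nchw[:, 2]))  # True: channel 0 is now the old channel 2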


7. Issues

https://github.com/PINTO0309/simple-onnx-processing-tools/issues


Releases(1.0.5)
  • 1.0.5(Sep 9, 2022)

    • Add short form parameter
      $ scc4onnx -h
      
      usage:
        scc4onnx [-h]
        -if INPUT_ONNX_FILE_PATH
        -of OUTPUT_ONNX_FILE_PATH
        [-ioo INPUT_OP_NAME ORDER_DIM]
        [-cci INPUT_OP_NAME DIM]
        [-n]
      
      optional arguments:
        -h, --help
            show this help message and exit
      
        -if INPUT_ONNX_FILE_PATH, --input_onnx_file_path INPUT_ONNX_FILE_PATH
            Input onnx file path.
      
        -of OUTPUT_ONNX_FILE_PATH, --output_onnx_file_path OUTPUT_ONNX_FILE_PATH
            Output onnx file path.
      
        -ioo INPUT_OP_NAMES_AND_ORDER_DIMS INPUT_OP_NAMES_AND_ORDER_DIMS, --input_op_names_and_order_dims INPUT_OP_NAMES_AND_ORDER_DIMS INPUT_OP_NAMES_AND_ORDER_DIMS
            Specify the name of the input_op to be dimensionally changed and the order of the
            dimensions after the change.
            The name of the input_op to be dimensionally changed can be specified multiple times.
      
            e.g.
            --input_op_names_and_order_dims aaa [0,3,1,2] \
            --input_op_names_and_order_dims bbb [0,2,3,1] \
            --input_op_names_and_order_dims ccc [0,3,1,2,4,5]
      
        -cci CHANNEL_CHANGE_INPUTS CHANNEL_CHANGE_INPUTS, --channel_change_inputs CHANNEL_CHANGE_INPUTS CHANNEL_CHANGE_INPUTS
            Change the channel order of RGB and BGR.
            If the original model is RGB, it is transposed to BGR.
            If the original model is BGR, it is transposed to RGB.
            It can be selectively specified from among the OP names specified
            in --input_op_names_and_order_dims.
            OP names not specified in --input_op_names_and_order_dims are ignored.
            Multiple times can be specified as many times as the number of OP names specified
            in --input_op_names_and_order_dims.
            --channel_change_inputs op_name dimension_number_representing_the_channel
            dimension_number_representing_the_channel must specify the dimension position before
            the change in input_op_names_and_order_dims.
            For example, dimension_number_representing_the_channel is 1 for NCHW and 3 for NHWC.
      
            e.g.
            --channel_change_inputs aaa 3 \
            --channel_change_inputs bbb 1 \
            --channel_change_inputs ccc 5
      
        -n, --non_verbose
            Do not show all information logs. Only error logs are displayed.
      

    Full Changelog: https://github.com/PINTO0309/scc4onnx/compare/1.0.4...1.0.5

    Source code(tar.gz)
    Source code(zip)
  • 1.0.4(May 25, 2022)

  • 1.0.3(May 15, 2022)

  • 1.0.2(May 10, 2022)

  • 1.0.1(Apr 19, 2022)

  • 1.0.0(Apr 18, 2022)

Owner
Katsuya Hyodo
Hobby programmer. Intel Software Innovator Program member.