A very simple NCHW and NHWC conversion tool for ONNX. It changes each input OP to the specified input order and can also swap the RGB/BGR channel order. Simple Channel Converter for ONNX.

Overview

scc4onnx

https://github.com/PINTO0309/simple-onnx-processing-tools

Key concept

  • Allows the user to specify the name of each input OP whose input order should be changed.
  • Any number of dimensions can be converted, not just 4-dimensional layouts such as NCHW and NHWC.
  • Simply rewrites the input order of the specified input OP and inserts a Transpose right after it, so the processing of subsequent OPs is unaffected (see the sketch below).
  • Allows the user to swap the RGB and BGR channel order via an option.
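
As a rough intuition for why the inserted Transpose leaves downstream OPs unaffected, the NumPy sketch below (an illustration only, not scc4onnx internals; the NCHW shape is assumed) permutes an input to NHWC and then transposes it back to the original layout:

import numpy as np

x_nchw = np.random.rand(1, 3, 240, 320).astype(np.float32)   # assumed original NCHW input
x_nhwc = x_nchw.transpose(0, 2, 3, 1)                         # new input order [0,2,3,1]
restored = x_nhwc.transpose(0, 3, 1, 2)                       # what the inserted Transpose does
assert np.array_equal(restored, x_nchw)                       # subsequent OPs see the same data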

1. Setup

1-1. HostPC

### option
$ echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.bashrc \
&& source ~/.bashrc

### run
$ pip install -U onnx \
&& python3 -m pip install -U onnx_graphsurgeon --index-url https://pypi.ngc.nvidia.com \
&& pip install -U scc4onnx

1-2. Docker

### docker pull
$ docker pull pinto0309/scc4onnx:latest

### docker build
$ docker build -t pinto0309/scc4onnx:latest .

### docker run
$ docker run --rm -it -v `pwd`:/workdir pinto0309/scc4onnx:latest
$ cd /workdir

2. CLI Usage

$ scc4onnx -h

usage:
  scc4onnx [-h]
  --input_onnx_file_path INPUT_ONNX_FILE_PATH
  --output_onnx_file_path OUTPUT_ONNX_FILE_PATH
  [--input_op_names_and_order_dims INPUT_OP_NAME ORDER_DIM]
  [--channel_change_inputs INPUT_OP_NAME DIM]
  [--non_verbose]

optional arguments:
  -h, --help
      show this help message and exit

  --input_onnx_file_path INPUT_ONNX_FILE_PATH
      Input onnx file path.

  --output_onnx_file_path OUTPUT_ONNX_FILE_PATH
      Output onnx file path.

  --input_op_names_and_order_dims INPUT_OP_NAME ORDER_DIM
      Specify the name of the input_op to be dimensionally changed and the order of the
      dimensions after the change.
      The name of the input_op to be dimensionally changed can be specified multiple times.

      e.g.
      --input_op_names_and_order_dims aaa [0,3,1,2] \
      --input_op_names_and_order_dims bbb [0,2,3,1] \
      --input_op_names_and_order_dims ccc [0,3,1,2,4,5]

  --channel_change_inputs INPUT_OP_NAME DIM
      Change the channel order of RGB and BGR.
      If the original model is RGB, it is transposed to BGR.
      If the original model is BGR, it is transposed to RGB.
      It can be selectively specified from among the OP names specified
      in --input_op_names_and_order_dims.
      OP names not specified in --input_op_names_and_order_dims are ignored.
      It can be specified as many times as there are OP names specified
      in --input_op_names_and_order_dims.
      --channel_change_inputs op_name dimension_number_representing_the_channel
      dimension_number_representing_the_channel must be the dimension position before
      the change specified in --input_op_names_and_order_dims.
      For example, dimension_number_representing_the_channel is 1 for NCHW and 3 for NHWC.

      e.g.
      --channel_change_inputs aaa 3 \
      --channel_change_inputs bbb 1 \
      --channel_change_inputs ccc 5

  --non_verbose
      Do not show all information logs. Only error logs are displayed.
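
The order list passed to --input_op_names_and_order_dims is simply a permutation of the original input dimensions. A minimal sketch of the semantics (the NCHW shape below is assumed for illustration):

original_shape = (1, 3, 240, 320)                       # e.g. an NCHW input
order = [0, 2, 3, 1]                                    # NCHW -> NHWC
new_shape = tuple(original_shape[i] for i in order)
print(new_shape)                                        # (1, 240, 320, 3)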

3. In-script Usage

$ python
>>> from scc4onnx import order_conversion
>>> help(order_conversion)
Help on function order_conversion in module scc4onnx.onnx_input_order_converter:

order_conversion(
  input_op_names_and_order_dims: Union[dict, NoneType] = None,
  channel_change_inputs: Union[dict, NoneType] = None,
  input_onnx_file_path: Union[str, NoneType] = '',
  output_onnx_file_path: Union[str, NoneType] = '',
  onnx_graph: Union[onnx.onnx_ml_pb2.ModelProto, NoneType] = None,
  non_verbose: Union[bool, NoneType] = False
) -> onnx.onnx_ml_pb2.ModelProto

    Parameters
    ----------
    input_onnx_file_path: Optional[str]
        Input onnx file path.
        Either input_onnx_file_path or onnx_graph must be specified.
    
    output_onnx_file_path: Optional[str]
        Output onnx file path.
        If output_onnx_file_path is not specified, no .onnx file is output.
    
    onnx_graph: Optional[onnx.ModelProto]
        onnx.ModelProto.
        Either input_onnx_file_path or onnx_graph must be specified.
        If onnx_graph is specified, input_onnx_file_path is ignored and onnx_graph is processed.
    
    input_op_names_and_order_dims: Optional[dict]
        Specify the name of the input_op to be dimensionally changed and
        the order of the dimensions after the change.
        The name of the input_op to be dimensionally changed
        can be specified multiple times.
    
        e.g.
        input_op_names_and_order_dims = {
            "input_op_name1": [0,3,1,2],
            "input_op_name2": [0,2,3,1],
            "input_op_name3": [0,3,1,2,4,5],
        }
    
    channel_change_inputs: Optional[dict]
        Change the channel order of RGB and BGR.
        If the original model is RGB, it is transposed to BGR.
        If the original model is BGR, it is transposed to RGB.
        It can be selectively specified from among the OP names
        specified in input_op_names_and_order_dims.
        OP names not specified in input_op_names_and_order_dims are ignored.
        It can be specified as many times as there are OP names
        specified in input_op_names_and_order_dims.
        channel_change_inputs = {"op_name": dimension_number_representing_the_channel}
        dimension_number_representing_the_channel must be
        the dimension position before the change specified in input_op_names_and_order_dims.
        For example, dimension_number_representing_the_channel is 1 for NCHW and 3 for NHWC.
    
        e.g.
        channel_change_inputs = {
            "aaa": 1,
            "bbb": 3,
            "ccc": 2,
        }
    
    non_verbose: Optional[bool]
        Do not show all information logs. Only error logs are displayed.
        Default: False
    
    Returns
    -------
    order_converted_graph: onnx.ModelProto
        Order converted onnx ModelProto

4. CLI Execution

$ scc4onnx \
--input_onnx_file_path crestereo_next_iter2_240x320.onnx \
--output_onnx_file_path crestereo_next_iter2_240x320_ord.onnx \
--input_op_names_and_order_dims left [0,2,3,1] \
--input_op_names_and_order_dims right [0,2,3,1] \
--channel_change_inputs left 1 \
--channel_change_inputs right 1
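
As an optional sanity check (a sketch, not part of scc4onnx; it assumes the output file name from the command above), the converted input shapes can be inspected with the onnx package that scc4onnx already depends on:

import onnx

model = onnx.load("crestereo_next_iter2_240x320_ord.onnx")
for inp in model.graph.input:
    dims = [d.dim_value for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)   # left/right should now report NHWC-ordered shapes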

5. In-script Execution

from scc4onnx import order_conversion

order_converted_graph = order_conversion(
    onnx_graph=graph,
    input_op_names_and_order_dims={"left": [0,2,3,1], "right": [0,2,3,1]},
    channel_change_inputs={"left": 1, "right": 1},
    non_verbose=True,
)
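
A slightly fuller sketch of the same call, assuming the file names from section 4; it loads the model with onnx.load and saves the returned ModelProto with onnx.save (alternatively, output_onnx_file_path can be passed so that order_conversion writes the file itself):

import onnx
from scc4onnx import order_conversion

graph = onnx.load("crestereo_next_iter2_240x320.onnx")
order_converted_graph = order_conversion(
    onnx_graph=graph,
    input_op_names_and_order_dims={"left": [0,2,3,1], "right": [0,2,3,1]},
    channel_change_inputs={"left": 1, "right": 1},
    non_verbose=True,
)
onnx.save(order_converted_graph, "crestereo_next_iter2_240x320_ord.onnx")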

6. Sample

6-1. Transpose only

$ scc4onnx \
--input_onnx_file_path crestereo_next_iter2_240x320.onnx \
--output_onnx_file_path crestereo_next_iter2_240x320_ord.onnx \
--input_op_names_and_order_dims left [0,2,3,1] \
--input_op_names_and_order_dims right [0,2,3,1]

6-2. Transpose + RGB<->BGR

$ scc4onnx \
--input_onnx_file_path crestereo_next_iter2_240x320.onnx \
--output_onnx_file_path crestereo_next_iter2_240x320_ord.onnx \
--input_op_names_and_order_dims left [0,2,3,1] \
--input_op_names_and_order_dims right [0,2,3,1] \
--channel_change_inputs left 1 \
--channel_change_inputs right 1

6-3. RGB<->BGR only

$ scc4onnx \
--input_onnx_file_path crestereo_next_iter2_240x320.onnx \
--output_onnx_file_path crestereo_next_iter2_240x320_ord.onnx \
--channel_change_inputs left 1 \
--channel_change_inputs right 1
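
Conceptually, the RGB<->BGR swap just reverses the channel axis of the input tensor. The NumPy sketch below (an illustration with an assumed NCHW shape, not scc4onnx internals) shows the equivalent operation:

import numpy as np

rgb_nchw = np.random.rand(1, 3, 240, 320).astype(np.float32)
bgr_nchw = rgb_nchw[:, ::-1, :, :]    # reverse dim 1, i.e. R,G,B -> B,G,R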

7. Issues

https://github.com/PINTO0309/simple-onnx-processing-tools/issues

Releases(1.0.5)
  • 1.0.5(Sep 9, 2022)

    • Add short form parameter
      $ scc4onnx -h
      
      usage:
        scc4onnx [-h]
        -if INPUT_ONNX_FILE_PATH
        -of OUTPUT_ONNX_FILE_PATH
        [-ioo INPUT_OP_NAME ORDER_DIM]
        [-cci INPUT_OP_NAME DIM]
        [-n]
      
      optional arguments:
        -h, --help
            show this help message and exit
      
        -if INPUT_ONNX_FILE_PATH, --input_onnx_file_path INPUT_ONNX_FILE_PATH
            Input onnx file path.
      
        -of OUTPUT_ONNX_FILE_PATH, --output_onnx_file_path OUTPUT_ONNX_FILE_PATH
            Output onnx file path.
      
        -ioo INPUT_OP_NAMES_AND_ORDER_DIMS INPUT_OP_NAMES_AND_ORDER_DIMS, --input_op_names_and_order_dims INPUT_OP_NAMES_AND_ORDER_DIMS INPUT_OP_NAMES_AND_ORDER_DIMS
            Specify the name of the input_op to be dimensionally changed and the order of the
            dimensions after the change.
            The name of the input_op to be dimensionally changed can be specified multiple times.
      
            e.g.
            --input_op_names_and_order_dims aaa [0,3,1,2] \
            --input_op_names_and_order_dims bbb [0,2,3,1] \
            --input_op_names_and_order_dims ccc [0,3,1,2,4,5]
      
        -cci CHANNEL_CHANGE_INPUTS CHANNEL_CHANGE_INPUTS, --channel_change_inputs CHANNEL_CHANGE_INPUTS CHANNEL_CHANGE_INPUTS
            Change the channel order of RGB and BGR.
            If the original model is RGB, it is transposed to BGR.
            If the original model is BGR, it is transposed to RGB.
            It can be selectively specified from among the OP names specified
            in --input_op_names_and_order_dims.
            OP names not specified in --input_op_names_and_order_dims are ignored.
            It can be specified as many times as there are OP names specified
            in --input_op_names_and_order_dims.
            --channel_change_inputs op_name dimension_number_representing_the_channel
            dimension_number_representing_the_channel must be the dimension position before
            the change specified in --input_op_names_and_order_dims.
            For example, dimension_number_representing_the_channel is 1 for NCHW and 3 for NHWC.
      
            e.g.
            --channel_change_inputs aaa 3 \
            --channel_change_inputs bbb 1 \
            --channel_change_inputs ccc 5
      
        -n, --non_verbose
            Do not show all information logs. Only error logs are displayed.
      

    Full Changelog: https://github.com/PINTO0309/scc4onnx/compare/1.0.4...1.0.5

  • 1.0.4(May 25, 2022)

  • 1.0.3(May 15, 2022)

  • 1.0.2(May 10, 2022)

  • 1.0.1(Apr 19, 2022)

  • 1.0.0(Apr 18, 2022)

Owner
Katsuya Hyodo
Hobby programmer. Intel Software Innovator Program member.