
[Environment Setup] onnx-tensorrt

2022-08-09 09:03:00 .云哲.

1. Introduction
onnx-tensorrt is the TensorRT backend for ONNX: it parses ONNX models into TensorRT engines and runs inference on them.

2. Installation
2.1 CUDA and cuDNN
2.2 cmake
2.3 protobuf, version >= 3.8.x

sudo apt-get install autoconf automake libtool curl make g++ unzip
sudo apt-get autoremove libprotobuf-dev protobuf-compiler # remove the packaged protobuf first
git clone https://github.com/google/protobuf.git
cd protobuf
git submodule sync
git submodule update --init --recursive 
./autogen.sh
./configure --prefix=/usr/local/protobuf # prefix matched to the PATH/PKG_CONFIG_PATH exports below
make -j 8
make check
sudo make install
sudo ldconfig # refresh shared library cache.
protoc --version

export PATH=$PATH:/usr/local/protobuf/bin/ 
export PKG_CONFIG_PATH=/usr/local/protobuf/lib/pkgconfig/
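
The two exports above only last for the current shell; appending them to ~/.bashrc makes them permanent. To check that the freshly built protobuf is the one being picked up (assuming the /usr/local/protobuf prefix used above):

protoc --version # should print libprotoc 3.x
pkg-config --modversion protobuf # should report the same version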

2.4 TensorRT, version 7.1.3.4; make the libraries under tensorrt/lib visible to the system linker, e.g. by copying them into /usr/lib, as sketched below
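
A minimal sketch of step 2.4, assuming the TensorRT tarball was unpacked to $HOME/TensorRT (a hypothetical location; adjust to your installation):

export TENSORRT_ROOT=$HOME/TensorRT # also used by the cmake call below
sudo cp -P $TENSORRT_ROOT/lib/lib* /usr/lib/ # or symlink, or extend LD_LIBRARY_PATH instead
sudo ldconfig # refresh the shared library cache
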
2.5 onnx-tensorrt

sudo apt-get install libprotobuf-dev protobuf-compiler
sudo apt-get install swig
git clone --branch 7.1 https://github.com/onnx/onnx-tensorrt.git 
cd onnx-tensorrt
git submodule sync
git submodule update --init --recursive 
mkdir build && cd build
cmake .. -DTENSORRT_ROOT=$TENSORRT_ROOT
make -j 8
sudo make install
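
The build also installs the onnx2trt command-line tool, which offers a quick way to verify the setup by converting a model to a serialized TensorRT engine (model.onnx is a placeholder for your own file):

onnx2trt model.onnx -o model.trt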

Notes (both tweaks are needed for the Python bindings built in section 3):
1. In setup.py, add INC_DIRS = ["$HOME/TensorRT/include"] so the build can find the TensorRT headers.
2. In NvOnnxParser.h, add #define TENSORRTAPI if compilation fails with that macro undefined.
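
Note 2 can also be applied non-interactively from the onnx-tensorrt checkout; a sketch using GNU sed, which simply prepends the macro to the header and keeps a .bak backup:

sed -i.bak '1i #define TENSORRTAPI' NvOnnxParser.h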

3. Usage

python setup.py build # run inside your Python virtual environment
python setup.py install
python
>>> import onnx
>>> import onnx_tensorrt.backend as backend
>>> import numpy as np

>>> model = onnx.load("model.onnx")
>>> engine = backend.prepare(model, device='CUDA:0')
>>> input_data = np.random.random(size=(32, 3, 224, 224)).astype(np.float32)
>>> output_data = engine.run(input_data)[0]
>>> print(output_data)
>>> print(output_data.shape)
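
output_data.shape should match the model's output dimensions; for example, an ImageNet classifier fed the (32, 3, 224, 224) batch above would typically return a (32, 1000) array.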

Copyright notice
This article was written by [.云哲.]; please include a link to the original when reposting:
https://blog.csdn.net/luolinll1212/article/details/107097161