
[Environment Build] onnxruntime

2022-08-09 09:16:00 YunZhe.

1, Introduction
  onnxruntime (ONNX Runtime) is an inference engine for running ONNX models.

2, install
2.1 cuda, cudnn
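
The post assumes CUDA and cuDNN are already installed. As a quick sanity check, here is a minimal sketch that uses PyTorch (which the later examples depend on anyway) to confirm the CUDA/cuDNN stack is visible:

# minimal sanity check via PyTorch (assumes torch is already installed)
import torch
print(torch.version.cuda)              # CUDA version this torch build targets, e.g. 11.0
print(torch.backends.cudnn.version())  # cuDNN version as an integer, e.g. 8005
print(torch.cuda.is_available())       # True if a usable GPU is detected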
2.2 cmake, version >= 3.13.0

sudo apt-get install libssl-dev
sudo apt-get autoremove cmake   # uninstall the old cmake
wget https://cmake.org/files/v3.17/cmake-3.17.3.tar.gz
tar -xf cmake-3.17.3.tar.gz
cd cmake-3.17.3
./bootstrap
make -j 8
sudo make install
cmake -version
# expected output:
# cmake version 3.17.3
# CMake suite maintained and supported by Kitware (kitware.com/cmake).

2.3 tensorrt
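
TensorRT installation itself is not covered here. Assuming the TensorRT Python bindings are installed and $TENSORRT_ROOT points at the install directory, a quick check might look like:

# minimal check, assuming the TensorRT Python bindings are installed
import tensorrt
print(tensorrt.__version__)   # prints the installed TensorRT version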
2.4 onnxruntime

conda activate py36   # switch to the target virtual environment
git clone https://github.com/microsoft/onnxruntime.git
cd onnxruntime
git submodule sync
git submodule update --init --recursive
./build.sh \
    --use_cuda \
    --cuda_version=11.0 \
    --cuda_home=/usr/local/cuda \
    --cudnn_home=/usr/local/cuda \
    --use_tensorrt --tensorrt_home=$TENSORRT_ROOT \
    --build_shared_lib --enable_pybind \
    --build_wheel --update --build
pip install build/Linux/Debug/dist/onnxruntime_gpu_tensorrt-1.6.0-cp36-cp36m-linux_x86_64.whl

# to build a specific release instead, clone that tag:
git clone -b v1.6.0 https://github.com/microsoft/onnxruntime.git
cd onnxruntime
git checkout -b v1.6.0
git submodule sync
git submodule update --init --recursive
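
After installing the wheel, it is worth confirming that the GPU build is actually the one being picked up; a minimal sketch:

# sanity check for the freshly installed wheel
import onnxruntime
print(onnxruntime.__version__)   # expect 1.6.0 for the wheel built above
print(onnxruntime.get_device())  # 'GPU' for a CUDA-enabled build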

3, pip install

pip install onnxruntime-gpu==1.6.0 onnx==1.9.0 onnxconverter_common==1.6.0   # cuda 10.2
pip install onnxruntime-gpu==1.8.1 onnx==1.9.0 onnxconverter_common==1.8.1   # cuda 11.0

Basic usage in Python:

import onnxruntime

onnxruntime.get_available_providers()   # list the execution providers in this build
sess = onnxruntime.InferenceSession(onnx_path)
sess.set_providers(['CUDAExecutionProvider'], [{'device_id': 0}])
result = sess.run([output_name], {input_name: x})   # x is the input
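
Recent onnxruntime releases also accept the execution providers directly in the constructor; a minimal sketch (onnx_path is the same placeholder as above):

# equivalent session creation with the providers argument
import onnxruntime

sess = onnxruntime.InferenceSession(
    onnx_path,  # path to an exported .onnx file
    providers=['CUDAExecutionProvider', 'CPUExecutionProvider'],
)
print(sess.get_providers())  # confirms which providers were actually loaded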

4, onnxruntime and pytorch inference

# -*- coding: utf-8 -*-
import torch
from torchvision import models
import onnxruntime
import numpy as np

model = models.resnet18(pretrained=True)
model.eval().cuda()

# pytorch inference
x = torch.rand(2, 3, 224, 224).cuda()
out_pt = model(x)
print(out_pt.size())

# export to onnx with a dynamic batch dimension
onnx_path = "resnet18.onnx"
dynamic_axes = {'input': {0: 'batch_size'}}
torch.onnx.export(model, x, onnx_path,
                  export_params=True,
                  opset_version=11,
                  do_constant_folding=True,
                  input_names=['input'],
                  dynamic_axes=dynamic_axes)

sess = onnxruntime.InferenceSession(onnx_path)
sess.set_providers(['CUDAExecutionProvider'], [{'device_id': 0}])
input_name = sess.get_inputs()[0].name
output_name = sess.get_outputs()[0].name

# onnxruntime inference
xx = x.cpu().numpy().astype(np.float32)
result = sess.run([output_name], {input_name: xx})   # xx is the input
print(result[0].shape)

# MSE between the pytorch and onnxruntime outputs
mse = np.mean((out_pt.data.cpu().numpy() - result[0]) ** 2)
print(mse)

Output results

torch.Size([2, 1000])
(2, 1000)
9.275762e-13
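
An MSE around 9e-13 means the onnxruntime output matches PyTorch to floating-point precision. A common alternative check is numpy's tolerance-based assertion; a sketch reusing out_pt and result from the script above (the rtol/atol values are illustrative, not prescribed by either library):

# raises an AssertionError if the outputs diverge beyond the tolerances
import numpy as np

np.testing.assert_allclose(
    out_pt.data.cpu().numpy(),  # pytorch output
    result[0],                  # onnxruntime output
    rtol=1e-3, atol=1e-5,
)
print("pytorch and onnxruntime outputs match")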

Copyright notice
This article was written by [YunZhe.]; please include a link to the original when reposting. Thanks.
https://yzsam.com/2022/221/202208090903330430.html