weixin_39532352
2021-01-08 06:45

There is something wrong with the official yolov3_onnx sample

I tried to run the yolov3_onnx sample. When I run python yolov3_to_onnx.py, it prints:

Layer of type yolo not supported, skipping ONNX node generation.
Layer of type yolo not supported, skipping ONNX node generation.
Layer of type yolo not supported, skipping ONNX node generation.

Finally, it fails with:

==> Context: Bad node spec: input: "085_convolutional_lrelu" input: "086_upsample_scale" output: "086_upsample" name: "086_upsample" op_type: "Upsample" attribute { name: "mode" s: "nearest" type: STRING }

The ONNX file yolov3.onnx is not created.

This question comes from the open-source project: NVIDIA/TensorRT


15 replies

  • weixin_39895684 4 months ago

    Hi, I fixed this problem by downgrading the onnx version from 1.4.1 to 1.2.1.
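
    For reference, a minimal sketch of that downgrade with pip (the 1.2.1 pin comes from this comment; adjust to your own environment):

    pip uninstall -y onnx
    pip install onnx==1.2.1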

  • weixin_39791653 4 months ago

    I installed 1.5.1, 1.2.1 and 1.1.1 (as recommended in the penolove/yolov3-tensorrt repository) and still reach this error or warning:

    
    Layer of type yolo not supported, skipping ONNX node generation.
    Layer of type yolo not supported, skipping ONNX node generation.
    Layer of type yolo not supported, skipping ONNX node generation.
    
  • weixin_39895684 4 months ago

    -hayati Hi, if you see that a .onnx file has been created with the same name as your model, there is no problem. I also saw such warnings, but the generated model turned out to be good.

  • weixin_39791653 4 months ago

    Thanks for your fast response. Now I have a new problem.

    When I try to install TensorRT, either via deb or by cloning TensorRT Open Source Software from GitHub and building it with cmake, it gives me the following errors:

    First Method:

    cmake .. -DTRT_LIB_DIR=$TRT_RELEASE/lib -DTRT_BIN_DIR=`pwd`/out

    
    Building for TensorRT version: 6.0.1.0, library version: 6.0.1
    -- Targeting TRT Platform: x86_64
    -- CUDA version set to 10.1
    -- cuDNN version set to 7.5
    -- Protobuf version set to 3.0.0
    -- Using libprotobuf /home/aistation/Downloads/FlareGet/Compressed/Applications/TensorRT/build/third_party.protobuf/lib/libprotobuf.a
    -- ========================= Importing and creating target nvinfer ==========================
    -- Looking for library nvinfer
    -- Library that was found nvinfer_LIB_PATH-NOTFOUND
    -- ==========================================================================================
    -- ========================= Importing and creating target nvuffparser ==========================
    -- Looking for library nvparsers
    -- Library that was found nvparsers_LIB_PATH-NOTFOUND
    -- ==========================================================================================
    -- Protobuf proto/trtcaffe.proto -> proto/trtcaffe.pb.cc proto/trtcaffe.pb.h
    -- /home/aistation/Downloads/FlareGet/Compressed/Applications/TensorRT/build/parsers/caffe
    -- 
    -- ******** Summary ********
    --   CMake version         : 3.15.3
    --   CMake command         : /usr/local/bin/cmake
    --   System                : Linux
    --   C++ compiler          : /usr/bin/g++
    --   C++ compiler version  : 7.4.0
    --   CXX flags             : -Wno-deprecated-declarations  -DBUILD_SYSTEM=cmake_oss -Wall -Wno-deprecated-declarations -Wno-unused-function -Wnon-virtual-dtor
    --   Build type            : Release
    --   Compile definitions   : _PROTOBUF_INSTALL_DIR=/home/aistation/Downloads/FlareGet/Compressed/Applications/TensorRT/build;ONNX_NAMESPACE=onnx2trt_onnx
    --   CMAKE_PREFIX_PATH     : 
    --   CMAKE_INSTALL_PREFIX  : /lib/..
    --   CMAKE_MODULE_PATH     : 
    -- 
    --   ONNX version          : 1.3.0
    --   ONNX NAMESPACE        : onnx2trt_onnx
    --   ONNX_BUILD_TESTS      : OFF
    --   ONNX_BUILD_BENCHMARKS : OFF
    --   ONNX_USE_LITE_PROTO   : OFF
    --   ONNXIFI_DUMMY_BACKEND : OFF
    -- 
    --   Protobuf compiler     : 
    --   Protobuf includes     : 
    --   Protobuf libraries    : 
    --   BUILD_ONNX_PYTHON     : OFF
    -- GPU_ARCH is not defined. Generating CUDA code for default SMs.
    -- Found TensorRT headers at /home/aistation/Downloads/FlareGet/Compressed/Applications/TensorRT/include
    -- Find TensorRT libs at TENSORRT_LIBRARY_INFER-NOTFOUND;TENSORRT_LIBRARY_INFER_PLUGIN-NOTFOUND
    -- Could NOT find TENSORRT (missing: TENSORRT_LIBRARY) 
    ERRORCannot find TensorRT library.
    -- Adding new sample: sample_char_rnn
    --     - Parsers Used: none
    --     - InferPlugin Used: OFF
    --     - Licensing: opensource
    -- Adding new sample: sample_dynamic_reshape
    --     - Parsers Used: onnx
    --     - InferPlugin Used: OFF
    --     - Licensing: opensource
    -- Adding new sample: sample_fasterRCNN
    --     - Parsers Used: caffe
    --     - InferPlugin Used: ON
    --     - Licensing: opensource
    -- Adding new sample: sample_googlenet
    --     - Parsers Used: caffe
    --     - InferPlugin Used: OFF
    --     - Licensing: opensource
    -- Adding new sample: sample_int8
    --     - Parsers Used: caffe
    --     - InferPlugin Used: ON
    --     - Licensing: opensource
    -- Adding new sample: sample_int8_api
    --     - Parsers Used: onnx
    --     - InferPlugin Used: OFF
    --     - Licensing: opensource
    -- Adding new sample: sample_mlp
    --     - Parsers Used: caffe
    --     - InferPlugin Used: OFF
    --     - Licensing: opensource
    -- Adding new sample: sample_mnist
    --     - Parsers Used: caffe
    --     - InferPlugin Used: OFF
    --     - Licensing: opensource
    -- Adding new sample: sample_mnist_api
    --     - Parsers Used: caffe
    --     - InferPlugin Used: OFF
    --     - Licensing: opensource
    -- Adding new sample: sample_movielens
    --     - Parsers Used: uff
    --     - InferPlugin Used: OFF
    --     - Licensing: opensource
    -- Adding new sample: sample_movielens_mps
    --     - Parsers Used: uff
    --     - InferPlugin Used: OFF
    --     - Licensing: opensource
    -- Adding new sample: sample_nmt
    --     - Parsers Used: none
    --     - InferPlugin Used: OFF
    --     - Licensing: opensource
    -- Adding new sample: sample_onnx_mnist
    --     - Parsers Used: onnx
    --     - InferPlugin Used: OFF
    --     - Licensing: opensource
    -- Adding new sample: sample_plugin
    --     - Parsers Used: caffe
    --     - InferPlugin Used: ON
    --     - Licensing: opensource
    -- Adding new sample: sample_reformat_free_io
    --     - Parsers Used: caffe
    --     - InferPlugin Used: OFF
    --     - Licensing: opensource
    -- Adding new sample: sample_ssd
    --     - Parsers Used: caffe
    --     - InferPlugin Used: ON
    --     - Licensing: opensource
    -- Adding new sample: sample_uff_fasterRCNN
    --     - Parsers Used: uff
    --     - InferPlugin Used: ON
    --     - Licensing: opensource
    -- Adding new sample: sample_uff_maskRCNN
    --     - Parsers Used: uff
    --     - InferPlugin Used: ON
    --     - Licensing: opensource
    -- Adding new sample: sample_uff_mnist
    --     - Parsers Used: uff
    --     - InferPlugin Used: OFF
    --     - Licensing: opensource
    -- Adding new sample: sample_uff_plugin_v2_ext
    --     - Parsers Used: uff
    --     - InferPlugin Used: OFF
    --     - Licensing: opensource
    -- Adding new sample: sample_uff_ssd
    --     - Parsers Used: uff
    --     - InferPlugin Used: ON
    --     - Licensing: opensource
    -- Adding new sample: trtexec
    --     - Parsers Used: caffe;uff;onnx
    --     - InferPlugin Used: ON
    --     - Licensing: opensource
    CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
    Please set them or make sure they are set and tested correctly in the CMake files:
    TENSORRT_LIBRARY_INFER
        linked by target "nvonnxparser_static" in directory /home/aistation/Downloads/FlareGet/Compressed/Applications/TensorRT/parsers/onnx
        linked by target "nvonnxparser" in directory /home/aistation/Downloads/FlareGet/Compressed/Applications/TensorRT/parsers/onnx
    TENSORRT_LIBRARY_INFER_PLUGIN
        linked by target "nvonnxparser_static" in directory /home/aistation/Downloads/FlareGet/Compressed/Applications/TensorRT/parsers/onnx
        linked by target "nvonnxparser" in directory /home/aistation/Downloads/FlareGet/Compressed/Applications/TensorRT/parsers/onnx
    
    -- Configuring incomplete, errors occurred!
    See also "/home/aistation/Downloads/FlareGet/Compressed/Applications/TensorRT/build/CMakeFiles/CMakeOutput.log".
    See also "/home/aistation/Downloads/FlareGet/Compressed/Applications/TensorRT/build/CMakeFiles/CMakeError.log".
    
    

    Second Method: installing TensorRT on Ubuntu Desktop with sudo apt-get install tensorrt

    
    Reading package lists... Done
    Building dependency tree       
    Reading state information... Done
    Some packages could not be installed. This may mean that you have
    requested an impossible situation or if you are using the unstable
    distribution that some required packages have not yet been created
    or been moved out of Incoming.
    The following information may help to resolve the situation:
    
    The following packages have unmet dependencies:
     tensorrt : Depends: libnvinfer5 (= 5.0.2-1+cuda10.0) but it is not going to be installed
                Depends: libnvinfer-dev (= 5.0.2-1+cuda10.0) but it is not going to be installed
                Depends: libnvinfer-samples (= 5.0.2-1+cuda10.0) but it is not going to be installed
    E: Unable to correct problems, you have held broken packages.
    
  • weixin_39895684 4 months ago

    -hayati May I know whether your installation via the first method was executed in the container that the instructions told you to run, and whether you have downloaded the TensorRT binary release?

  • weixin_39791653 4 months ago

    Thanks for your fast reply.

    I tried to install it directly, without using Docker or a container. I downloaded the binary installation file and cloned the git files.

  • weixin_39947396 4 months ago

    -hayati I met the same problems as you... I tried 1.2.1, 1.4.1, 1.6... so what is the final solution? Thanks!

  • weixin_39791653 4 months ago

    I tried to use these methods to convert yolo weights to onnx and then to tensorrt, but I could not with the methods available in GitHub repos. So I am trying DeepStream by installing it on a Jetson Nano card. Please see: Trying to convert .onnx file to .trt and reach to this error: [5] Assertion failed: tensors.count(output.name()) #274

  • weixin_39532352 4 months ago

    Try searching for yolo tensorrt and select the Python language; look at Cw-zero/TensorRT_yolo3. I used its yolo_to_onnx.py to get the .onnx file.

  • weixin_39791653 4 months ago

    Try searching for yolo tensorrt and select the Python language; look at Cw-zero/TensorRT_yolo3. I used its yolo_to_onnx.py to get the .onnx file.

    Converting yolo (from darknet) weights to onnx is easy; the hardest part is converting onnx to tensorrt.

  • weixin_39532352 4 months ago

    The official yolo layer has poor performance. Try searching for yolo tensorrt, select the Python language, and pick xuwanqi/yolov3-tensorrt; the pull requests there give the solutions.

  • weixin_39707941 4 months ago

    -hayati this repo is just the open-source components of TensorRT. As mentioned in the README, you first have to install the full release of TensorRT: https://developer.nvidia.com/nvidia-tensorrt-download. That installs several things, including libnvinfer*, which are required for building the OSS components and which were missing when you built in this comment: https://github.com/NVIDIA/TensorRT/issues/111#issuecomment-532689019
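
    For what it's worth, a rough sketch of the flow described above, assuming the tar package of the TensorRT 6.0 GA release (the exact archive name depends on your OS/CUDA combination):

    # Extract the TensorRT GA release downloaded from the page above
    tar -xzf TensorRT-6.0.1.5.Ubuntu-18.04.x86_64-gnu.cuda-10.1.cudnn7.6.tar.gz
    export TRT_RELEASE=`pwd`/TensorRT-6.0.1.5

    # Point the OSS build at the release libraries so nvinfer and
    # nvparsers are no longer NOTFOUND:
    cd TensorRT/build
    cmake .. -DTRT_LIB_DIR=$TRT_RELEASE/lib -DTRT_BIN_DIR=`pwd`/out
    make -j$(nproc)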

  • weixin_39707941 4 months ago

    I've been able to run the yolov3_onnx python sample a few times recently. It should work without the OSS components, just after installing the release.

    However, I tend to use the NGC container over installing the source to avoid the host-side dependencies when possible.

    You can try to run the sample using nvcr.io/nvidia/tensorrt:19.10-py3 and see if that works for you. Docs on the NGC/docker container: https://ngc.nvidia.com/catalog/containers/nvidia:tensorrt
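
    For reference, pulling and entering that container might look like this (assuming nvidia-docker is set up on the host):

    docker pull nvcr.io/nvidia/tensorrt:19.10-py3
    docker run --runtime=nvidia -it nvcr.io/nvidia/tensorrt:19.10-py3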

  • weixin_39707941 4 months ago

    Layer of type yolo not supported, skipping ONNX node generation.
    Layer of type yolo not supported, skipping ONNX node generation.
    Layer of type yolo not supported, skipping ONNX node generation.

    FYI, I'm pretty sure this warning is harmless / doesn't affect the success of the network.

  • weixin_39929259 4 months ago

    I've been able to run the yolov3_onnx Python sample a few times recently. It should work without the OSS components, just after installing the release.

    However, I tend to use the NGC container over installing the source to avoid the host-side dependencies when possible.

    You can try to run the sample using nvcr.io/nvidia/tensorrt:19.10-py3 and see if that works for you. Docs on the NGC/docker container: https://ngc.nvidia.com/catalog/containers/nvidia:tensorrt

    When I run yolov3_to_onnx.py, my environment is as follows:

    TensorRT Version: 6.0.1.5

    GPU Type: 2080Ti

    CUDA Version: 10.0

    CUDNN Version: 7.6.5

    Operating System + Version: Ubuntu 18.04

    Python Version: 3.6

    PyTorch Version: 1.4.0

    onnx Version: 1.5.0

    The rest of this sample can be run with either version of Python, so I masked (commented out) this check in the code: if sys.version_info[0] > 2:. The error is as follows:

    TypeError: Unicode-objects must be encoded before hashing
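
    For what it's worth, that TypeError is Python 3's hashlib refusing a plain str; it only accepts bytes. A minimal illustration of the failure and the usual fix (not the sample's actual code):

    import hashlib

    text = "some string to checksum"
    # Passing the str directly, hashlib.md5(text), raises "TypeError:
    # Unicode-objects must be encoded before hashing" on Python 3,
    # so encode to bytes first:
    digest = hashlib.md5(text.encode("utf-8")).hexdigest()
    print(digest)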

    Can I refer to your conversion code? Thank you!

