weixin_39981681
2020-12-09 10:06

Error when converting .weights to ONNX

$ python3 yolov3_to_onnx.py
Layer of type yolo not supported, skipping ONNX node generation.
Layer of type yolo not supported, skipping ONNX node generation.
Layer of type yolo not supported, skipping ONNX node generation.
graph YOLOv3-608 (
  %000_net[FLOAT, 64x3x416x416]
) initializers (
  ...
  %105_convolutional, %105_convolutional_bn_scale, %105_convolutional_bn_bias, %105_convolutional_bn_mean, %105_convolutional_bn_var)
  %105_convolutional_lrelu = LeakyRelu[alpha = 0.1](...)
  %106_convolutional = Conv[auto_pad = 'SAME_LOWER', dilations = [1, 1], kernel_shape = [1, 1], strides = [1, 1]](...)
  return %082_convolutional, %094_convolutional, %106_convolutional
}
Traceback (most recent call last):
  File "yolov3_to_onnx.py", line 749, in <module>
    main()
  File "yolov3_to_onnx.py", line 741, in main
    onnx.checker.check_model(yolov3_model_def)
  File "/home/nvidia/.local/lib/python3.6/site-packages/onnx/checker.py", line 86, in check_model
    C.check_model(model.SerializeToString())
onnx.onnx_cpp2py_export.checker.ValidationError: Op registered for Upsample is depracted in domain_version of 10

==> Context: Bad node spec: input: "085_convolutional_lrelu" input: "086_upsample_scale" output: "086_upsample" name: "086_upsample" op_type: "Upsample" attribute { name: "mode" s: "nearest" type: STRING }
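For context, the node the checker rejects is a nearest-neighbour upsample (YOLOv3 uses it, with scale 2, before its route layers; the standalone Upsample op was deprecated at opset 10 in favour of Resize). A minimal pure-Python sketch of what that op computes on a single 2-D feature map, for illustration only (not the TensorRT or ONNX runtime implementation):

```python
def upsample_nearest(x, scale=2):
    """Nearest-neighbour upsample of a 2-D grid (list of lists of numbers).

    This is the per-channel operation behind the rejected 086_upsample node
    (op_type "Upsample", mode "nearest"): each value is repeated `scale`
    times along both height and width.
    """
    out = []
    for row in x:
        # Repeat each value `scale` times along the width...
        widened = [v for v in row for _ in range(scale)]
        # ...then repeat the whole row `scale` times along the height.
        for _ in range(scale):
            out.append(list(widened))  # copy so rows are independent
    return out
```

For example, `upsample_nearest([[1, 2], [3, 4]])` doubles a 2x2 map into a 4x4 map where every value occupies a 2x2 block.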

This question comes from the open-source project: Stephenfang51/tracklite


9 replies

  • weixin_39727934 · 5 months ago

    Context: Bad node spec: input: "085_convolutional_lrelu" input: "086_upsample_scale" output: "086_upsample" name: "086_upsample" op_type: "Upsample" attribute { name: "mode" s: "nearest" type: STRING }

    Dear Sir,

    What is your ONNX version?

    After googling the issue, the problem is your ONNX version; please try onnx==1.4.0 or 1.4.1.

    Thanks
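A small version-gate sketch of the advice above, in plain Python with no onnx dependency (the 1.4.x "known good" range is taken from this thread's reports, not from an official compatibility table; the helper name is an invention for illustration):

```python
def onnx_version_ok(version: str) -> bool:
    """Return True if `version` is in the 1.4.x range this thread reports
    working with the project's yolov3_to_onnx.py (onnx 1.5+ rejects the
    standalone Upsample op at its default opset).
    """
    parts = version.split(".")
    major, minor = int(parts[0]), int(parts[1])
    return (major, minor) == (1, 4)
```

A check like this could be called early (e.g. on `onnx.__version__` at the top of the conversion script) to fail with a clear message instead of the checker's ValidationError.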

  • weixin_39981681 · 5 months ago

    My ONNX version is 1.5. Can I use the "/usr/src/tensorrt/samples/python/yolov3_onnx" sample to convert .weights to .onnx and then to .trt, and then use the final .trt in this project?

  • weixin_39981681 · 5 months ago

    $ python3 run_tracker.py --usb
    Opening in BLOCKING MODE
    Loading weights from ./deep_sort/deep/checkpoint/ckpt.t7... Done!
    Reading engine from file ./weights/yolov3_int8.engine
    [TensorRT] ERROR: deserializationUtils.cpp (528) - Serialization Error in load: 0 (Serialized engine contains plugin, but no plugin factory was provided. To deserialize an engine without a factory, please use IPluginV2 instead.)
    [TensorRT] ERROR: INVALID_STATE: std::exception
    [TensorRT] ERROR: INVALID_CONFIG: Deserialize the cuda engine failed.
    Traceback (most recent call last):
      File "run_tracker.py", line 107, in <module>
        main()
      File "run_tracker.py", line 88, in main
        tracker = Tracker(cfg, args.engine_path)
      File "/home/nvidia/workspace/detect_tracking/tracklite/tracker/tracker.py", line 24, in __init__
        self.context = self.engine.create_execution_context()
    AttributeError: 'NoneType' object has no attribute 'create_execution_context'

    What is the problem?
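The final AttributeError here is a symptom, not the cause: `deserialize_cuda_engine` returns None when deserialization fails (as the earlier [TensorRT] ERROR lines show), and the project then calls `create_execution_context()` on that None. A guard sketch that surfaces the real failure, assuming a `tensorrt.Runtime` is passed in (the helper name and message are illustrative, not part of the project):

```python
def deserialize_engine(runtime, engine_path):
    """Deserialize a serialized TensorRT engine, failing loudly instead of
    returning None.

    `runtime` is expected to behave like tensorrt.Runtime, whose
    deserialize_cuda_engine returns None on failure -- for example, when the
    engine was built by a different TensorRT version or needs a plugin
    factory the current runtime does not provide.
    """
    with open(engine_path, "rb") as f:
        engine = runtime.deserialize_cuda_engine(f.read())
    if engine is None:
        raise RuntimeError(
            "Failed to deserialize engine '%s': it was likely built with a "
            "different TensorRT version or requires a plugin factory; "
            "rebuild the engine with the TensorRT version used at runtime."
            % engine_path
        )
    return engine
```

With a guard like this, the run would stop at a descriptive RuntimeError instead of the later 'NoneType' AttributeError inside Tracker.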

  • weixin_39727934 · 5 months ago

    My ONNX version is 1.5. Can I use the "/usr/src/tensorrt/samples/python/yolov3_onnx" sample to convert .weights to .onnx and then to .trt, and then use the final .trt in this project?

    I think it's inappropriate, since the project is based on lower versions of TensorRT and ONNX; that is why the error message shows: [TensorRT] ERROR: deserializationUtils.cpp (528) - Serialization Error in load: 0 (Serialized engine contains plugin, but no plugin factory was provided. To deserialize an engine without a factory, please use IPluginV2 instead.)

    Sorry for the low versions of TensorRT and ONNX; the project is based on the Jetson Nano.
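The underlying constraint in this exchange is that serialized TensorRT engines are only loadable by the same TensorRT version (and platform) that built them. One hypothetical mitigation is to record the builder's version next to the engine at build time and compare it before loading; the `.trtver` sidecar filename and the helper below are inventions for illustration, not part of tracklite or TensorRT:

```python
import os

def engine_matches_runtime(engine_path, runtime_version):
    """Return True if the TensorRT version recorded in a hypothetical
    '<engine>.trtver' sidecar (written when the engine was built) matches
    the runtime's version string.

    Serialized engines are not portable across TensorRT versions, so an
    unknown or mismatched build version means the engine should be rebuilt.
    """
    sidecar = engine_path + ".trtver"
    if not os.path.exists(sidecar):
        return False  # unknown provenance: treat as a mismatch
    with open(sidecar) as f:
        built_with = f.read().strip()
    return built_with == runtime_version
```

At runtime the check could compare against something like `tensorrt.__version__`, refusing to load (and pointing the user at rebuilding) instead of failing with the opaque deserialization error above.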

  • weixin_39981681 · 5 months ago

    I use a Xavier; my TensorRT version is 6 and my ONNX version is 1.5. Is it possible to use TensorRT 5 and ONNX 1.4? I can downgrade ONNX to 1.4.

  • weixin_39727934 · 5 months ago

    I use a Xavier; my TensorRT version is 6 and my ONNX version is 1.5. Is it possible to use TensorRT 5 and ONNX 1.4? I can downgrade ONNX to 1.4.

    Of course it can be; I was running this on TRT 5 and ONNX 1.4.

  • weixin_39981681 · 5 months ago

    I use TRT 6 and ONNX 1.4 on the Xavier, and it cannot work; the same errors occur.

    Do you have plans to update the project to higher TRT and ONNX versions?

  • weixin_39620334 · 5 months ago

    I'm getting the same issue too 😭, using onnx 1.4.1, and I've also tried 1.4.0. Has anyone found a fix for this? I'd love to try this out; it's really great to see Deep SORT and YOLO put together for the Jetson Nano!

  • weixin_40004057 · 5 months ago

    +1. I have the same problem during python3 run_tracker.py. My TensorRT version is 7.1.0-1. Note: I converted weights > onnx > engine using a different TensorRT version.

