Thomas_Cai 2023-12-13 14:15

Batch size failure with TensorRT 8.0.1.6 when running the trtexec tool on NVIDIA Jetson Xavier NX

Description

Error info (when I run /usr/src/tensorrt/bin/trtexec --loadEngine=XXX.engine --batch=40 --warmUp=200 --iterations=2000):

Error[3]: [executionContext.cpp::enqueue::276] Error Code 3: Internal Error (Parameter check failed at: runtime/api/executionContext.cpp::enqueue::276, condition: batchSize > 0 && batchSize <= mEngine.getMaxBatchSize(). Note: Batch size was: 40, but engine max batch size was: 1
)
[12/12/2023-21:35:03] [E] Error occurred during inference

Here is the script I used to convert the engine, following https://docs.nvidia.com/deeplearning/tensorrt/archives/tensorrt-801/quick-start-guide/index.html:

/usr/src/tensorrt/bin/trtexec --onnx=${1} --saveEngine=${2}  --buildOnly --workspace=10240 --explicitBatch --minShapes=input:1x3x224x224 --optShapes=input:40x3x224x224 --maxShapes=input:40x3x224x224 --fp16

Note: my Jetson is pinned to this software version, so I need a solution that works with this version. I look forward to your reply, thank you.

Environment

TensorRT Version: 8.0.1.6

NVIDIA GPU: NVIDIA Jetson Xavier NX

CUDA Version: 10.2

CUDNN Version: 8.2.1.32

Operating System: Ubuntu 18.04

JetPack: 4.6-b199


1 Answer

  • 小孟多 2023-12-13 14:21

    Problem:

    When running the trtexec tool from TensorRT 8.0.1.6 on NVIDIA Jetson Xavier NX with --batch=40, inference fails with "Parameter check failed at: runtime/api/executionContext.cpp::enqueue::276, condition: batchSize > 0 && batchSize <= mEngine.getMaxBatchSize(). Note: Batch size was: 40, but engine max batch size was: 1".

    Solution:

    The engine was built with --explicitBatch and a dynamic-shape optimization profile (--minShapes/--optShapes/--maxShapes), so it has no implicit batch dimension and getMaxBatchSize() reports 1. The legacy --batch option only applies to implicit-batch engines, which is why the parameter check fails. Instead of --batch=40, specify the runtime input shape with --shapes; the shape must fall within the profile the engine was built with (here, batch 1 to 40). If a fixed batch size is acceptable, you can also rebuild the engine with --minShapes, --optShapes and --maxShapes all set to 40x3x224x224, or adjust the model's input shape so the engine supports the batch size you need.
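
    A minimal sketch of the corrected benchmark invocation, assuming the input tensor is named input (as in the build command above) and XXX.engine is the placeholder engine path:

    /usr/src/tensorrt/bin/trtexec --loadEngine=XXX.engine --shapes=input:40x3x224x224 --warmUp=200 --iterations=2000

    And, if a dedicated batch-40 engine is preferred, a possible rebuild command that pins all three profile shapes to the same value:

    /usr/src/tensorrt/bin/trtexec --onnx=${1} --saveEngine=${2} --buildOnly --workspace=10240 --explicitBatch --minShapes=input:40x3x224x224 --optShapes=input:40x3x224x224 --maxShapes=input:40x3x224x224 --fp16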


