Hello, I saw that you have published posts on how to use the mmdeploy project, so I would like to ask about a problem I ran into while converting an mmdet3d model.
Problem description
Model checkpoint: centerpoint_0075voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus_20220810_025930-657f67e0.pth
Model config: centerpoint_voxel0075_second_secfpn_head-dcn-circlenms_8xb4-cyclic-20e_nus-3d.py
Problem: when I use mmdeploy to convert this checkpoint to an ONNX file with the onnxruntime-gpu inference backend, the export keeps failing, and I don't understand why `RuntimeError: get_indice_pairs is not implemented on CPU` occurs when I pass `--device cuda`. Converting the official example model centerpoint_pillar02_second_secfpn_head-circlenms_8xb4-cyclic-20e_nus-3d with its config works fine, which suggests my environment is basically OK.
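The error message suggests that mmcv's sparse ops dispatch on the device of the voxel-coordinate tensor, so the op fails if the traced inputs stayed on the CPU even though `--device cuda` was passed. A minimal diagnostic sketch of that idea (the `ensure_cuda` helper below is hypothetical, not an mmdeploy or mmcv API):

```python
import torch

def ensure_cuda(coors: torch.Tensor) -> torch.Tensor:
    """Move voxel coordinates to the GPU, failing loudly if no GPU exists.

    Hypothetical helper: mmcv's get_indice_pairs raises
    "not implemented on CPU" when it receives CPU tensors,
    so checking placement before the sparse encoder runs can
    localize where the device mismatch happens.
    """
    if coors.is_cuda:
        return coors
    if torch.cuda.is_available():
        return coors.cuda()
    raise RuntimeError("sparse ops need CUDA tensors, but no GPU is visible")

# Fake voxel coordinates shaped like mmdet3d's `coors` argument (N, 4),
# created on the CPU the way a default data loader would.
coors = torch.zeros(8, 4, dtype=torch.int32)
```

On a CUDA machine the helper silently moves the tensor; on a CPU-only machine it raises, mirroring the failure mode in the log below.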
# Error log
(mmdeploy) archerr@archerr-MS-7D17:~/mmdeploy-1.3.1$ python3 tools/deploy.py configs/mmdet3d/voxel-detection/voxel-detection_onnxruntime_dynamic.py $MODEL_CONFIG $MODEL_PATH $TEST_DATA --work-dir centerpoint --device cuda
02/26 22:32:22 - mmengine - INFO - Start pipeline mmdeploy.apis.pytorch2onnx.torch2onnx in subprocess
/home/archerr/mmdetection3d-main/mmdet3d/evaluation/functional/kitti_utils/eval.py:10: NumbaDeprecationWarning: The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details.
def get_thresholds(scores: np.ndarray, num_gt, num_sample_pts=41):
02/26 22:32:23 - mmengine - WARNING - Failed to search registry with scope "mmdet3d" in the "Codebases" registry tree. As a workaround, the current "Codebases" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmdet3d" is a correct scope, or whether the registry is initialized.
02/26 22:32:23 - mmengine - WARNING - Failed to search registry with scope "mmdet3d" in the "mmdet3d_tasks" registry tree. As a workaround, the current "mmdet3d_tasks" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmdet3d" is a correct scope, or whether the registry is initialized.
Loads checkpoint by local backend from path: centerpoint_0075voxel_second_secfpn_dcn_circlenms_4x8_cyclic_20e_nus_20220810_025930-657f67e0.pth
02/26 22:32:26 - mmengine - INFO - DeformConv2dPack pts_bbox_head.task_heads.0.feature_adapt_cls is upgraded to version 2.
02/26 22:32:26 - mmengine - INFO - DeformConv2dPack pts_bbox_head.task_heads.0.feature_adapt_reg is upgraded to version 2.
02/26 22:32:26 - mmengine - INFO - DeformConv2dPack pts_bbox_head.task_heads.1.feature_adapt_cls is upgraded to version 2.
02/26 22:32:26 - mmengine - INFO - DeformConv2dPack pts_bbox_head.task_heads.1.feature_adapt_reg is upgraded to version 2.
02/26 22:32:26 - mmengine - INFO - DeformConv2dPack pts_bbox_head.task_heads.2.feature_adapt_cls is upgraded to version 2.
02/26 22:32:26 - mmengine - INFO - DeformConv2dPack pts_bbox_head.task_heads.2.feature_adapt_reg is upgraded to version 2.
02/26 22:32:26 - mmengine - INFO - DeformConv2dPack pts_bbox_head.task_heads.3.feature_adapt_cls is upgraded to version 2.
02/26 22:32:26 - mmengine - INFO - DeformConv2dPack pts_bbox_head.task_heads.3.feature_adapt_reg is upgraded to version 2.
02/26 22:32:26 - mmengine - INFO - DeformConv2dPack pts_bbox_head.task_heads.4.feature_adapt_cls is upgraded to version 2.
02/26 22:32:26 - mmengine - INFO - DeformConv2dPack pts_bbox_head.task_heads.4.feature_adapt_reg is upgraded to version 2.
02/26 22:32:26 - mmengine - INFO - DeformConv2dPack pts_bbox_head.task_heads.5.feature_adapt_cls is upgraded to version 2.
02/26 22:32:26 - mmengine - INFO - DeformConv2dPack pts_bbox_head.task_heads.5.feature_adapt_reg is upgraded to version 2.
02/26 22:32:26 - mmengine - WARNING - DeprecationWarning: get_onnx_config will be deprecated in the future.
02/26 22:32:26 - mmengine - INFO - Export PyTorch model to ONNX: centerpoint/end2end.onnx.
02/26 22:32:26 - mmengine - WARNING - Can not find torch.nn.functional.scaled_dot_product_attention, function rewrite will not be applied
02/26 22:32:26 - mmengine - WARNING - Can not find torch._C._jit_pass_onnx_autograd_function_process, function rewrite will not be applied
/home/archerr/.conda/envs/mmdeploy/lib/python3.8/site-packages/mmcv/ops/sparse_ops.py:91: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if ndim == 2:
/home/archerr/.conda/envs/mmdeploy/lib/python3.8/site-packages/mmcv/ops/sparse_ops.py:93: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
elif ndim == 3:
Process Process-2:
Traceback (most recent call last):
File "/home/archerr/.conda/envs/mmdeploy/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/home/archerr/.conda/envs/mmdeploy/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/archerr/mmdeploy-1.3.1/mmdeploy/apis/core/pipeline_manager.py", line 107, in __call__
ret = func(*args, **kwargs)
File "/home/archerr/mmdeploy-1.3.1/mmdeploy/apis/pytorch2onnx.py", line 98, in torch2onnx
export(
File "/home/archerr/mmdeploy-1.3.1/mmdeploy/apis/core/pipeline_manager.py", line 356, in wrap
return self.call_function(func_name, *args, **kwargs)
File "/home/archerr/mmdeploy-1.3.1/mmdeploy/apis/core/pipeline_manager.py", line 326, in call_function
return self.call_function_local(func_name, *args, **kwargs)
File "/home/archerr/mmdeploy-1.3.1/mmdeploy/apis/core/pipeline_manager.py", line 275, in call_function_local
return pipe_caller(*args, **kwargs)
File "/home/archerr/mmdeploy-1.3.1/mmdeploy/apis/core/pipeline_manager.py", line 107, in __call__
ret = func(*args, **kwargs)
File "/home/archerr/mmdeploy-1.3.1/mmdeploy/apis/onnx/export.py", line 138, in export
torch.onnx.export(
File "/home/archerr/.conda/envs/mmdeploy/lib/python3.8/site-packages/torch/onnx/__init__.py", line 316, in export
return utils.export(model, args, f, export_params, verbose, training,
File "/home/archerr/.conda/envs/mmdeploy/lib/python3.8/site-packages/torch/onnx/utils.py", line 107, in export
_export(model, args, f, export_params, verbose, training, input_names, output_names,
File "/home/archerr/.conda/envs/mmdeploy/lib/python3.8/site-packages/torch/onnx/utils.py", line 724, in _export
_model_to_graph(model, args, verbose, input_names,
File "/home/archerr/mmdeploy-1.3.1/mmdeploy/apis/onnx/optimizer.py", line 27, in model_to_graph__custom_optimizer
graph, params_dict, torch_out = ctx.origin_func(*args, **kwargs)
File "/home/archerr/.conda/envs/mmdeploy/lib/python3.8/site-packages/torch/onnx/utils.py", line 493, in _model_to_graph
graph, params, torch_out, module = _create_jit_graph(model, args)
File "/home/archerr/.conda/envs/mmdeploy/lib/python3.8/site-packages/torch/onnx/utils.py", line 437, in _create_jit_graph
graph, torch_out = _trace_and_get_graph_from_model(model, args)
File "/home/archerr/.conda/envs/mmdeploy/lib/python3.8/site-packages/torch/onnx/utils.py", line 388, in _trace_and_get_graph_from_model
torch.jit._get_trace_graph(model, args, strict=False, _force_outplace=False, _return_inputs_states=True)
File "/home/archerr/.conda/envs/mmdeploy/lib/python3.8/site-packages/torch/jit/_trace.py", line 1166, in _get_trace_graph
outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
File "/home/archerr/.conda/envs/mmdeploy/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/archerr/.conda/envs/mmdeploy/lib/python3.8/site-packages/torch/jit/_trace.py", line 127, in forward
graph, out = torch._C._create_graph_by_tracing(
File "/home/archerr/.conda/envs/mmdeploy/lib/python3.8/site-packages/torch/jit/_trace.py", line 118, in wrapper
outs.append(self.inner(*trace_inputs))
File "/home/archerr/.conda/envs/mmdeploy/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/archerr/.conda/envs/mmdeploy/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1090, in _slow_forward
result = self.forward(*input, **kwargs)
File "/home/archerr/mmdeploy-1.3.1/mmdeploy/apis/onnx/export.py", line 123, in wrapper
return forward(*arg, **kwargs)
File "/home/archerr/mmdeploy-1.3.1/mmdeploy/codebase/mmdet3d/models/mvx_two_stage.py", line 83, in mvxtwostagedetector__forward
_, pts_feats = self.extract_feat(batch_inputs_dict=batch_inputs_dict)
File "/home/archerr/mmdeploy-1.3.1/mmdeploy/codebase/mmdet3d/models/mvx_two_stage.py", line 42, in mvxtwostagedetector__extract_feat
pts_feats = self.extract_pts_feat(
File "/home/archerr/mmdetection3d-main/mmdet3d/models/detectors/mvx_two_stage.py", line 214, in extract_pts_feat
x = self.pts_middle_encoder(voxel_features, voxel_dict['coors'],
File "/home/archerr/.conda/envs/mmdeploy/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/archerr/.conda/envs/mmdeploy/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1090, in _slow_forward
result = self.forward(*input, **kwargs)
File "/home/archerr/.conda/envs/mmdeploy/lib/python3.8/contextlib.py", line 75, in inner
return func(*args, **kwds)
File "/home/archerr/mmdetection3d-main/mmdet3d/models/middle_encoders/sparse_encoder.py", line 145, in forward
x = self.conv_input(input_sp_tensor)
File "/home/archerr/.conda/envs/mmdeploy/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/archerr/.conda/envs/mmdeploy/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1090, in _slow_forward
result = self.forward(*input, **kwargs)
File "/home/archerr/.conda/envs/mmdeploy/lib/python3.8/site-packages/mmcv/ops/sparse_modules.py", line 135, in forward
input = module(input)
File "/home/archerr/.conda/envs/mmdeploy/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/archerr/.conda/envs/mmdeploy/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1090, in _slow_forward
result = self.forward(*input, **kwargs)
File "/home/archerr/.conda/envs/mmdeploy/lib/python3.8/site-packages/mmcv/ops/sparse_conv.py", line 157, in forward
outids, indice_pairs, indice_pair_num = ops.get_indice_pairs(
File "/home/archerr/.conda/envs/mmdeploy/lib/python3.8/site-packages/mmcv/ops/sparse_ops.py", line 99, in get_indice_pairs
return get_indice_pairs_func(indices, batch_size, out_shape,
RuntimeError: get_indice_pairs is not implemented on CPU
02/26 22:32:27 - mmengine - ERROR - /home/archerr/mmdeploy-1.3.1/mmdeploy/apis/core/pipeline_manager.py - pop_mp_output - 80 - mmdeploy.apis.pytorch2onnx.torch2onnx
with Call id: 0 failed. exit.