智晨爱代码 2023-07-22 10:06 · Acceptance rate: 71.4%
Resolved

Installed yolov8 following the steps, but it still reports an error

(ultralytics) PS F:\yolov8\ultralytics> yolo task=detect mode=predict model=yolov8n.pt conf=0.25 source='ultralytics/assets/bus.jpg'        
Ultralytics YOLOv8.0.139  Python-3.9.17 torch-2.0.1 CUDA:0 (NVIDIA GeForce GTX 1660 Ti with Max-Q Design, 6144MiB)
YOLOv8n summary (fused): 168 layers, 3151904 parameters, 0 gradients

Traceback (most recent call last):
  File "D:\Anaconda3\envs\ultralytics\lib\runpy.py", line 197, in _run_module_as_main                             
    return _run_code(code, main_globals, None,                                                                    
  File "D:\Anaconda3\envs\ultralytics\lib\runpy.py", line 87, in _run_code                                        
    exec(code, run_globals)                                                                                       
  File "D:\Anaconda3\envs\ultralytics\Scripts\yolo.exe\__main__.py", line 7, in <module>                          
  File "D:\Anaconda3\envs\ultralytics\lib\site-packages\ultralytics\cfg\__init__.py", line 410, in entrypoint     
    getattr(model, mode)(**overrides)  # default args from model                                                  
  File "D:\Anaconda3\envs\ultralytics\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)                                                                                  
  File "D:\Anaconda3\envs\ultralytics\lib\site-packages\ultralytics\engine\model.py", line 254, in predict        
    return self.predictor.predict_cli(source=source) if is_cli else self.predictor(source=source, stream=stream)  
  File "D:\Anaconda3\envs\ultralytics\lib\site-packages\ultralytics\engine\predictor.py", line 200, in predict_cli
    for _ in gen:  # running CLI inference without accumulating any outputs (do not modify)                       
  File "D:\Anaconda3\envs\ultralytics\lib\site-packages\torch\utils\_contextlib.py", line 35, in generator_context
    response = gen.send(None)
  File "D:\Anaconda3\envs\ultralytics\lib\site-packages\ultralytics\engine\predictor.py", line 255, in stream_inference
    self.results = self.postprocess(preds, im, im0s)
  File "D:\Anaconda3\envs\ultralytics\lib\site-packages\ultralytics\models\yolo\detect\predict.py", line 14, in postprocess
    preds = ops.non_max_suppression(preds,
  File "D:\Anaconda3\envs\ultralytics\lib\site-packages\ultralytics\utils\ops.py", line 261, in non_max_suppression
    i = torchvision.ops.nms(boxes, scores, iou_thres)  # NMS
  File "D:\Anaconda3\envs\ultralytics\lib\site-packages\torchvision\ops\boxes.py", line 41, in nms
    return torch.ops.torchvision.nms(boxes, scores, iou_threshold)
  File "D:\Anaconda3\envs\ultralytics\lib\site-packages\torch\_ops.py", line 502, in __call__
    return self._op(*args, **kwargs or {})
NotImplementedError: Could not run 'torchvision::nms' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'torchvision::nms' is only available for these backends: [CPU, QuantizedCPU, BackendSelect, Python, FuncTorchDynamicLayerBackMode, Functionalize, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradMPS, AutogradXPU, AutogradHPU, AutogradLazy, AutogradMeta, Tracer, AutocastCPU, AutocastCUDA, FuncTorchBatched, FuncTorchVmapMode, Batched, VmapMode, FuncTorchGradWrapper, PythonTLSSnapshot, FuncTorchDynamicLayerFrontMode, PythonDispatcher].

CPU: registered at C:\b\abs_61prww4bv9\croot\torchvision_1689079992237\work\torchvision\csrc\ops\cpu\nms_kernel.cpp:112 [kernel]
QuantizedCPU: registered at C:\b\abs_61prww4bv9\croot\torchvision_1689079992237\work\torchvision\csrc\ops\quantized\cpu\qnms_kernel.cpp:124 [kernel]
BackendSelect: fallthrough registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\core\BackendSelectFallbackKernel.cpp:3 [backend fallback]
Python: registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\core\PythonFallbackKernel.cpp:144 [backend fallback]
FuncTorchDynamicLayerBackMode: registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\functorch\DynamicLayer.cpp:491 [backend fallback]
Functionalize: registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\FunctionalizeFallbackKernel.cpp:280 [backend fallback]
Named: registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\core\NamedRegistrations.cpp:7 [backend fallback]
Conjugate: registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\ConjugateFallback.cpp:17 [backend fallback]
Negative: registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\native\NegateFallback.cpp:19 [backend fallback]
ZeroTensor: registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\ZeroTensorFallback.cpp:86 [backend fallback]
ADInplaceOrView: fallthrough registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\core\VariableFallbackKernel.cpp:63 [backend fallback]
AutogradOther: fallthrough registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\core\VariableFallbackKernel.cpp:30 [backend fallback]
AutogradCPU: fallthrough registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\core\VariableFallbackKernel.cpp:34 [backend fallback]
AutogradCUDA: fallthrough registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\core\VariableFallbackKernel.cpp:42 [backend fallback]
AutogradXLA: fallthrough registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\core\VariableFallbackKernel.cpp:46 [backend fallback]
AutogradMPS: fallthrough registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\core\VariableFallbackKernel.cpp:54 [backend fallback]
AutogradXPU: fallthrough registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\core\VariableFallbackKernel.cpp:38 [backend fallback]
AutogradHPU: fallthrough registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\core\VariableFallbackKernel.cpp:67 [backend fallback]
AutogradLazy: fallthrough registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\core\VariableFallbackKernel.cpp:50 [backend fallback]
AutogradMeta: fallthrough registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\core\VariableFallbackKernel.cpp:58 [backend fallback]
Tracer: registered at C:\cb\pytorch_1000000000000\work\torch\csrc\autograd\TraceTypeManual.cpp:294 [backend fallback]
AutocastCPU: fallthrough registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\autocast_mode.cpp:487 [backend fallback]
AutocastCUDA: fallthrough registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\autocast_mode.cpp:354 [backend fallback]
FuncTorchBatched: registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\functorch\LegacyBatchingRegistrations.cpp:815 [backend fallback]
FuncTorchVmapMode: fallthrough registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\functorch\VmapModeRegistrations.cpp:28 [backend fallback]
Batched: registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\LegacyBatchingRegistrations.cpp:1073 [backend fallback]
VmapMode: fallthrough registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\VmapModeRegistrations.cpp:33 [backend fallback]
FuncTorchGradWrapper: registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\functorch\TensorWrapper.cpp:210 [backend fallback]
PythonTLSSnapshot: registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\core\PythonFallbackKernel.cpp:152 [backend fallback]
FuncTorchDynamicLayerFrontMode: registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\functorch\DynamicLayer.cpp:487 [backend fallback]
PythonDispatcher: registered at C:\cb\pytorch_1000000000000\work\aten\src\ATen\core\PythonFallbackKernel.cpp:148 [backend fallback]


Any help would be greatly appreciated.

2 answers

  • 网创学长 (official account of 上海途途珺文化传媒有限公司) 2023-07-22 10:09

    The error says that the 'torchvision::nms' operator could not be run on the CUDA backend, either because the operator does not exist for that backend or because it was omitted during a selective/custom build. Note the registration list at the end of the traceback: torchvision registers nms kernels only for CPU and QuantizedCPU, which typically means a CPU-only torchvision build is installed alongside a CUDA-enabled torch. Your NVIDIA GeForce GTX 1660 Ti with Max-Q Design is detected by torch, but torchvision has no CUDA kernel to dispatch to.
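
    As a quick check (a minimal sketch; the exact version strings on your machine will differ), you can verify from Python whether both packages were built with CUDA support and reproduce the failure in isolation:

    import torch
    import torchvision

    print(torch.__version__)          # a '+cpu' suffix (pip) or a CPU build string (conda) means no CUDA kernels
    print(torchvision.__version__)
    print(torch.cuda.is_available())  # True only proves torch itself sees the GPU, not torchvision
    print(torch.version.cuda)         # CUDA version torch was compiled against, or None for CPU-only builds

    # The failing call on its own: this raises the same NotImplementedError
    # whenever torchvision was built without CUDA kernels.
    boxes = torch.tensor([[0., 0., 10., 10.]], device='cuda')
    scores = torch.tensor([1.0], device='cuda')
    print(torchvision.ops.nms(boxes, scores, 0.5))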

    One way to solve this is to reinstall PyTorch and torchvision as a matched pair. Make sure both packages come from the same source and are built against the same CUDA version, rather than mixing a CUDA torch with a CPU-only torchvision.
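
    For example (illustrative commands; pick the index URL matching your CUDA version from https://pytorch.org/get-started/locally/ — cu118 below assumes a driver that supports CUDA 11.8), inside the ultralytics environment:

    pip uninstall -y torch torchvision
    pip install torch==2.0.1 torchvision==0.15.2 --index-url https://download.pytorch.org/whl/cu118

    torchvision 0.15.2 is the release paired with torch 2.0.1; installing unpaired versions is another common cause of this error.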

    Alternatively, you can run the model on the CPU instead of the GPU; the missing kernel is CUDA-specific, so the CPU path still works. With the ultralytics CLI you do this by passing device=cpu rather than editing any code, as shown below.
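
    For example, the original command with the device override appended:

    yolo task=detect mode=predict model=yolov8n.pt conf=0.25 source='ultralytics/assets/bus.jpg' device=cpu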

    If none of the above helps, double-check that yolov8 (the ultralytics package) and all of its dependencies are installed correctly, and consult the yolov8 documentation or community channels for further help.
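
    If your installed 8.0.x release supports it, ultralytics also ships a built-in environment self-check that prints the detected OS/Python/torch/CUDA setup, which is worth attaching when asking for help:

    yolo checks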

    Accepted by the asker as the best answer.

Question timeline

  • Closed by the system on Jul 31
  • Answer accepted on Jul 23
  • Question created on Jul 22
