torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 176.00 MiB. GPU 0 has a total capacity of 8.00 GiB of which 0 bytes is free. Of the allocated memory 12.71 GiB is allocated by PyTorch, and 1.79 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
How do I fix this problem?
CUDA out of memory
1 answer
Levin(LLM,NLP,CV) 2024-12-09 11:49
First, let's work out what the error message actually says:
1.
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 176.00 MiB. GPU 0 has a total capacity of 8.00 GiB of which 0 bytes is free.
In other words: GPU 0 has 8.00 GiB of memory in total, none of it is currently free, so the attempted 176.00 MiB allocation fails.
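You can check these numbers yourself right before the failing allocation. A minimal sketch, assuming a CUDA-capable GPU and a PyTorch version that exposes `torch.cuda.mem_get_info`:

```python
import torch

# Ask the driver how much device memory is free vs. the card's total (bytes).
free_bytes, total_bytes = torch.cuda.mem_get_info()
print(f"free:  {free_bytes / 1024**3:.2f} GiB")
print(f"total: {total_bytes / 1024**3:.2f} GiB")
```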
2.
Of the allocated memory 12.71 GiB is allocated by PyTorch, and 1.79 GiB is reserved by PyTorch but unallocated.
Of the memory already taken, 12.71 GiB is actually allocated by PyTorch (held by live tensors), and a further 1.79 GiB is reserved by PyTorch's caching allocator but not currently assigned to any tensor.
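The gap between those two numbers is what the next point calls fragmentation. You can watch both values in your own process with PyTorch's standard allocator statistics; a minimal sketch:

```python
import torch

# "Allocated" = bytes currently held by live tensors.
# "Reserved"  = bytes the caching allocator has taken from the driver,
#               including cached blocks not handed out to any tensor.
allocated = torch.cuda.memory_allocated()
reserved = torch.cuda.memory_reserved()
print(f"allocated: {allocated / 1024**3:.2f} GiB")
print(f"reserved:  {reserved / 1024**3:.2f} GiB")

# Detailed breakdown of the allocator's pools, handy when hunting fragmentation.
print(torch.cuda.memory_summary())
```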
3.
If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.
If the reserved-but-unallocated memory is large, try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid memory fragmentation.
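One way to apply that setting is to export the variable in your shell before launching Python. A sketch of doing it from inside the script instead, assuming you set it before any CUDA work happens:

```python
import os

# Must be in the environment before the CUDA caching allocator starts up,
# so set it at the very top of the script, before importing torch.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

import torch

# Allocations made from here on use expandable segments, which can reduce
# fragmentation when tensor sizes vary a lot between iterations.
x = torch.randn(1024, 1024, device="cuda")
```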