RuntimeError: CUDA out of memory. Tried to allocate 446.00 MiB (GPU 0; 2.00 GiB total capacity; 1.34 GiB already allocated; 0 bytes free; 1.35 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
I've already added `with torch.no_grad():` but the error still occurs. Do I really have to set up a server to be able to run this?
Is there any other way to fix it? A simplified sketch of my setup is below.
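A simplified sketch of the kind of code involved (the model, batch size, and the `max_split_size_mb` value are placeholders, not the actual project code):

```python
import os

# The end of the traceback suggests tuning max_split_size_mb; it must be set
# before the first CUDA allocation. 128 is just a value to try, not a documented recommendation.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch
import torch.nn as nn

# Placeholder model and batch -- stand-ins for the real network and data.
model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10))
inputs = torch.randn(8, 1024)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device).eval()   # eval() so dropout/batchnorm run in inference mode

with torch.no_grad():             # skips the autograd graph, but weights and activations still occupy GPU memory
    outputs = model(inputs.to(device))

print(outputs.shape)
```

Note that even inside `torch.no_grad()`, the model weights and the forward activations still have to fit on the 2 GiB card; `no_grad` only avoids storing gradients and the autograd graph, which is presumably why the error persists here.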