I followed the code for accessing a CUDA tensor from https://blog.csdn.net/weixin_43742643/article/details/116307036
and from the official docs at https://pytorch.org/cppdocs/notes/tensor_basics.html, but I still can't get it to run:
__global__ void packed_accessor_kernel(
    PackedTensorAccessor64<float, 2> foo,
    float* trace) {
  int i = threadIdx.x;
  gpuAtomicAdd(trace, foo[i][i]);
}

torch::Tensor foo = torch::rand({12, 12});
// assert foo is 2-dimensional and holds floats.
auto foo_a = foo.packed_accessor64<float, 2>();
float trace = 0;
packed_accessor_kernel<<<1, 12>>>(foo_a, &trace);
Has anyone managed to get this snippet running? If so, could you please share a working example? Thanks!
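In case it helps anyone reply, here is the direction I suspect the fix goes in. This is only a minimal sketch, assuming two things are missing from the snippet above: the tensor has to be allocated on the GPU (otherwise the accessor hands the kernel a host pointer), and trace has to point to device memory rather than a host stack variable. I also spelled out the torch:: namespace on the accessor type and swapped gpuAtomicAdd for the plain CUDA atomicAdd so the sketch does not depend on any extra ATen atomic header. I have not verified this end to end, so treat it as an assumption, not a confirmed answer:

#include <torch/torch.h>
#include <cuda_runtime.h>
#include <iostream>

// Same kernel as above, but with the torch:: namespace spelled out and the
// built-in CUDA atomicAdd for float instead of gpuAtomicAdd.
__global__ void packed_accessor_kernel(
    torch::PackedTensorAccessor64<float, 2> foo,
    float* trace) {
  int i = threadIdx.x;
  atomicAdd(trace, foo[i][i]);
}

int main() {
  // The tensor has to live on the GPU so the accessor's data pointer is valid in the kernel.
  torch::Tensor foo = torch::rand({12, 12}, torch::kCUDA);
  auto foo_a = foo.packed_accessor64<float, 2>();

  // trace also has to be device memory; a 1-element CUDA tensor is a convenient way to get it.
  torch::Tensor trace = torch::zeros({1}, torch::kCUDA);

  packed_accessor_kernel<<<1, 12>>>(foo_a, trace.data_ptr<float>());
  cudaDeviceSynchronize();

  // Compare the kernel result against the trace computed by libtorch itself.
  std::cout << "kernel trace   = " << trace.item<float>() << std::endl;
  std::cout << "expected trace = " << foo.trace().item<float>() << std::endl;
  return 0;
}

The file has to be compiled with nvcc (it contains a kernel) and linked against libtorch. If someone has this working, or sees something still wrong in the sketch, please correct me.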