I'm writing an application that requires adding 5000-element float vectors many times a second. Is it possible to have the GPU perform the calculations, and how would that be done? I need it to run on both Windows and Linux (and later a Raspberry Pi), so CUDA is out of the question, as I don't have an Nvidia graphics card.
1 Answer
dongqing5575 · 2017-01-08 04:55

You can't directly talk to Nvidia GPUs from Go. You'd need to use cgo to call a C library from Go. See slide #8 in this presentation for one example (also see the full talk).
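A minimal sketch of that cgo pattern, assuming a hypothetical C function `vec_add` standing in for the GPU-backed library call (a real implementation would hand the buffers to OpenCL or similar rather than loop on the CPU):

```go
package main

/*
// Hypothetical stand-in for a GPU-backed library call. A real version
// would dispatch to OpenCL instead of looping on the CPU; only the
// calling convention matters for this sketch.
void vec_add(const float *a, const float *b, float *out, int n) {
    for (int i = 0; i < n; i++) {
        out[i] = a[i] + b[i];
    }
}
*/
import "C"

import (
	"fmt"
	"unsafe"
)

func main() {
	const n = 5000
	a := make([]float32, n)
	b := make([]float32, n)
	out := make([]float32, n)
	for i := range a {
		a[i] = float32(i)
		b[i] = float32(2 * i)
	}
	// cgo lets the C side read the Go-allocated arrays in place,
	// so no copy is needed for the call itself.
	C.vec_add(
		(*C.float)(unsafe.Pointer(&a[0])),
		(*C.float)(unsafe.Pointer(&b[0])),
		(*C.float)(unsafe.Pointer(&out[0])),
		C.int(n),
	)
	fmt.Println(out[:5]) // [0 3 6 9 12]
}
```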
There are some Go packages that wrap the cgo part I mentioned above into a Go library. mumax is one such package.
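Since you've ruled out CUDA, the C library on the other side of that cgo boundary would typically be OpenCL, which runs on AMD, Intel, and Nvidia hardware. For illustration only (this kernel is my own sketch, not taken from mumax), the device-side code for your problem is tiny; an OpenCL wrapper would compile this source at runtime and launch one work-item per vector element:

```go
// OpenCL kernel source held as a Go string, to be compiled at runtime
// by whatever OpenCL binding or C library you call through cgo.
// Each work-item adds one element, so a 5000-element add launches
// 5000 work-items.
const vecAddKernel = `
__kernel void vec_add(__global const float *a,
                      __global const float *b,
                      __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
`
```

One caveat worth checking before committing to this: at only 5000 elements, the cost of copying the vectors to and from the GPU can easily dominate the addition itself, so benchmark against a plain CPU loop first.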