I am working on a tool where I have 300 goroutines downloading a file from a public cloud. All of them download the file in parallel, block by block (using APIs the cloud provider supports). I first create a file of the given size and then memory-map it. I then read each response body directly into the memory-mapped byte slice using io.ReadFull. With this, memory usage eventually spikes to 100%.
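For reference, here is a minimal sketch of the setup described above; it assumes a Unix system, the standard syscall package for mmap, and hypothetical values for the output file name and total size. It only creates the file, sizes it, and maps it so workers can later write blocks directly into the returned slice.

```go
package main

import (
	"log"
	"os"
	"syscall"
)

func main() {
	const size = 1 << 30 // hypothetical total file size (1 GiB)

	// Create the destination file and grow it to the final size up front.
	f, err := os.Create("download.bin")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	if err := f.Truncate(size); err != nil {
		log.Fatal(err)
	}

	// Map the file read/write and shared, so writes into the slice
	// land in the page cache backing the file.
	data, err := syscall.Mmap(int(f.Fd()), 0, size,
		syscall.PROT_READ|syscall.PROT_WRITE, syscall.MAP_SHARED)
	if err != nil {
		log.Fatal(err)
	}
	defer syscall.Munmap(data)

	// Each of the 300 goroutines would be handed its own
	// non-overlapping sub-slice data[start:end] to fill.
	_ = data
}
```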
1 Answer
douzao5487 2018-10-08 06:37 — As far as I know, a copy works like allocating a new array and copying the elements into it, so memory usage can be double the size of the initial array. By the way, after reading data from the HTTP response body, you should close it, like:
defer resp.Body.Close()
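To illustrate that advice, here is a hedged sketch of a per-block download; downloadBlock, the URL, and the byte range are hypothetical, and dst is meant to be the sub-slice of the memory-mapped file for that block, so io.ReadFull writes straight into the mapping without an extra buffer.

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

// downloadBlock fetches bytes [start, end] of url and reads them straight
// into dst, which is expected to be a sub-slice of the memory-mapped file.
func downloadBlock(client *http.Client, url string, start, end int64, dst []byte) error {
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return err
	}
	req.Header.Set("Range", fmt.Sprintf("bytes=%d-%d", start, end))

	resp, err := client.Do(req)
	if err != nil {
		return err
	}
	// Always close the body so the connection can be reused and its
	// buffers released; forgetting this leaks memory under load.
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusPartialContent {
		return fmt.Errorf("unexpected status %s", resp.Status)
	}

	// Read directly into the mapped region; no intermediate buffer is allocated.
	_, err = io.ReadFull(resp.Body, dst)
	return err
}

func main() {
	// Hypothetical usage: fetch the first 4 MiB block into a plain buffer.
	buf := make([]byte, 4<<20)
	if err := downloadBlock(http.DefaultClient, "https://example.com/bigfile", 0, int64(len(buf))-1, buf); err != nil {
		log.Fatal(err)
	}
}
```

Deferring Close right after checking the error from Do guarantees the body is released even when the read fails, which matters when hundreds of goroutines hold responses at once.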