I have written a function that uses http.Get() to fetch files from an Nginx server (which, just for testing, runs on the same host as the fetcher). The code looks like this:
res, err := http.Get(addr) // addr is the file's URL on the Nginx server
if err != nil {
    return err
}
defer res.Body.Close()

v := &vFile{path, 0}                     // path is the local file path to write
bw := bufio.NewWriterSize(v, 1024*1024)  // buffer 1MB per write
if _, err := io.Copy(bw, res.Body); err == nil {
    err = bw.Flush()
}
with vFile and its Write method defined as:
type vFile struct {
    path string
    cur  int64 // current write offset into the file
}

func (wtr *vFile) Write(buf []byte) (int, error) {
    var f *os.File
    var err error
    if wtr.cur == 0 {
        f, err = os.Create(wtr.path)
    } else {
        f, err = os.OpenFile(wtr.path, os.O_RDWR|os.O_APPEND, 0666)
    }
    if err != nil {
        return 0, err
    }
    defer f.Close()
    n, err := f.WriteAt(buf, wtr.cur)
    wtr.cur += int64(n)
    return n, err
}
However, in high-concurrency situations (for example, with 500 concurrent fetches), a large number of files are not fully fetched. In the Nginx log the HTTP response is 200, but the logged length does not match the file:
"GET /videos/4b42d6e8e138233c7eb62939.mp4 HTTP/1.1" 200 37863424 "-" "Go 1.1 package http" "-"
The file has a size of 75273523 bytes, but only 37863424 bytes were fetched. If I reduce the buffer size from 1MB to 32KB the situation gets much better, but a few files are still incomplete. So what might be wrong with the code?