I have a program that uses a buffer pool to reduce allocations in a few performance-sensitive sections of the code.
Something like this: play link
package main

import (
	"bytes"
	"io"
	"log"
)

func main() {
	// some file or any data source
	var r io.Reader = bytes.NewReader([]byte{1, 2, 3})
	// initialize slice to max expected capacity
	dat := make([]byte, 20)
	// read some data into it, then trim to the length actually read
	n, err := r.Read(dat)
	if err != nil {
		log.Fatal(err)
	}
	dat = dat[:n]
	// now I want to reuse it, so grow it back to full capacity:
	for len(dat) < cap(dat) {
		dat = append(dat, 0)
	}
	log.Println(len(dat))
	// add it to the free list for reuse later
	// bufferPool.Put(dat)
}
I always allocate fixed-length slices that are guaranteed to be larger than the maximum size needed. I have to trim the slice down to the actual data length to use the buffer, but I also need it back at its maximum size the next time I read into it.
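For context, the pool behind the commented-out `bufferPool.Put(dat)` above is roughly a `sync.Pool` of fixed-capacity byte slices. This is just a sketch of that setup (the name `bufferPool` and the size 20 are placeholders, not my real values):

```go
package main

import (
	"fmt"
	"sync"
)

// bufferPool hands out byte slices pre-allocated to the maximum
// expected size; 20 here is a stand-in for the real maximum.
var bufferPool = sync.Pool{
	New: func() interface{} {
		return make([]byte, 20)
	},
}

func main() {
	// take a buffer from the pool at full length
	buf := bufferPool.Get().([]byte)
	// simulate a short read, then trim to the data actually written
	n := copy(buf, []byte{1, 2, 3})
	buf = buf[:n]
	fmt.Println(len(buf), cap(buf)) // 3 20
	// return the (now short) buffer to the pool for reuse
	bufferPool.Put(buf)
}
```

The problem below is about getting such a trimmed buffer back to full length after it comes out of the pool again.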
The only way I know of to expand a slice is with append, so that is what I am using. The loop feels dirty, though, and potentially inefficient. My benchmarks show it isn't horrible, but I feel like there has to be a better way.
I know only a bit about the internal representation of slices, but if I could somehow override the length value without actually appending data, that would be really nice. I don't need the buffer zeroed out or anything.
Is there a better way to accomplish this?