I've been testing a simple web server written in Go using http_load. When I run the test for 1 second with 100 parallel connections, about 16k requests complete. However, running the test for 10 seconds results in a similar total number of requests, i.e. roughly 1/10th of the rate of the 1-second test.
Additionally, if I run several 1-second tests close together, the first test completes 16k requests and each subsequent test completes just 100-200 requests.
    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // 1 KiB response body filled with the byte 100 ('d').
        bytes := make([]byte, 1024)
        for i := range bytes {
            bytes[i] = 100
        }

        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            w.Write(bytes)
        })

        log.Fatal(http.ListenAndServe(":8000", nil))
    }
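For reference, here is a rough, self-contained Go sketch of the kind of load I'm generating (100 parallel clients for 1 second). It uses httptest as a stand-in for the real server on :8000 so it runs standalone; note that Go's default Transport reuses connections via keep-alive, so it is only an approximation of http_load's behavior, not an exact equivalent:

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "net/http/httptest"
        "sync"
        "sync/atomic"
        "time"
    )

    func main() {
        // Same 1 KiB payload as the server under test.
        payload := make([]byte, 1024)
        for i := range payload {
            payload[i] = 100
        }

        // httptest stands in for the real :8000 server so the sketch is self-contained.
        srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            w.Write(payload)
        }))
        defer srv.Close()

        var completed int64
        deadline := time.Now().Add(1 * time.Second)

        var wg sync.WaitGroup
        for i := 0; i < 100; i++ { // 100 parallel clients, mirroring the http_load run
            wg.Add(1)
            go func() {
                defer wg.Done()
                for time.Now().Before(deadline) {
                    resp, err := http.Get(srv.URL)
                    if err != nil {
                        continue
                    }
                    // Drain and close the body so the connection can be reused.
                    io.Copy(io.Discard, resp.Body)
                    resp.Body.Close()
                    atomic.AddInt64(&completed, 1)
                }
            }()
        }
        wg.Wait()

        fmt.Println("requests completed in 1s:", atomic.LoadInt64(&completed))
    }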
I'm wondering whether there is any reason why performance would hit a ceiling at this request rate, and whether there is something I have missed in the implementation of the web server above.