I have read that net/http starts a goroutine for each connection, but I haven't seen any parameter to limit the number of goroutines it spawns. For example, if I have to handle 1 million concurrent requests per second, what will happen? Do we have any control over the spawned goroutines? If the server spawns one goroutine per connection, won't that choke my entire system? What is the recommended way of handling a huge number of concurrent requests in a Go web server? I have to handle both synchronous and asynchronous responses.
- dongzipu7517, answered 2016-06-07 23:46
The Job/Worker pattern is a common Go concurrency pattern well suited to this task.
Multiple goroutines can read from a single channel, distributing the work across CPU cores, hence the name workers. In Go this pattern is easy to implement: start a number of goroutines that take the channel as a parameter, then send values to that channel; the distribution and multiplexing are done by the Go runtime.
package main

import (
    "fmt"
    "sync"
    "time"
)

// worker drains tasksCh until the channel is closed, then signals
// the WaitGroup that it has finished.
func worker(tasksCh <-chan int, wg *sync.WaitGroup) {
    defer wg.Done()
    for {
        task, ok := <-tasksCh
        if !ok {
            return
        }
        d := time.Duration(task) * time.Millisecond
        time.Sleep(d) // simulate work proportional to the task value
        fmt.Println("processing task", task)
    }
}

// pool starts the given number of workers, feeds them the tasks,
// and closes the channel so the workers exit.
func pool(wg *sync.WaitGroup, workers, tasks int) {
    tasksCh := make(chan int)
    for i := 0; i < workers; i++ {
        go worker(tasksCh, wg)
    }
    for i := 0; i < tasks; i++ {
        tasksCh <- i
    }
    close(tasksCh)
}

func main() {
    var wg sync.WaitGroup
    wg.Add(36) // one Done per worker, so Add must match the worker count
    go pool(&wg, 36, 50)
    wg.Wait()
}
All the goroutines run concurrently, waiting for the channel to give them work, and they receive their tasks almost immediately, one after another.
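To tie this back to net/http: the standard library does not cap the number of handler goroutines itself, but you can bound concurrency inside a handler with a buffered channel used as a counting semaphore. Below is a minimal sketch of the synchronous case; the limit of 1000, the handler name, and the port are assumptions for illustration:

package main

import "net/http"

// sem is a counting semaphore: its capacity caps how many requests
// are processed at once (1000 is an arbitrary example value).
var sem = make(chan struct{}, 1000)

func limitedHandler(w http.ResponseWriter, r *http.Request) {
    sem <- struct{}{}        // acquire a slot; blocks while 1000 requests are in flight
    defer func() { <-sem }() // release the slot when the handler returns
    w.Write([]byte("ok"))
}

func main() {
    http.HandleFunc("/", limitedHandler)
    http.ListenAndServe(":8080", nil)
}

Note that net/http still spawns one goroutine per connection; the semaphore only limits how many of them do real work at a time. If you want to cap accepted connections at the listener level instead, the golang.org/x/net/netutil package provides LimitListener for that.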
Here is a great article about how you can handle 1 million requests per minute in Go: http://marcio.io/2015/07/handling-1-million-requests-per-minute-with-golang/
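That article uses a similar job-queue approach. For the asynchronous case the question asks about, the handler can enqueue a job and return immediately while a fixed pool of workers drains the queue. A minimal sketch, where the queue size, worker count, and handler name are assumptions for illustration:

package main

import (
    "fmt"
    "net/http"
)

// jobs is the shared queue; its buffer size bounds how much pending
// work the server will accept (100 is an arbitrary example value).
var jobs = make(chan string, 100)

// worker processes queued jobs until the channel is closed.
func worker(id int) {
    for j := range jobs {
        fmt.Printf("worker %d processing %s\n", id, j)
    }
}

func enqueueHandler(w http.ResponseWriter, r *http.Request) {
    select {
    case jobs <- r.URL.Path: // hand the request off to the pool
        w.WriteHeader(http.StatusAccepted) // 202: accepted for asynchronous processing
    default:
        // queue is full: shed load instead of piling up unbounded work
        http.Error(w, "server busy", http.StatusServiceUnavailable)
    }
}

func main() {
    for i := 0; i < 8; i++ { // fixed pool of 8 workers (example value)
        go worker(i)
    }
    http.HandleFunc("/", enqueueHandler)
    http.ListenAndServe(":8080", nil)
}

Returning 202 Accepted lets the client know the work was queued without waiting for it to finish, and the default branch sheds load once the queue is full rather than letting memory grow without bound.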