2016-06-08 07:32 · 31 views

Golang HTTP Server Implementation

I have read that net/http starts a goroutine for each connection, but I haven't seen any parameter to limit the number of spawned goroutines. For example, if I have to handle 1 million concurrent requests per second, what will happen? Do we have any control over the spawned goroutines? If it spawns one goroutine per connection, won't that choke my entire system? What is the recommended way for a Go web server to handle a huge number of concurrent requests? I have to handle both asynchronous and synchronous responses.


1 Answer

  • Accepted
    dongzipu7517 2016-06-08 07:46

    The Job/Worker pattern is a common Go concurrency pattern well suited for this task.

    Multiple goroutines can read from a single channel, distributing the work between CPU cores, hence the name "workers". In Go this pattern is easy to implement: start a number of goroutines with a channel as a parameter, and send values to that channel; the distribution and multiplexing are done by the Go runtime.

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    func worker(tasksCh <-chan int, wg *sync.WaitGroup) {
        defer wg.Done()
        for {
            task, ok := <-tasksCh
            if !ok { // channel closed: no more tasks
                return
            }
            time.Sleep(time.Duration(task) * time.Millisecond) // simulate work
            fmt.Println("processing task", task)
        }
    }

    func pool(wg *sync.WaitGroup, workers, tasks int) {
        tasksCh := make(chan int) // unbuffered: each send hands off to a free worker
        for i := 0; i < workers; i++ {
            wg.Add(1)
            go worker(tasksCh, wg)
        }
        for i := 0; i < tasks; i++ {
            tasksCh <- i
        }
        close(tasksCh)
    }

    func main() {
        var wg sync.WaitGroup
        pool(&wg, 36, 50)
        wg.Wait() // block until every worker has drained the channel
    }

    All goroutines run in parallel, waiting for the channel to give them work. The goroutines receive their work almost immediately, one after another.

    Here is a great article about how you can handle 1 million requests per minute in Go:
