duanjing4623 2017-11-20 05:43

Limited concurrent connections in Go

I have the following basic HTTP server in Go. For every incoming request it fires off 5 outgoing HTTP requests, each of which takes roughly 3-5 seconds to complete. I am not able to achieve more than 200 requests/second on an 8 GB RAM, quad-core machine.

package main

import (
    "flag"
    "fmt"
    "io/ioutil"
    "log"
    "net/http"
    "sync"
    "time"
)

// Job holds the attributes needed to perform unit of work.
type Job struct {
    Name  string
    Delay time.Duration
}

func requestHandler(w http.ResponseWriter, r *http.Request) {
    // Make sure we can only be called with an HTTP POST request.
    fmt.Println("in request handler")
    if r.Method != "POST" {
        w.Header().Set("Allow", "POST")
        w.WriteHeader(http.StatusMethodNotAllowed)
        return
    }

    // Set name and validate value.
    name := r.FormValue("name")
    if name == "" {
        http.Error(w, "You must specify a name.", http.StatusBadRequest)
        return
    }

    delay := time.Second * 0

    // Create Job and push the work onto the jobQueue.
    job := Job{Name: name, Delay: delay}
    //jobQueue <- job

    fmt.Println("creating worker")
    result := naiveWorker(name, job)
    fmt.Fprintf(w, "your task %s has been completed, here are the results: %s", job.Name, result)

}

func naiveWorker(id string, job Job) string {
    var wg sync.WaitGroup
    var mu sync.Mutex // guards responseCounter and totalBodies, which are updated from multiple goroutines
    responseCounter := 0
    totalBodies := ""
    fmt.Printf("worker%s: started %s
", id, job.Name)

    var urls = []string{
        "https://someurl1",
        "https://someurl2",
        "https://someurl3",
        "https://someurl4",
        "https://someurl5",
    }

    for _, url := range urls {
        // Increment the WaitGroup counter.

        wg.Add(1)
        // Launch a goroutine to fetch the URL.
        go func(url string) {
            // Decrement the counter when the goroutine completes.
            defer wg.Done()

            // Fetch the URL.
            resp, err := http.Get(url)
            if err != nil {
                fmt.Printf("got an error: %v\n", err)
            } else {
                defer resp.Body.Close()
                body, err := ioutil.ReadAll(resp.Body)
                if err == nil { // append the body only when the read succeeded
                    mu.Lock()
                    totalBodies += string(body)
                    mu.Unlock()
                }
            }
            mu.Lock()
            responseCounter++
            mu.Unlock()
        }(url)
    }
    wg.Wait()
    fmt.Printf("worker%s: completed %s with %d calls
", id, job.Name, responseCounter)
    return totalBodies
}

func main() {
    var (
        port = flag.String("port", "8181", "The server port")
    )
    flag.Parse()

    // Start the HTTP handler.
    http.HandleFunc("/work", func(w http.ResponseWriter, r *http.Request) {
        requestHandler(w, r)
    })
    log.Fatal(http.ListenAndServe(":" + *port, nil))
}

I have the following questions:

  1. The HTTP connections get reset when the number of concurrent goroutines goes above 1000. Is this acceptable/intended behaviour?

  2. If I write go requestHandler(w, r) instead of requestHandler(w, r) in the HandleFunc closure, I get "http: multiple response.WriteHeader calls". Why does that happen?


2 answers

  • dpw70180 2017-11-20 14:07 (accepted answer)

    An http handler is expected to run synchronously, because the return of the handler function signals the end of the request. Accessing the http.Request and http.ResponseWriter after the handler returns is not valid, so there is no reason to dispatch the handler in a goroutine.
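
    To make the second question concrete, here is a minimal sketch of the two ways of wiring up requestHandler (reusing the asker's function names). When the outer closure returns while requestHandler is still running in a goroutine, net/http finishes the response itself, and the handler's later WriteHeader triggers the "multiple response.WriteHeader calls" warning.

        // Problematic: the closure returns at once, net/http completes the response
        // (writing an implicit 200), and requestHandler's later WriteHeader/Write
        // calls produce "http: multiple response.WriteHeader calls".
        http.HandleFunc("/work", func(w http.ResponseWriter, r *http.Request) {
            go requestHandler(w, r)
        })

        // Correct: run the handler synchronously; fan out *inside* it and wait
        // (as naiveWorker already does with its sync.WaitGroup) before returning.
        http.HandleFunc("/work", requestHandler)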

    As the comments have noted, you can't open more file descriptors than the process ulimit allows. Besides increasing the ulimit appropriately, you should have a limit on the number of concurrent requests that can be dispatched at once.
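
    A common way to impose such a limit is a buffered channel used as a counting semaphore. A minimal sketch, reusing the asker's naiveWorker and Job types; the handler name and the cap of 100 are hypothetical and should be tuned to your ulimit:

        // sem holds up to 100 tokens, so at most 100 requests fan out at once.
        var sem = make(chan struct{}, 100)

        func limitedHandler(w http.ResponseWriter, r *http.Request) {
            sem <- struct{}{}        // acquire a slot; blocks while 100 jobs are in flight
            defer func() { <-sem }() // release the slot when this handler returns

            name := r.FormValue("name")
            if name == "" {
                http.Error(w, "You must specify a name.", http.StatusBadRequest)
                return
            }
            result := naiveWorker(name, Job{Name: name})
            fmt.Fprintf(w, "your task %s has been completed, here are the results: %s", name, result)
        }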

    If you're making many connections to the same hosts, you should also adjust your http.Transport accordingly. The default limit on idle connections per host (Transport.MaxIdleConnsPerHost) is only 2, so if you need more than 2 concurrent connections to a host, the extra connections won't be reused. See Go http.Get, concurrency, and "Connection reset by peer"

    If you connect to many different hosts, setting Transport.IdleConnTimeout is a good idea to get rid of unused connections.
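
    Concretely, both points amount to building one shared http.Client with a tuned Transport and using it for the outgoing calls instead of the package-level http.Get. A sketch; the numbers are placeholders, not recommendations:

        // One client shared by all workers; its Transport pools connections.
        var client = &http.Client{
            Transport: &http.Transport{
                MaxIdleConnsPerHost: 100,              // default is 2, far too low for heavy fan-out to the same host
                MaxIdleConns:        500,              // overall idle-connection cap across all hosts
                IdleConnTimeout:     90 * time.Second, // drop pooled connections that sit unused
            },
        }

        // In naiveWorker, use resp, err := client.Get(url) instead of http.Get(url).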

    And as always, on a long running service you will want to make sure that timeouts are set for everything, so that slow or broken connections don't hold unnecessary resources.
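
    On the client side that means setting http.Client.Timeout on the shared client above (the zero value means no timeout at all); on the server side, replace the bare http.ListenAndServe with an http.Server that carries its own timeouts. A sketch with placeholder durations:

        srv := &http.Server{
            Addr:         ":" + *port,
            ReadTimeout:  10 * time.Second, // limit how long a client may take to send its request
            WriteTimeout: 30 * time.Second, // must comfortably exceed the 3-5 s spent on upstream calls
            IdleTimeout:  60 * time.Second, // close idle keep-alive connections
        }
        log.Fatal(srv.ListenAndServe())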
