duancan9815 2014-05-10 16:20
373 views
Accepted

HTTP GET request response time when using goroutines

I have a simple program that prints the GET response time for each URL listed in a text file (url_list.txt).

When the requests are fired sequentially the returned times correspond to the expected response times of individual URLs.

However, when the same code is executed concurrently the returned response times are typically higher than expected.

It seems that the time_start I capture before http.Get(url) is called is not the time at which the request is actually sent. I guess the execution of http.Get(url) is queued to some extent.

Is there a better way to capture URL response time when using goroutines?

Here is my code:

Sequential requests:

package main

import ("fmt"
        "net/http"
        "io/ioutil"
        "time"
        "strings"
)

func get_resp_time(url string) {
        time_start := time.Now()
        resp, err := http.Get(url)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println(time.Since(time_start), url)
}

func main() {
    content, _ := ioutil.ReadFile("url_list.txt")
    urls := strings.Split(string(content), "\n")

    for _, url := range urls {
        get_resp_time(url)
        //go get_resp_time(url)
    }

    //time.Sleep(20 * time.Second)
}

Concurrent requests:

package main

import ("fmt"
        "net/http"
        "io/ioutil"
        "time"
        "strings"
)

func get_resp_time(url string) {
        time_start := time.Now()
        resp, err := http.Get(url)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println(time.Since(time_start), url)
}

func main() {
    content, _ := ioutil.ReadFile("url_list.txt")
    urls := strings.Split(string(content), "\n")

    for _, url := range urls {
        //get_resp_time(url)
        go get_resp_time(url)
    }

    time.Sleep(20 * time.Second)
} 

1 answer

  • doucu7525 2014-05-10 20:06

    You are starting all the requests at once. If there are thousands of URLs in the file, then you are starting thousands of goroutines at the same time. This may work, but it may give you errors about running out of sockets or file handles. I'd recommend starting a limited number of fetches at once, as in the code below.

    This should also help with the timing: each worker records time_start only when it actually dequeues a URL and issues the request, so the measurement no longer includes the time a goroutine spends waiting behind thousands of others.

    package main
    
    import (
        "fmt"
        "io/ioutil"
        "log"
        "net/http"
        "strings"
        "sync"
        "time"
    )
    
    func get_resp_time(url string) {
        time_start := time.Now()
        resp, err := http.Get(url)
        if err != nil {
            log.Printf("Error fetching %s: %v", url, err)
            return // without this, resp is nil and resp.Body.Close() below would panic
        }
        defer resp.Body.Close()
        fmt.Println(time.Since(time_start), url)
    }
    
    func main() {
        content, _ := ioutil.ReadFile("url_list.txt")
        urls := strings.Split(string(content), "\n")
    
        const workers = 25
    
        wg := new(sync.WaitGroup)
        in := make(chan string, 2*workers)
    
        for i := 0; i < workers; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for url := range in {
                    get_resp_time(url)
                }
            }()
        }
    
        for _, url := range urls {
            if url != "" {
                in <- url
            }
        }
        close(in)
        wg.Wait()
    }
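
    If you also want the measurement to exclude time the request spends queued inside the client (scheduling, DNS, connection setup), newer Go releases (1.7+) ship net/http/httptrace, which exposes callbacks for each stage of a request. The sketch below is not part of the original answer: the helper name time_to_first_byte is made up, and it reports time to first byte measured from the moment the request bytes are written to the connection.

    package main

    import (
        "fmt"
        "log"
        "net/http"
        "net/http/httptrace"
        "time"
    )

    // time_to_first_byte measures latency from the moment the request is
    // written to the connection until the first response byte arrives, so
    // goroutine scheduling and connection setup are excluded.
    func time_to_first_byte(url string) {
        req, err := http.NewRequest("GET", url, nil)
        if err != nil {
            log.Printf("Error building request: %v", err)
            return
        }

        var sent time.Time
        trace := &httptrace.ClientTrace{
            // Fires once the full request has been written to the wire.
            WroteRequest: func(httptrace.WroteRequestInfo) { sent = time.Now() },
            // Fires when the first byte of the response headers arrives.
            GotFirstResponseByte: func() {
                fmt.Println("time to first byte:", time.Since(sent), url)
            },
        }
        req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            log.Printf("Error fetching: %v", err)
            return
        }
        resp.Body.Close()
    }

    func main() {
        // Call this from the worker loop above in place of get_resp_time.
        time_to_first_byte("http://example.com/")
    }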
    
    This answer was accepted by the asker as the best answer.
