doufu7464 2015-07-02 01:03
Views: 191
Accepted

Go HTTP server benchmark: huge difference between ab and wrk results

I am trying to see how many requests the Go HTTP server can handle on my machine, so I ran some tests, but the difference between the two tools is so large that I am confused.

First I try to benchmark with ab by running this command:

$ ab -n 100000 -c 1000 http://127.0.0.1/

That is 100,000 requests at a concurrency level of 1,000.

The result is as follows:

Concurrency Level:      1000
Time taken for tests:   12.055 seconds
Complete requests:      100000
Failed requests:        0
Write errors:           0
Total transferred:      12800000 bytes
HTML transferred:       1100000 bytes
Requests per second:    8295.15 [#/sec] (mean)
Time per request:       120.552 [ms] (mean)
Time per request:       0.121 [ms] (mean, across all concurrent requests)
Transfer rate:          1036.89 [Kbytes/sec] received

8,295 requests per second, which seems reasonable.

But then I run wrk with this command:

$ wrk -t1 -c1000 -d5s http://127.0.0.1:80/

And I get these results:

Running 5s test @ http://127.0.0.1:80/
  1 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    18.92ms   13.38ms 234.65ms   94.89%
    Req/Sec    27.03k     1.43k   29.73k    63.27%
  136475 requests in 5.10s, 16.66MB read
Requests/sec:  26767.50
Transfer/sec:      3.27MB

26,767 requests per second? I don't understand why there is such a huge difference.

The code I ran was the simplest possible Go server:

package main

import (
    "net/http"
)

func main() {

    // Respond to every request with a fixed 11-byte body.
    http.HandleFunc("/", func(w http.ResponseWriter, req *http.Request) {
        w.Write([]byte("Hello World"))
    })

    // Listen on port 80; any error from ListenAndServe is discarded here.
    http.ListenAndServe(":80", nil)
}

My goal is to see how many requests the Go server can handle as I increase the number of cores, but this is far too large a difference before I have even started adding more CPU power. Does anyone know how the Go server scales when adding more cores, and why there is such a huge difference between ab and wrk?
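
For reference, here is a minimal sketch of how the core count could be varied for that experiment. The -cores flag name is purely illustrative, and the approach assumes runtime.GOMAXPROCS; note that on Go versions before 1.5, GOMAXPROCS defaults to 1, so the server is effectively single-core unless it is raised explicitly.

package main

import (
    "flag"
    "log"
    "net/http"
    "runtime"
)

func main() {
    // Illustrative flag: how many cores the Go scheduler may use.
    cores := flag.Int("cores", 1, "value passed to runtime.GOMAXPROCS")
    flag.Parse()

    // Before Go 1.5 GOMAXPROCS defaults to 1, so raise it explicitly
    // to let the server use more CPU cores.
    runtime.GOMAXPROCS(*cores)

    http.HandleFunc("/", func(w http.ResponseWriter, req *http.Request) {
        w.Write([]byte("Hello World"))
    })

    // log.Fatal makes a failed bind (e.g. port 80 needing privileges) visible.
    log.Fatal(http.ListenAndServe(":80", nil))
}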


1 answer

  • duanlan6259 2015-07-02 01:28

    Firstly: benchmarks are often pretty artificial. Sending back a handful of bytes is going to net you very different results once you start adding database calls, template rendering, session parsing, etc. (expect an order of magnitude difference)

    Then tack on local issues - open file/socket limits on your dev machine vs. production, competition between your benchmarking tool (ab/wrk) and your Go server for those resources, the behaviour of the local loopback adapter and the OS TCP stack (and its tuning), etc. It goes on!

    In addition:

    • ab is not highly regarded
    • It is HTTP/1.0 only, and therefore doesn't do keepalives
    • Your other metrics vary wildly - e.g. look at the avg latency reported by each tool - ab has a much higher latency
    • Your ab test also runs for 12s and not the 5s your wrk test does.
    • Even 8k req/s is a huge amount of load - that's 28 million requests an hour. Even if, after making a DB call, marshalling a JSON struct, etc., that went down to 3k req/s, you'd still be able to handle a significant amount of load. Don't get too tied up in this kind of benchmark this early (a rough sketch of a slightly more realistic handler follows this list).
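
    As an illustration of that last point, here is a minimal sketch of a handler that marshals a small JSON struct on every request; the struct, field names, and port are purely illustrative, not anything from the original setup.

    package main

    import (
        "encoding/json"
        "net/http"
    )

    // reply is a made-up payload standing in for whatever a real
    // application would return.
    type reply struct {
        Message string `json:"message"`
        Count   int    `json:"count"`
    }

    func main() {
        http.HandleFunc("/", func(w http.ResponseWriter, req *http.Request) {
            w.Header().Set("Content-Type", "application/json")
            // Encode a small struct per request to add a bit of realistic
            // per-request work compared to writing a fixed byte slice.
            json.NewEncoder(w).Encode(reply{Message: "Hello World", Count: 1})
        })

        http.ListenAndServe(":80", nil)
    }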

    I have no idea what kind of machine you're on, but my iMac with a 3.5GHz i7-4771 can push upwards of 64k req/s on a single thread responding with w.Write([]byte("Hello World ")).

    Short answer: use wrk and keep in mind that benchmarking tools have a lot of variance.

    This answer was accepted as the best answer by the asker.
