doufeikuang7724 2019-02-21 20:55
68 views

HAProxy - same speed as querying a single web server directly

We are trying to set up load-balanced servers so we can spread the load over many machines as we grow our app. Our software is written in Go and listens as a web server.

We created a simple PING/PONG server in Go to see the maximum number of requests each server can handle. Yes, we understand that once you add database access and all of the other processing code, a single server will not reach the same number of transactions per second. This is merely test code to establish a consistent baseline speed on each box and exclude any outside latency sources.
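For reference, the test server is essentially equivalent to a minimal Go HTTP handler like the sketch below (the path, port, and reply string are placeholders, not our exact code):

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        // Reply "PONG" to every request; there is no database or other
        // processing, so each request is essentially pure network overhead.
        http.HandleFunc("/ping", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprint(w, "PONG")
        })
        // Listen on all interfaces on port 8080 (placeholder port).
        http.ListenAndServe(":8080", nil)
    }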

We have a 2U physical box with 64 GB RAM and two Xeon 4110 processors (8 cores / 16 threads each). Our Internet connection is 1 Gbps fiber, and all servers are connected internally over a virtual LAN, so latency shouldn't be an issue.

We have three Go servers set up, each in a VM with 4 GB RAM and 4 vCPUs running CentOS 7.

We can achieve 47,000 queries per second on each box. We would therefore expect that with HAProxy in front of the boxes we should be able to hit approximately 140,000 qps (3 × ~47,000) across the three servers combined.

We are using TCP mode in HAProxy, as it appears to be much faster than HTTP mode. We do not need to route based on URL, so TCP mode seems to be the better choice for now.
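Our HAProxy configuration is essentially the following (the names, IPs, and ports here are illustrative, not our exact config):

    frontend go_front
        bind *:80                           # single external IP/port that clients hit
        mode tcp
        default_backend go_servers

    backend go_servers
        mode tcp
        balance roundrobin
        server go1 10.0.0.11:8080 check     # internal 10.x.x.x addresses of the Go VMs
        server go2 10.0.0.12:8080 check
        server go3 10.0.0.13:8080 check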

The 47,000 qps figure was measured with loader.io, ramping from 2,000 to 4,000 clients over a one-minute period. Each server handles roughly the same qps on average.

We set up HAProxy to connect to one server only, to see what the speed was with it in the middle. We saw about 46,000 qps when hitting that one server.

We then added two more servers for a total of three behind HAProxy, and there was no increase in qps; however, the load was spread across all three servers, as shown by watching htop on each machine. A total of about 46,000 qps was all it would reach.

The HAProxy server was set up with 8 CPUs and 16 GB RAM and was NOT maxing out its CPU according to htop.

There is one external IP address coming into HAProxy, and the backends are reached via internal 10.x.x.x IP addresses, one per box. Each backend server also has an external IP address that we used to test each server individually and confirm they all handle ~47,000 qps.

We also increased loader.io to ramp from 2,000 to 8,000 clients aimed at the HAProxy server to throw more load at it; however, that didn't increase the qps at all.

It appears we have enough CPU, RAM, and Internet bandwidth to handle these simple ping/pong requests.

Is there a maximum rate HAProxy can handle per external IP due to port exhaustion? We did widen the local port range on the server to 1024-65000.

Watching ss -s never shows more than about 10,000 ports in use, and there are very few sockets in any wait state, as we have enabled TCP connection reuse and reduced the FIN timeout.
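The TCP tuning we applied amounts to roughly the following sysctl settings (the exact values are from memory, so treat them as approximate):

    # /etc/sysctl.d/99-tuning.conf (approximate values)
    net.ipv4.ip_local_port_range = 1024 65000   # widened local port range
    net.ipv4.tcp_tw_reuse = 1                   # reuse TIME_WAIT sockets for outgoing connections
    net.ipv4.tcp_fin_timeout = 15               # reduced FIN timeout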

We are simply trying to have a front-end proxy able to handle a lot of traffic and pass it off to the backend servers to process.

Our real Go code currently runs at about 10,000 qps per VM, so to achieve the 140,000 qps in the example above we would need 14 VMs. The goal is to be able to go well beyond 140,000 qps by simply adding more backend servers as load increases.
