dongwen6743 2019-06-30 20:09

Distributed outbound HTTP rate limiter

I have a microservice architecture application with multiple services polling an external API. The external API enforces a rate limit of 600 requests per minute. How can I have all of my instances together stay below the shared 600/minute limit?

Googling only brought me three solutions:

  • myntra/golimit: the most promising of the three, but I literally have no clue how to set it up.
  • wallstreetcn/rate: it only seems to reject requests once the limit has been reached (my app needs to wait until it can make the request), and the Every function passed to rate.NewLimiter appears to come from a different import/dependency that I cannot identify (see the sketch after this list).
  • manavo/go-rate-limiter: it has a "soft" limit which, obviously, could push me over the limit. For some endpoints I don't really mind if I can't access them for a few seconds, but other endpoint requests should go through as often as possible.
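
For reference, rate.Every and rate.NewLimiter match the API of golang.org/x/time/rate, which is probably the import I was missing (an assumption on my part); a minimal per-instance example of that package, which blocks via Wait instead of rejecting (the 100ms/burst-1 values are just illustrative):

import (
    "context"
    "time"

    "golang.org/x/time/rate"
)

// 600 requests per minute = one token every 100ms; a burst of 1 keeps
// the requests spread out instead of front-loaded.
var limiter = rate.NewLimiter(rate.Every(100*time.Millisecond), 1)

func doRequest(ctx context.Context) error {
    // Wait blocks until the limiter allows the next request
    // (or until ctx is cancelled), rather than rejecting it.
    if err := limiter.Wait(ctx); err != nil {
        return err
    }
    // ... make the HTTP request here ...
    return nil
}

This is still a per-instance limiter, of course, so it does not solve the shared 600/minute budget by itself.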

Currently I have an amateur solution. The code below lets me set a limit per minute, and it sleeps between requests to spread them over the minute. This client-side rate limit is per instance, so I would have to hard-code dividing the 600 requests by the number of instances.

var semaphore = make(chan struct{}, 5) // max 5 concurrent requests
var rate = make(chan struct{}, 10)

func init() {
    // leaky bucket: drain one token every 100ms (600 per minute)
    go func() {
        ticker := time.NewTicker(100 * time.Millisecond)
        defer ticker.Stop()
        for range ticker.C {
            _, ok := <-rate
            // if this isn't going to run indefinitely, signal
            // this to return by closing the rate channel.
            if !ok {
                return
            }
        }
    }()
}

And inside the function that makes the HTTP API requests:

rate <- struct{}{}

// check the concurrency semaphore
semaphore <- struct{}{}
defer func() {
    <-semaphore
}()
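
Put together, the function that makes the request looks roughly like this (the endpoint URL and response handling are simplified placeholders; it assumes the rate and semaphore channels declared above):

func callAPI(ctx context.Context) (*http.Response, error) {
    // take a slot in the leaky bucket; the init goroutine above drains
    // one token every 100ms, which spreads the requests over the minute
    rate <- struct{}{}

    // check the concurrency semaphore (at most 5 requests in flight)
    semaphore <- struct{}{}
    defer func() {
        <-semaphore
    }()

    req, err := http.NewRequestWithContext(ctx, http.MethodGet, "https://api.example.com/endpoint", nil)
    if err != nil {
        return nil, err
    }
    return http.DefaultClient.Do(req)
}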

How can I have all of my instances together stay below the shared 600 rate limit?

Preferences:

  • A rate-limit counter based on a key, so multiple counters can be set.
  • Requests spread over the set duration, so that the 600 requests are not all sent in the first 30 seconds but over the full minute.
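
For the key-based counter preference, the usual per-instance pattern is a small registry of limiters keyed by name; this is just a sketch I put together with golang.org/x/time/rate, and it still does not coordinate across instances:

import (
    "sync"
    "time"

    "golang.org/x/time/rate"
)

var (
    mu       sync.Mutex
    limiters = map[string]*rate.Limiter{}
)

// limiterFor returns the limiter registered under key, creating it on
// first use, so each endpoint (or API key) gets its own counter.
func limiterFor(key string, interval time.Duration, burst int) *rate.Limiter {
    mu.Lock()
    defer mu.Unlock()
    if l, ok := limiters[key]; ok {
        return l
    }
    l := rate.NewLimiter(rate.Every(interval), burst)
    limiters[key] = l
    return l
}

Before each call I would then do limiterFor("some-endpoint", 100*time.Millisecond, 1).Wait(ctx); the cross-instance coordination still has to come from somewhere else.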


2 answers

  • dongzi9196 2019-06-30 23:21

    If you want a global rate limiter, you need a place to maintain distributed state, such as ZooKeeper. Usually, we don't want to pay that overhead. Alternatively, you can set up a forward proxy (https://golang.org/pkg/net/http/httputil/#ReverseProxy) and do the rate limiting in it.
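
    A minimal sketch of that forward-proxy idea, assuming golang.org/x/time/rate for the limiter and httputil.NewSingleHostReverseProxy for the forwarding (the upstream URL, port, and limit values are illustrative). All instances would send their outbound calls through this one proxy, so the shared 600/minute budget is enforced in a single place:

    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
        "time"

        "golang.org/x/time/rate"
    )

    func main() {
        target, err := url.Parse("https://external-api.example.com")
        if err != nil {
            log.Fatal(err)
        }
        proxy := httputil.NewSingleHostReverseProxy(target)

        // 600 requests per minute, spread out: one token every 100ms.
        limiter := rate.NewLimiter(rate.Every(100*time.Millisecond), 1)

        handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            // Block until a token is available, so callers wait
            // instead of being rejected when the budget is used up.
            if err := limiter.Wait(r.Context()); err != nil {
                http.Error(w, err.Error(), http.StatusServiceUnavailable)
                return
            }
            proxy.ServeHTTP(w, r)
        })

        log.Fatal(http.ListenAndServe(":8080", handler))
    }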

    This answer was accepted as the best answer by the asker.
