dongwen6743 2019-06-30 20:09
Accepted

Distributed outbound HTTP rate limiter

I have a microservices application with multiple services polling an external API. The external API enforces a rate limit of 600 requests per minute. How can I have all of my instances together stay below that shared 600 requests-per-minute limit?

Googling only turned up three solutions:

  • myntra/golimit: the most promising of the three, but I literally do not have a clue how to set it up.
  • wallstreetcn/rate: it only seems to reject requests once the limit has been reached (my app needs to wait until it can make the request), and the Every function passed to rate.NewLimiter appears to come from a different import / dependency that I cannot identify (see the sketch after this list).
  • manavo/go-rate-limiter: it has a "soft" limit which, obviously, could push me over the limit. For some endpoints I don't really mind if I can't reach them for a few seconds, but requests to other endpoints should go through as often as possible.
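
For reference, a NewLimiter/Every pair with exactly that shape lives in golang.org/x/time/rate, and its Limiter.Wait blocks until the next slot is free instead of rejecting, which sounds like the behaviour I need. A minimal per-instance sketch (not distributed):

package main

import (
    "context"
    "fmt"
    "time"

    "golang.org/x/time/rate"
)

func main() {
    // One token every 100ms, i.e. 600 requests spread over a full minute.
    limiter := rate.NewLimiter(rate.Every(time.Minute/600), 1)
    for i := 0; i < 3; i++ {
        // Wait blocks until a token is available instead of rejecting.
        if err := limiter.Wait(context.Background()); err != nil {
            return
        }
        fmt.Println("request", i, "allowed at", time.Now())
    }
}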

Currently I have an amateur solution. The code below lets me set a limit per minute, and it sleeps between requests to spread them over the minute. This client-side rate limit is per instance, though, so I would have to hard-code the division of the 600 requests by the number of instances.

import "time"

var semaphore = make(chan struct{}, 5) // caps concurrent requests
var rate = make(chan struct{}, 10)     // paces requests over the minute

func init() {
    // leaky bucket: drain one token every 100ms, i.e. at most
    // 10 requests per second (600 per minute)
    go func() {
        ticker := time.NewTicker(100 * time.Millisecond)
        defer ticker.Stop()
        for range ticker.C {
            _, ok := <-rate
            // if this isn't going to run indefinitely, signal
            // this to return by closing the rate channel.
            if !ok {
                return
            }
        }
    }()
}

And inside the function that makes the HTTP API requests:

    // take a slot in the leaky bucket; this blocks until the draining
    // goroutine has made room, spreading requests over the minute
    rate <- struct{}{}

    // check the concurrency semaphore
    semaphore <- struct{}{}
    defer func() {
        <-semaphore
    }()

Again, the core question: how can I have all of my instances together stay below the shared 600 requests-per-minute limit?

Preferences:

  • Rate-limit counters based on a key, so multiple counters can be set.
  • Spread the requests over the set duration, so that the 600 requests are not all sent in the first 30 seconds but rather over the full minute.
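
A minimal per-instance sketch of those two preferences, assuming golang.org/x/time/rate and a hypothetical getLimiter helper that keeps one limiter per key; a truly shared 600/min budget would still need distributed state or a proxy, as the answer below suggests:

package main

import (
    "context"
    "sync"
    "time"

    "golang.org/x/time/rate"
)

var (
    mu       sync.Mutex
    limiters = map[string]*rate.Limiter{}
)

// getLimiter returns the limiter for a key, creating it on first use.
// Every(time.Minute/600) spreads the 600 requests evenly over the minute
// instead of letting them all through in the first seconds.
func getLimiter(key string) *rate.Limiter {
    mu.Lock()
    defer mu.Unlock()
    l, ok := limiters[key]
    if !ok {
        l = rate.NewLimiter(rate.Every(time.Minute/600), 1)
        limiters[key] = l
    }
    return l
}

func main() {
    // Wait blocks until the next slot for this key is free.
    _ = getLimiter("external-api").Wait(context.Background())
    // ... make the HTTP request for that key here ...
}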


2 answers

  • dongzi9196 2019-06-30 23:21

    If you want a global rate limiter, you need a place to maintain distributed state, such as ZooKeeper. Usually we don't want to pay that overhead. Alternatively, you can set up a proxy in front of the external API (https://golang.org/pkg/net/http/httputil/#ReverseProxy), route every instance's outbound requests through it, and do the rate limiting there.
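
    For what that proxy could look like, here is a minimal sketch, assuming golang.org/x/time/rate for the limiter and a hypothetical api.example.com as the upstream. A single instance of this proxy owns the whole 600 req/min budget, and every service sends its outbound calls through it:

    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
        "time"

        "golang.org/x/time/rate"
    )

    func main() {
        // Hypothetical external API; every service instance points its
        // client at this proxy instead of the real host.
        target, err := url.Parse("https://api.example.com")
        if err != nil {
            log.Fatal(err)
        }
        proxy := httputil.NewSingleHostReverseProxy(target)

        // Shared budget: 600 requests per minute, one slot every 100ms.
        limiter := rate.NewLimiter(rate.Every(time.Minute/600), 1)

        handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            // Block the caller until a slot is free instead of rejecting it.
            if err := limiter.Wait(r.Context()); err != nil {
                http.Error(w, err.Error(), http.StatusServiceUnavailable)
                return
            }
            r.Host = target.Host // forward with the upstream's Host header
            proxy.ServeHTTP(w, r)
        })

        log.Fatal(http.ListenAndServe(":8080", handler))
    }

    The trade-off is that the proxy is an extra hop and a single point of failure, which is the overhead mentioned above; the shared-state route (ZooKeeper and the like) avoids the extra hop at the cost of coordination.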

    Accepted answer.
