I have a microservice architecture application with multiple services polling an external API. The external API has a rate limiter of 600 requests per minute. How can I have all of my instances together stay below the shared 600 rate limit?
Google only brought me three solutions:
- myntra/golimit: the most promising of the three, but I literally do not have a clue how to set it up.
- wallstreetcn/rate: it only seems to reject once the limit has been reached (my app needs to wait until it can make the request), and the `Every` function passed to `rate.NewLimiter` seems to come from a different import / dependency which I cannot figure out.
- manavo/go-rate-limiter: has a "soft" limit which, obviously, could get me over the limit. For some endpoints I don't really mind if I can't access them for a few seconds, but other endpoint requests should go through as much as possible.
Currently I have an amateur solution. The code below lets me set a limit per minute, and it waits between requests to spread them over the minute. This client-side rate limit is per instance, though, so I would have to hard-code dividing the 600 requests by the number of instances.
```go
var semaphore = make(chan struct{}, 5)
var rate = make(chan struct{}, 10)

func init() {
	// leaky bucket: drain one token every 100ms,
	// i.e. at most 10 requests/second = 600/minute
	go func() {
		ticker := time.NewTicker(100 * time.Millisecond)
		defer ticker.Stop()
		for range ticker.C {
			_, ok := <-rate
			// if this isn't going to run indefinitely, signal
			// this to return by closing the rate channel.
			if !ok {
				return
			}
		}
	}()
}
```
And inside the function that makes the HTTP API requests:
```go
// take a token from the leaky bucket (blocks until there is room)
rate <- struct{}{}
// check the concurrency semaphore
semaphore <- struct{}{}
defer func() {
	<-semaphore
}()
```
How can I have all of my instances together stay below the shared 600 rate limit?
Preferences:
- Rate limit counter based on a key, so multiple counters can be set.
- Spread the requests over the set duration, so that 600 requests are not sent in the first 30 seconds but rather over the full minute.