You may have heard of Weighted Fair Queuing: a well-developed scheduling scheme that predicts which job would theoretically complete first, and services that one.
I have n ~= 1000 jobs running on outside servers, each tied to a goroutine in my program. I started the jobs at different times, and they finish roughly in the order they were started, but that's not guaranteed.
From each goroutine I poll its corresponding server-side job: is it done yet? My outbound requests are rate-limited, so I need to poll smartly.
I want to prioritize polling by goroutines whose jobs were started earlier. The way I'm doing it now, I have a buffered channel that represents my rate limit; every goroutine blocks to acquire a token from this channel, polls its server, and then puts the token back.
But there's no guarantee that these goroutines acquire tokens in priority order, or even uniformly at random: when multiple goroutines block receiving on the same channel, the runtime wakes one of them in an unspecified order, with no fairness or priority guarantee.
Could someone guide me on how to think about this problem? It doesn't have to be specific, but I'm not sure which primitives and data structures I would use in Go to service goroutines in priority order while respecting the rate limit.
It seems difficult because an individual goroutine doesn't know the state of the whole program, e.g. which of its fellow goroutines started first. Each goroutine should simply be told whether or not it may poll its server at any given time.