I'm writing an API client (library) that hits a JSON end-point and populates an in-memory cache.
Thus far:
- I kick off a time.Ticker loop in the library's init() function that hits the API every minute, which refreshes the cache (a struct that embeds the JSON struct and a timestamp).
- The public-facing functions in the library just fetch from the cache and therefore don't need to worry about rate-limiting on their own part, but they can check the timestamp if they want to confirm the freshness of the data
However, starting a time.Ticker in init() doesn't feel quite right: I haven't seen any other libraries do this. I do, however, want to avoid the package user having to do a ton of work just to get data back from a few JSON endpoints.
My public API looks like this:
// Example usage:
// rt := api.NewRT()
// err := rt.GetLatest()
// tmpl.ExecuteTemplate(w, "my_page.tmpl", M{"results": rt.Data})
func (rt *RealTime) GetLatest() error {
	realtimeCache.Lock()
	defer realtimeCache.Unlock()
	if realtimeCache.Cached == nil {
		return errors.New("no cached response is available")
	}
	// Copy the cached response into the caller's struct; assigning to
	// the receiver itself would only rebind the local pointer, leaving
	// the caller's value untouched.
	*rt = *realtimeCache.Cached
	return nil
}
And the internal fetcher is as below:
func fetchLatest() error {
	log.Println("Fetching latest RT results.")
	resp, err := http.Get(realtimeEndpoint)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("fetch failed: %s", resp.Status)
	}
	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	// Decode outside the lock so readers aren't blocked while we parse
	var rt RealTime
	if err := json.Unmarshal(body, &rt); err != nil {
		return err
	}
	// Lock the cache only for the pointer swap
	realtimeCache.Lock()
	defer realtimeCache.Unlock()
	realtimeCache.Cached = &rt
	return nil
}
func init() {
	// Populate the cache on start-up
	if err := fetchLatest(); err != nil {
		log.Printf("initial fetch failed: %v", err)
	}
	if err := fetchHistorical(); err != nil {
		log.Printf("initial fetch failed: %v", err)
	}
	// Refresh the cache every minute (default)
	ticker := time.NewTicker(time.Second * interval)
	go func() {
		for range ticker.C {
			if err := fetchLatest(); err != nil {
				log.Printf("refresh failed: %v", err)
			}
			if err := fetchHistorical(); err != nil {
				log.Printf("refresh failed: %v", err)
			}
		}
	}()
}
There are similar functions for other parts of the API (which I'm modularising, but I've kept it simple to start with), but this is the gist of it.
Is there a better way to have a background worker fetch results that's still user-friendly?