I am trying to write a two-level cache (in-memory + Redis), but I hit a bottleneck under high-concurrency access to a single key. I tried using a mutex per key, but that increased CPU usage a lot, since loadFromDB takes 100-200 ms.
func (s *Store) GetJsonObjectWithExpire(key string, obj interface{}, ttl int, f StoreLoadFunc) error {
	// first read from memory
	v, ok := s.mem.Get(key)
	if ok {
		if v.Outdated() {
			to := deepcopy.Copy(obj)
			go s.updateMem(key, to, ttl, f)
		}
		return nil
	}
	// if missed in memory, check redis
	v, ok = s.rds.Get(key, obj)
	if ok {
		if v.Outdated() {
			go s.rds.loadFromDB(key, nil, ttl, f, false)
		}
		return nil
	}
	return s.rds.loadFromDB(key, obj, ttl, f, true)
}
loadFromDB loads the object from the DB and writes it to Redis and memory; this takes about 100-200 ms. Both loadFromDB and rds.Get currently use a per-key RWMutex.
Since there will be a lot of keys (10,000+), I am not sure whether a mutex for each key is a good idea.
Is there anything I can do to improve the performance?
UPDATE: Here is my code on GitHub for my two-level cache-aside pattern implementation.