I'm using https://github.com/coocood/freecache to cache database results, but currently I have to dump bigger chunks of the cache on every delete, which costs several microseconds extra compared to a targeted deletion. Formatting a key with fmt.Sprintf("%d_%d_%d", subject, id1, id2) for a pattern like #SUBJECT_#ID1_#ID2 also costs several microseconds. Even though that doesn't sound like much, relative to the cache's current response time it makes each call many times slower than it needs to be.
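For reference, this is roughly how the string key is built today (the helper name and argument names are just illustrative):

```go
package main

import "fmt"

// stringKey builds the #SUBJECT_#ID1_#ID2 cache key as a string.
// Every call formats and allocates, which is where the extra
// microseconds on the hot path come from.
func stringKey(subject, id1, id2 int64) string {
	return fmt.Sprintf("%d_%d_%d", subject, id1, id2)
}

func main() {
	fmt.Println(stringKey(1, 2, 3)) // prints "1_2_3"
}
```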
I was thinking of using the library's SetInt/GetInt, which work with int64 keys instead of strings.
So let's say I'm storing with a #SUBJECT_#ID1_#ID2 pattern, where the Subject is a table or query-segment range in my code (e.g. everything concerning ACL or product filtering). As an example, take Userright.id as #ID1, User.id as #ID2, and ACL as the Subject. I would build it as something like this:
// const CACHE_SUBJECT_ACL = 0x1
// var userrightID int64 = 0x1
// var userID int64 = 0x1

// Layout: bits 0-7 subject, bits 8-35 ID1, bits 36-63 ID2.
var storeKey int64 = 0x1000000101

fmt.Println("Range:", storeKey&0xff)
fmt.Println("ID1  :", storeKey>>8&0xfffffff)
fmt.Println("ID2  :", storeKey>>36)
How can I compile the CACHE_SUBJECT_ACL/userrightID/userID into the storeKey?
I know I could set userrightID to something like 0x100000001 directly, but it's a dynamic value, so I'm not sure of the best way to compose the key without causing more overhead than formatting the string key does.
The idea is that at a later stage, when I need to flush the cache, I can issue deletes for a small range of int64 keys instead of dumping a whole partition (of maybe thousands of entries).
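To make the flush idea concrete, here is a sketch of generating the int64 keys for one subject and one userright across a contiguous range of user IDs; each generated key could then be handed to freecache's DelInt one by one. The 8/28/28-bit layout and the helper names are my own assumptions, not part of the library:

```go
package main

import "fmt"

// packKey combines subject (8 bits), id1 (28 bits) and id2 (28 bits)
// into a single int64 key: bits 0-7 subject, 8-35 id1, 36-63 id2.
func packKey(subject, id1, id2 int64) int64 {
	return subject | id1<<8 | id2<<36
}

// keysForUserRange returns the keys for one subject/id1 pair across a
// contiguous range of id2 values [from, to]. Each key could be passed
// to cache.DelInt instead of dumping a whole partition.
func keysForUserRange(subject, id1, from, to int64) []int64 {
	keys := make([]int64, 0, to-from+1)
	for id2 := from; id2 <= to; id2++ {
		keys = append(keys, packKey(subject, id1, id2))
	}
	return keys
}

func main() {
	for _, k := range keysForUserRange(0x1, 0x1, 1, 3) {
		fmt.Printf("%#x\n", k) // 0x1000000101, 0x2000000101, 0x3000000101
	}
}
```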
I was thinking of combining them with bit shifting, like userID<<8, but I'm not sure whether that's the safe route.
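The shifting approach I have in mind, sketched with explicit masks and a bounds check; the 8/28/28-bit split is my assumption, and shifting is only safe as long as each value actually fits its slot (an oversized ID would silently corrupt the neighbouring fields):

```go
package main

import (
	"errors"
	"fmt"
)

const (
	subjectBits = 8
	idBits      = 28           // 28 bits each for ID1 and ID2
	idMask      = 1<<idBits - 1 // 0xfffffff
)

var errOverflow = errors.New("value does not fit in its bit slot")

// packKey builds one int64 key: bits 0-7 subject, 8-35 id1, 36-63 id2.
// It refuses values that would overflow their slot.
func packKey(subject, id1, id2 int64) (int64, error) {
	if subject < 0 || subject > 0xff || id1 < 0 || id1 > idMask || id2 < 0 || id2 > idMask {
		return 0, errOverflow
	}
	return subject | id1<<subjectBits | id2<<(subjectBits+idBits), nil
}

// unpackKey reverses packKey. Masking after the shift also strips the
// sign extension when id2's top bit lands in bit 63.
func unpackKey(key int64) (subject, id1, id2 int64) {
	return key & 0xff, key >> subjectBits & idMask, key >> (subjectBits + idBits) & idMask
}

func main() {
	key, err := packKey(0x1, 0x1, 0x1)
	if err != nil {
		panic(err)
	}
	fmt.Printf("key: %#x\n", key) // key: 0x1000000101
	s, a, b := unpackKey(key)
	fmt.Println(s, a, b) // 1 1 1
}
```

With this in place the string key would only be needed for debugging output, while the cache itself works on plain int64 values.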
If I failed to supply enough information, please ask.