I have a NoSQL database with rows of two types:
- Rows that are essentially counters with a high number of updates per second. It doesn't matter if these updates are applied in a batch once every n seconds (where n is, say, 2 seconds).
- Rows that contain tree-like structures, where each update to the row means updating the tree. Updating the tree on every single write is expensive; it would be better to do it as a batch job once every n seconds.
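To make the tree-batching idea concrete, here is a minimal sketch of folding a batch of buffered JSON instructions into a tree in one pass instead of one write per update. The instruction format (`op`/`path`/`value`) is made up for illustration, not from any library:

```python
import json

def apply_instruction(tree, instr):
    """Apply one instruction dict to the nested-dict tree in place."""
    *path, last = instr["path"]
    if instr["op"] == "set":
        node = tree
        for part in path:
            node = node.setdefault(part, {})
        node[last] = instr["value"]
    elif instr["op"] == "delete":
        node = tree
        for part in path:
            node = node.get(part, {})
        node.pop(last, None)
    return tree

def apply_batch(tree, raw_instructions):
    """Fold a whole batch of JSON-encoded instructions into the tree,
    so the expensive tree write happens once per batch, not per update."""
    for raw in raw_instructions:
        apply_instruction(tree, json.loads(raw))
    return tree
```

For example, `apply_batch({}, ['{"op": "set", "path": ["a", "b"], "value": 1}'])` yields `{"a": {"b": 1}}`.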
This is my plan; afterwards I will explain the part I am struggling to execute, and ask whether I need to move to something like RabbitMQ.
Each row has a unique id which I use as the Redis key. Redis can easily handle loads of counter increments, no problem. As for the tree structure, each update for a row can use the APPEND command to append JSON instructions describing how to modify the existing tree in the database.
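The write path above can be sketched as one pipelined round trip. This assumes the redis-py client; the key names, the newline separator, and `build_commands` are my own illustration, not an established API:

```python
import json

def build_commands(counter_deltas, tree_updates):
    """Turn updates buffered over the last n seconds into a flat list of
    Redis commands suitable for a single pipeline round trip."""
    cmds = []
    for key, delta in counter_deltas.items():
        # One INCRBY per counter key, with the summed delta for the window.
        cmds.append(("INCRBY", key, delta))
    for key, instr in tree_updates:
        # Append one JSON instruction plus a separator, so the batch job
        # can later split the string back into individual instructions.
        cmds.append(("APPEND", key, json.dumps(instr) + "\n"))
    return cmds

# With redis-py this would be flushed roughly like:
#   pipe = r.pipeline(transaction=False)
#   for name, *args in build_commands(deltas, updates):
#       getattr(pipe, name.lower())(*args)
#   pipe.execute()
```

Summing counter deltas client-side before the pipeline means one INCRBY per key per window rather than one per update.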
This is the tricky part: I want to ensure each row gets updated every n seconds, and there will be a large number of Redis keys being updated.
This was my plan: have three queues (pre-processing, processing, dead).
By default, every key is placed in the pre-processing queue when the command for a database update comes in. After exactly n seconds, move each key/value that has been there for n seconds to the processing queue (I don't know how to do this efficiently and concurrently). Once those n seconds have passed, it doesn't matter in which order the processing queue is drained, so I can have any number of consumers racing through it. Finally, the dead queue catches tasks that keep failing for some reason.
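One common way to do the pre-processing → processing handoff in Redis is a sorted set as a delay queue: ZADD each key with its enqueue timestamp as the score, and have a worker periodically claim everything whose score is at most now − n (ZRANGEBYSCORE plus ZREM, ideally wrapped in a Lua script so the claim is atomic across concurrent workers). Here is a pure-Python sketch of just the claiming logic, with the sorted set simulated as a dict; names are mine:

```python
def claim_due(pending, now, n):
    """pending maps key -> enqueue timestamp (what the sorted set holds).
    Return the keys whose n-second delay has elapsed and remove them,
    mimicking ZRANGEBYSCORE -inf (now - n) followed by ZREM."""
    due = [key for key, ts in pending.items() if ts <= now - n]
    for key in due:
        del pending[key]
    return due
```

Because the claim removes the key from the pending set, two workers polling concurrently (in the real, atomic version) can never both pick up the same key.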
Is there a better way to do this? Is what I am thinking of possible?