I store a series of events in BigTable in the following form:
rowKey                | col_1 | col_2
----------------------|-------|------
uuid1!uuid2!timestamp | val1  | val2
...
col_1 holds a float64 and col_2 holds a string 63 characters long.
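For context, a single event row is written roughly like this (Go, using the cloud.google.com/go/bigtable client). The column family name "cf", the RFC3339 timestamp encoding in the key, and the big-endian float encoding are simplifications for the sake of the question, not necessarily what I use verbatim:

```go
package events

import (
	"context"
	"encoding/binary"
	"fmt"
	"math"
	"time"

	"cloud.google.com/go/bigtable"
)

// writeEvent illustrates the row layout: the key is "uuid1!uuid2!timestamp"
// and both cells live in a single column family.
func writeEvent(ctx context.Context, tbl *bigtable.Table, uuid1, uuid2 string, ts time.Time, val1 float64, val2 string) error {
	rowKey := fmt.Sprintf("%s!%s!%s", uuid1, uuid2, ts.UTC().Format(time.RFC3339Nano))

	mut := bigtable.NewMutation()

	// col_1: the float64, stored here as 8 big-endian bytes.
	buf := make([]byte, 8)
	binary.BigEndian.PutUint64(buf, math.Float64bits(val1))
	mut.Set("cf", "col_1", bigtable.Now(), buf)

	// col_2: the 63-character string.
	mut.Set("cf", "col_2", bigtable.Now(), []byte(val2))

	return tbl.Apply(ctx, rowKey, mut)
}
```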
Specific ranges within this series of events are grouped together and loosely associated with an object we'll call an operation:
{
"id": 123,
"startDate": "2019-07-15T14:02:12.335+02:00",
"endDate": "2019-07-15T14:02:16.335+02:00"
}
So you could say that an operation is a time window over the events, and a single operation may cover anywhere from 10 to 1,000 events.
When I want to display this data to the user, I first query the operation objects and then execute one BigTable query per operation to fetch the events it covers.
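The per-operation read looks roughly like the following sketch. Here seriesPrefix and formatTS are simplified placeholders, and it assumes each operation maps to a single uuid1!uuid2 prefix:

```go
package events

import (
	"context"
	"fmt"
	"time"

	"cloud.google.com/go/bigtable"
)

// Operation mirrors the JSON object shown above.
type Operation struct {
	ID        int64
	StartDate time.Time
	EndDate   time.Time
}

// formatTS is a stand-in for however the timestamp is actually encoded in the row key.
func formatTS(t time.Time) string {
	return t.UTC().Format(time.RFC3339Nano)
}

// readEventsForOperation runs one small scan per operation, bounded by the
// operation's time window. NewRange is [start, end), so the end key may need
// padding if the very last event should be included.
func readEventsForOperation(ctx context.Context, tbl *bigtable.Table, seriesPrefix string, op Operation) ([]bigtable.Row, error) {
	start := fmt.Sprintf("%s!%s", seriesPrefix, formatTS(op.StartDate))
	end := fmt.Sprintf("%s!%s", seriesPrefix, formatTS(op.EndDate))

	var rows []bigtable.Row
	err := tbl.ReadRows(ctx, bigtable.NewRange(start, end), func(row bigtable.Row) bool {
		rows = append(rows, row)
		return true // keep scanning until the end of the range
	})
	return rows, err
}
```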
Through monitoring I've found that each BigTable query (against a development instance, mind you) takes anywhere from 20 ms to 300 ms.
This got me wondering: given BigTable's architecture, does it make sense to execute many small, individual queries?
Or does it make more sense to execute one big query that covers the whole range of operations and then divide the events among their respective operations in my application?
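In other words, something like the sketch below, which reuses the Operation type and formatTS helper from the previous snippet (parseEventTS is another placeholder for pulling the timestamp back out of the row key, and it needs an extra "strings" import):

```go
// parseEventTS extracts the timestamp component from a "uuid1!uuid2!timestamp" row key.
func parseEventTS(rowKey string) (time.Time, bool) {
	parts := strings.Split(rowKey, "!")
	if len(parts) != 3 {
		return time.Time{}, false
	}
	t, err := time.Parse(time.RFC3339Nano, parts[2])
	return t, err == nil
}

// readEventsForOperations does a single scan over the union of all operation
// windows and buckets the rows into operations in memory.
func readEventsForOperations(ctx context.Context, tbl *bigtable.Table, seriesPrefix string, ops []Operation) (map[int64][]bigtable.Row, error) {
	if len(ops) == 0 {
		return nil, nil
	}

	// Overall window spanning every operation.
	minStart, maxEnd := ops[0].StartDate, ops[0].EndDate
	for _, op := range ops[1:] {
		if op.StartDate.Before(minStart) {
			minStart = op.StartDate
		}
		if op.EndDate.After(maxEnd) {
			maxEnd = op.EndDate
		}
	}

	start := fmt.Sprintf("%s!%s", seriesPrefix, formatTS(minStart))
	end := fmt.Sprintf("%s!%s", seriesPrefix, formatTS(maxEnd))

	byOperation := make(map[int64][]bigtable.Row)
	err := tbl.ReadRows(ctx, bigtable.NewRange(start, end), func(row bigtable.Row) bool {
		ts, ok := parseEventTS(row.Key())
		if !ok {
			return true
		}
		// Assign the event to every operation whose window contains it.
		for _, op := range ops {
			if !ts.Before(op.StartDate) && ts.Before(op.EndDate) {
				byOperation[op.ID] = append(byOperation[op.ID], row)
			}
		}
		return true
	})
	return byOperation, err
}
```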