douzhi1879 2019-04-05 16:36

Postgres: optimizing concurrent updates to the same row

THE PROBLEM

I'm working with PostgreSQL v10 + golang and have what I believe to be a very common SQL problem:

  • I have a table 'counters' with two integer columns, current_value and max_value.
  • Once current_value >= max_value, the request must be dropped; this limit is strict.
  • I have several Kubernetes pods that, for each API call, may increment current_value of the same row in the 'counters' table by 1 (in the worst case). This can be thought of as concurrent updates to the same row from distributed hosts.

In my current and naive implementation, multiple UPDATEs to the same row naturally block each other (the isolation level is 'read committed', if that matters). In the worst case, I have 10+ requests per second that update the same row. That creates a bottleneck and hurts performance, which I cannot afford.
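For concreteness, here is a minimal sketch of that naive version (the schema, the driver setup, and names like incrementNaive are illustrative, not my production code). The WHERE guard is what enforces the limit atomically; the problem is that the row lock it takes is held until the surrounding transaction commits:

    package counter

    import (
        "database/sql"
        "errors"
    )

    var ErrLimitReached = errors.New("counter limit reached")

    // incrementNaive bumps the counter inside the request's own transaction.
    // The WHERE guard enforces current_value < max_value atomically, but the
    // row lock taken by the UPDATE is held until the transaction commits, so
    // concurrent requests on the same counter serialize behind each other.
    func incrementNaive(tx *sql.Tx, counterID int64) error {
        res, err := tx.Exec(
            `UPDATE counters
                SET current_value = current_value + 1
              WHERE id = $1 AND current_value < max_value`, counterID)
        if err != nil {
            return err
        }
        n, err := res.RowsAffected()
        if err != nil {
            return err
        }
        if n == 0 {
            return ErrLimitReached // at the limit: drop the request
        }
        return nil
    }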


POSSIBLE SOLUTION

I have thought of several ways to resolve this, but they all sacrifice either integrity or performance. The only one that keeps both doesn't feel very clean for such a seemingly common problem:

As long as the counter's current_value is at a relatively safe distance from max_value (delta > 100), send the update request to a channel that is flushed every second or so by a worker, which aggregates the updates and issues them at once (sketched below). Otherwise (delta <= 100), perform the update within the transaction (and hit the bottleneck, but only in a minority of cases). This paces the update requests up until the point where the limit is almost reached, effectively resolving the bottleneck.
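Roughly what I have in mind for that worker (all names and the one-second interval are placeholders): callers send a counter ID to a channel while the delta is safe, and a single goroutine tallies the increments and flushes one aggregated UPDATE per counter:

    package counter

    import (
        "database/sql"
        "log"
        "time"
    )

    // batcher aggregates the "safe distance" increments (delta > 100) and
    // flushes them once per second, issuing one UPDATE per counter instead
    // of one per request. Requests close to the limit bypass it and update
    // synchronously, as described above.
    type batcher struct {
        incs chan int64 // counter IDs whose value should be incremented by 1
        db   *sql.DB
    }

    func (b *batcher) run() {
        pending := make(map[int64]int) // counter ID -> increments queued this tick
        tick := time.NewTicker(time.Second)
        defer tick.Stop()
        for {
            select {
            case id := <-b.incs:
                pending[id]++
            case <-tick.C:
                for id, n := range pending {
                    // LEAST caps the batched add so the aggregate can never
                    // overshoot max_value.
                    _, err := b.db.Exec(
                        `UPDATE counters
                            SET current_value = LEAST(current_value + $2, max_value)
                          WHERE id = $1`, id, n)
                    if err != nil {
                        log.Printf("flush counter %d: %v", id, err) // retried next tick
                        continue
                    }
                    delete(pending, id)
                }
            }
        }
    }

Capping the batched add with LEAST keeps the counter from overshooting, but requests already admitted within that one-second window can still be lost to the cap; that is exactly why this path only runs at a safe distance from the limit.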


This would probably work for resolving my problem. However, I can't help but think that there are better ways to address this.

I didn't find a great solution online, and even though my heuristic method would work, it feels unclean and lacks integrity.

Creative solutions are very welcome!


Edit:

Thanks to @laurenz-albe's advice, I shortened the window between the UPDATE, where the row gets locked, and the COMMIT of the transaction. Pushing all UPDATEs to the end of the transaction did the trick: I can now process over 100 requests per second while maintaining integrity!
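In code, the fix was essentially a reordering (a simplified sketch, reusing incrementNaive from the earlier snippet; doWork stands in for the rest of the request handling):

    package counter

    import "database/sql"

    // handleRequest does everything else first and takes the row lock last,
    // so the window between the UPDATE and the COMMIT is tiny. Before the
    // fix, the UPDATE came first and the lock was held for the whole
    // transaction, serializing concurrent requests on the same counter.
    func handleRequest(db *sql.DB, counterID int64, doWork func(*sql.Tx) error) error {
        tx, err := db.Begin()
        if err != nil {
            return err
        }
        defer tx.Rollback() // no-op once Commit has succeeded

        if err := doWork(tx); err != nil { // the slow part, done before locking
            return err
        }
        if err := incrementNaive(tx, counterID); err != nil { // lock taken here...
            return err
        }
        return tx.Commit() // ...and released here, moments later
    }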


1 Answer

  • duan0414 2019-04-05 20:07

    10 concurrent updates per second is a ridiculously light load. Just make sure that the transactions are as short as possible, and it won't be a problem.

    Your biggest problem will be VACUUM, as lots of updates are the worst possible workload for PostgreSQL. Make sure you create the table with a fillfactor of 70 or so and that current_value is not indexed, so that you get HOT updates.
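    Something like the following sketch (I'm guessing the column definitions from your question; the point is the fillfactor storage parameter and the absence of any index touching current_value):

        package counter

        import "database/sql"

        // createCounters creates the table with ~30% free space per heap page
        // (fillfactor 70), so an UPDATE can put the new row version on the
        // same page. With no index on current_value, such updates are HOT:
        // no index entries are written and dead row versions are cleaned up
        // cheaply, which keeps VACUUM pressure low.
        func createCounters(db *sql.DB) error {
            _, err := db.Exec(`
                CREATE TABLE counters (
                    id            bigint PRIMARY KEY,
                    current_value integer NOT NULL DEFAULT 0,
                    max_value     integer NOT NULL
                ) WITH (fillfactor = 70)`)
            return err
        }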

    Accepted answer
