douzhi1879 2019-04-05 16:36
Viewed 914 times · Accepted

Postgres: optimizing concurrent updates to the same row

THE PROBLEM

I'm working with PostgreSQL v10 + golang and have what I believe to be a very common SQL problem:

  • I have a table 'counters' with two integer columns, current_value and max_value.
  • The hard requirement: once current_value >= max_value, the request must be dropped.
  • I have several Kubernetes pods, and each API call may increment current_value of the same row in the 'counters' table by 1 (in the worst case) -- it can be thought of as concurrent updates to the same DB row from distributed hosts.

In my current, naive implementation, multiple UPDATEs to the same row naturally block each other (the isolation level is 'read committed', if that matters). In the worst case, I have 10+ requests per second that would update the same row. That creates a bottleneck and hurts performance, which I cannot afford.
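Concretely, the naive version is roughly the sketch below (the id column, the single-statement form, and the tryIncrement helper are illustrative, not my exact code):

```go
package counter

import "database/sql"

// tryIncrement sketches a single atomic UPDATE that both checks the limit
// and increments the counter; RowsAffected tells us whether the increment
// happened. The id column is an assumed primary key.
func tryIncrement(db *sql.DB, counterID int64) (bool, error) {
	res, err := db.Exec(`
		UPDATE counters
		   SET current_value = current_value + 1
		 WHERE id = $1
		   AND current_value < max_value`, counterID)
	if err != nil {
		return false, err
	}
	n, err := res.RowsAffected()
	if err != nil {
		return false, err
	}
	return n == 1, nil // n == 0: limit reached, drop the request
}
```

The statement is correct on its own, but every concurrent caller hitting the same row still queues behind the row lock until the previous transaction commits.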


POSSIBLE SOLUTION

I thought of several ideas to resolve this, but they all sacrifice either integrity or performance. The only one that keeps both doesn't feel very clean for such a seemingly common problem:

As long as the counter's current_value is within a relatively safe distance from max_value (delta > 100), send the update request to a channel that is flushed every second or so by a worker that aggregates the updates and issues them at once. Otherwise (delta <= 100), do the update within the transaction (and hit the bottleneck, but only in a minority of cases). This paces the update requests up until the point where the limit is almost reached, effectively resolving the bottleneck.
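A minimal sketch of that worker, assuming an id primary key; the delta > 100 routing, graceful shutdown, and retries are left out:

```go
package counter

import (
	"database/sql"
	"log"
	"time"
)

// incrementBuffer sketches the batching idea: increments for "safe"
// counters (delta > 100) are queued on a channel, and one worker flushes
// the aggregated deltas once per second, one UPDATE per counter.
type incrementBuffer struct {
	ch chan int64 // counter ids; the id column is an assumed primary key
}

func newIncrementBuffer(db *sql.DB) *incrementBuffer {
	b := &incrementBuffer{ch: make(chan int64, 1024)}
	go func() {
		pending := make(map[int64]int) // counter id -> accumulated delta
		tick := time.NewTicker(time.Second)
		defer tick.Stop()
		for {
			select {
			case id := <-b.ch:
				pending[id]++
			case <-tick.C:
				for id, delta := range pending {
					// One aggregated UPDATE per counter instead of
					// `delta` individual row locks.
					if _, err := db.Exec(
						`UPDATE counters
						    SET current_value = current_value + $2
						  WHERE id = $1`, id, delta); err != nil {
						log.Println("flush failed:", err)
					}
				}
				pending = make(map[int64]int)
			}
		}
	}()
	return b
}

// Add queues one increment; callers on the delta <= 100 path bypass this
// and update synchronously inside their transaction.
func (b *incrementBuffer) Add(counterID int64) { b.ch <- counterID }
```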


This would probably resolve my problem. However, I can't help but think that there are better ways to address it.

I didn't find a great solution online, and even though my heuristic would work, it feels unclean and lacks integrity.

Creative solutions are very welcome!


Edit:

Thanks to @laurenz-albe's advice, I shortened the window between the UPDATE, where the row gets locked, and the COMMIT of the transaction. Pushing all UPDATEs to the end of the transaction seems to have done the trick. Now I can process over 100 requests per second and maintain integrity!
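A rough sketch of the reordering (doOtherWork is a hypothetical stand-in for whatever else the transaction does):

```go
package counter

import (
	"context"
	"database/sql"
)

// handleRequest sketches the fix: do all other transactional work first,
// then take the row lock with the UPDATE as the last statement before
// COMMIT, so the lock is held as briefly as possible.
func handleRequest(ctx context.Context, db *sql.DB, counterID int64) error {
	tx, err := db.BeginTx(ctx, nil)
	if err != nil {
		return err
	}
	defer tx.Rollback() // no-op if Commit succeeds

	if err := doOtherWork(ctx, tx); err != nil {
		return err
	}

	// The row lock is taken here and released by COMMIT a moment later.
	if _, err := tx.ExecContext(ctx, `
		UPDATE counters
		   SET current_value = current_value + 1
		 WHERE id = $1
		   AND current_value < max_value`, counterID); err != nil {
		return err
	}
	return tx.Commit()
}

func doOtherWork(ctx context.Context, tx *sql.Tx) error { return nil } // placeholder
```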


1 Answer

  • duan0414 2019-04-05 20:07

    10 concurrent updates per second is ridiculously little. Just make sure that the transactions are as short as possible, and it won't be a problem.

Your biggest problem will be VACUUM, as lots of updates are the worst possible workload for PostgreSQL. Make sure you create the table with a fillfactor of 70 or so and that current_value is not indexed, so that you get HOT updates.
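Something like the following sketch (column types assumed):

```go
package counter

import "database/sql"

// createCounters sketches the suggested table setup: fillfactor 70 leaves
// free space in each page so repeated updates can be HOT (heap-only tuple)
// updates, and current_value is deliberately left unindexed. Column names
// are from the question; types are assumptions.
func createCounters(db *sql.DB) error {
	_, err := db.Exec(`
		CREATE TABLE counters (
			id            bigint PRIMARY KEY,
			current_value integer NOT NULL,
			max_value     integer NOT NULL
		) WITH (fillfactor = 70)`)
	return err
}
```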

This answer was accepted by the asker.
