douzhi1879 2019-04-05 16:36
920 views
Accepted

Postgres: Optimizing concurrent updates to the same row

THE PROBLEM

I'm working with PostgreSQL v10 + golang and have what I believe to be a very common SQL problem:

  • I have a table 'counters' with integer columns current_value and max_value.
  • Strictly, once current_value >= max_value, I would like to drop the request.
  • I have several Kubernetes pods that, for each API call, might increment current_value of the same row (in the worst case) in the 'counters' table by 1. This can be thought of as concurrent updates to the same DB from distributed hosts.

In my current, naive implementation, multiple UPDATEs to the same row naturally block each other (the isolation level is 'read committed', if that matters). In the worst case, I have 10+ requests per second that would update the same row. That creates a bottleneck and hurts performance, which I cannot afford.
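For illustration, the naive per-request update looks roughly like this in Go (a simplified sketch; the id column name and error value are illustrative, not my actual code):

    package counter

    import (
        "database/sql"
        "errors"
    )

    // ErrLimitReached signals that the counter is already at max_value.
    var ErrLimitReached = errors.New("counter limit reached")

    // incrementCounter issues one UPDATE per API call. The conditional
    // WHERE clause makes Postgres itself drop the increment once
    // current_value has reached max_value, preserving integrity.
    func incrementCounter(db *sql.DB, counterID int64) error {
        res, err := db.Exec(
            `UPDATE counters
                SET current_value = current_value + 1
              WHERE id = $1
                AND current_value < max_value`,
            counterID,
        )
        if err != nil {
            return err
        }
        affected, err := res.RowsAffected()
        if err != nil {
            return err
        }
        if affected == 0 {
            // Row missing or already at the limit: drop the request.
            return ErrLimitReached
        }
        return nil
    }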


POSSIBLE SOLUTION

I thought of several ideas to resolve this, but they all sacrifice integrity or performance. The only one that keeps both doesn't sound very clean for such a seemingly common problem:

As long as the counter's current_value is within a relatively safe distance from max_value (delta > 100), send the update request to a channel that would be flushed every second or so by a worker that aggregates the updates and issues them at once. Otherwise (delta <= 100), do the update in the context of the transaction (and hit the bottleneck, but only for a minority of cases). This paces the update requests until the limit is almost reached, effectively resolving the bottleneck.
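A rough Go sketch of that batching worker (the once-per-second flush interval matches the idea above; the synchronous near-limit path is omitted, and the id column name is an assumption):

    package counter

    import (
        "database/sql"
        "log"
        "time"
    )

    // RunAggregator receives counter IDs on a channel, accumulates the
    // deltas in memory, and flushes them once per second with a single
    // UPDATE per counter instead of one UPDATE per request.
    func RunAggregator(db *sql.DB, increments <-chan int64) {
        pending := make(map[int64]int) // counterID -> accumulated delta
        ticker := time.NewTicker(time.Second)
        defer ticker.Stop()

        for {
            select {
            case id, ok := <-increments:
                if !ok {
                    return
                }
                pending[id]++
            case <-ticker.C:
                for id, delta := range pending {
                    _, err := db.Exec(
                        `UPDATE counters
                            SET current_value = current_value + $2
                          WHERE id = $1`,
                        id, delta,
                    )
                    if err != nil {
                        // Keep the delta; it will be retried next tick.
                        log.Printf("flush counter %d: %v", id, err)
                        continue
                    }
                    delete(pending, id)
                }
            }
        }
    }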


This would probably resolve my problem. However, I can't help but think that there are better ways to address this.

I didn't find a great solution online, and even though my heuristic method would work, it feels unclean and lacks integrity.

Creative solutions are very welcome!


Edit:

Thanks to @laurenz-albe's advice, I tried to shorten the window between the UPDATE, where the row gets locked, and the COMMIT of the transaction. Pushing all UPDATEs to the end of the transaction seems to have done the trick. Now I can process over 100 requests/second and maintain integrity!
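For reference, the pattern now looks roughly like this (a simplified sketch; handleRequest and the placeholder comment stand in for the real request handling):

    package counter

    import (
        "context"
        "database/sql"
    )

    // handleRequest does all other work first, so the row lock on
    // 'counters' is only held from the UPDATE to the COMMIT.
    func handleRequest(ctx context.Context, db *sql.DB, counterID int64) error {
        tx, err := db.BeginTx(ctx, nil)
        if err != nil {
            return err
        }
        defer tx.Rollback() // safe no-op after a successful Commit

        // ... all reads, validation, and other writes happen here,
        // before the row lock on 'counters' is taken ...

        // The lock window is now just UPDATE -> COMMIT.
        if _, err := tx.ExecContext(ctx,
            `UPDATE counters
                SET current_value = current_value + 1
              WHERE id = $1
                AND current_value < max_value`,
            counterID,
        ); err != nil {
            return err
        }
        return tx.Commit()
    }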


1 Answer

  • duan0414 2019-04-05 20:07

    10 concurrent updates per second is ridiculously little. Just make sure that the transactions are as short as possible, and it won't be a problem.

    Your biggest problem will be VACUUM, as lots of updates are the worst possible workload for PostgreSQL. Make sure you create the table with a fillfactor of 70 or so, and that current_value is not indexed, so that you get HOT updates.
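    Something like this, for example (a sketch in Go; the column names come from the question, the types and key are assumptions):

        package counter

        import "database/sql"

        // createCountersTable sets fillfactor to 70 so each page keeps
        // free space for updated row versions, and defines no index on
        // current_value, both of which allow HOT updates.
        func createCountersTable(db *sql.DB) error {
            _, err := db.Exec(`
                CREATE TABLE counters (
                    id            bigserial PRIMARY KEY,
                    current_value integer NOT NULL DEFAULT 0,
                    max_value     integer NOT NULL
                ) WITH (fillfactor = 70)`)
            return err
        }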

