dongzhe3171 2019-03-03 15:31
60 views
Accepted

How do I handle hourly Bigtable connection closures?

I have Go services with a persistent Bigtable client. The services make hundreds of read/write operations on Bigtable per second.

Every hour after the service boots, I see hundreds of errors like this one:

    Retryable error: rpc error: code = Unavailable desc = the connection is draining, retrying in 74.49241ms

The errors are accompanied by an increase in processing time that I can't tolerate while they occur.

I was able to figure out that the Bigtable client uses a pool of gRPC connections.

It seems the Bigtable gRPC server has a connection maxAge of 1 hour, which would explain the error above and the increased processing time during reconnection.

A maxAgeGrace configuration is supposed to give additional time to complete in-flight operations and to avoid all pool connections terminating at the same time.
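
For context, maxAge and maxAgeGrace correspond to standard gRPC server-side keepalive parameters. A server would set them roughly like this (illustrative values only; they are not configurable on the managed Bigtable service):

    package main

    import (
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/keepalive"
    )

    func newServer() *grpc.Server {
        // Illustrative values; Bigtable's actual server-side settings are not
        // user-configurable.
        return grpc.NewServer(grpc.KeepaliveParams(keepalive.ServerParameters{
            MaxConnectionAge:      time.Hour,       // drain connections after ~1 hour
            MaxConnectionAgeGrace: 5 * time.Minute, // grace period to finish in-flight RPCs
        }))
    }

    func main() { _ = newServer() }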

I increased the connection pool size from the default of 4 to 12, with no real benefit.
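
For reference, in the Go client the pool size is set when the client is created; this is roughly what I did (project and instance names are placeholders):

    package main

    import (
        "context"
        "log"

        "cloud.google.com/go/bigtable"
        "google.golang.org/api/option"
    )

    func main() {
        ctx := context.Background()
        // Raise the gRPC connection pool from the default of 4 to 12.
        client, err := bigtable.NewClient(ctx, "my-project", "my-instance",
            option.WithGRPCConnectionPool(12))
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        // ... use client ...
    }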

How do I prevent processing time from increasing during reconnections, and these errors from happening, given that my traffic will keep growing?


2 answers

  • drc15469 2019-03-18 14:30

    Cloud Bigtable clients use a pool of gRPC connections to connect to Bigtable. The Java client uses a channel pool per HBase connection, and each channel pool has multiple gRPC connections. gRPC connections are shut down every hour (or after 15 minutes of inactivity), and the underlying gRPC infrastructure performs a reconnect. The first request on each new connection performs a number of setup tasks, such as TLS handshakes and warming server-side caches. These operations are fairly expensive and may cause the latency spikes.

    Bigtable is designed to be a high-throughput system, and with sustained query volume the amortized cost of these reconnections should be negligible. However, if the client application has very low QPS or long idle periods between queries and cannot tolerate these latency spikes, it can create a new HBase connection (Java) or a new CBT client (Go) every 30-40 minutes and run no-op calls on the new connection/client (exists on the HBase client, or read a small row) to prime the underlying gRPC connections, one call per connection; for HBase the default is twice the number of CPUs, and Go has 4 connections by default. Once primed, you can swap in the new connection/client for the main operations in the client application. Here is sample Go code for this workaround.
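
    The linked sample is not reproduced here; the following is a minimal sketch of the idea, assuming the cloud.google.com/go/bigtable client, placeholder project/instance names, and a small hypothetical table named "warmup" used only for the priming reads.

    package main

    import (
        "context"
        "log"
        "sync/atomic"
        "time"

        "cloud.google.com/go/bigtable"
        "google.golang.org/api/option"
    )

    const poolSize = 4 // must match the pool size passed to NewClient

    // current holds the client that application code should Load() and use.
    var current atomic.Value

    // primeAndSwap creates a fresh client, issues one cheap read per pooled
    // connection to pay the TLS handshake and cache warm-up cost off the hot
    // path, then swaps the fresh client in and closes the old one.
    func primeAndSwap(ctx context.Context, project, instance string) error {
        fresh, err := bigtable.NewClient(ctx, project, instance,
            option.WithGRPCConnectionPool(poolSize))
        if err != nil {
            return err
        }
        tbl := fresh.Open("warmup") // hypothetical small table used only for priming
        for i := 0; i < poolSize; i++ {
            // Requests are spread round-robin over the pool, so poolSize reads
            // touch roughly every connection once.
            if _, err := tbl.ReadRow(ctx, "warmup-row"); err != nil {
                log.Printf("warm-up read %d: %v", i, err)
            }
        }
        old, _ := current.Load().(*bigtable.Client)
        current.Store(fresh)
        if old != nil {
            old.Close() // simplified: in production, let in-flight requests drain first
        }
        return nil
    }

    func main() {
        ctx := context.Background()
        if err := primeAndSwap(ctx, "my-project", "my-instance"); err != nil {
            log.Fatal(err)
        }
        // Re-prime and swap well before the server's ~1 hour maxAge kicks in.
        for range time.Tick(35 * time.Minute) {
            if err := primeAndSwap(ctx, "my-project", "my-instance"); err != nil {
                log.Printf("re-prime failed: %v", err)
            }
        }
    }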

    This answer was accepted by the asker.
