dongzhe3171 2019-03-03 15:31
Viewed 60 times
Accepted

How do I handle hourly Bigtable connection closures?

I have Golang services with a persistent Bigtable client. The services make hundreds of read/write operations on Bigtable per second.

Every hour from the time the service boots, I experience hundreds of errors like this one:

Retryable error: rpc error: code = Unavailable desc =
 the connection is draining, retrying in 74.49241ms

The errors are followed by an increase in processing time that I cannot tolerate when they occur.

I was able to figure out that the Bigtable client uses a pool of gRPC connections.

It seems that the Bigtable gRPC server has a connection maxAge of one hour, which would explain the errors above and the increased processing time during reconnection.

A maxAgeGrace configuration is supposed to give additional time to complete in-flight operations and to avoid all of the pool's connections terminating at the same time.

I increased the connection pool size from the default of 4 to 12, with no real benefit.
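For reference, the Go client's pool size is set at construction time. A minimal sketch, assuming the cloud.google.com/go/bigtable and google.golang.org/api/option packages; the project and instance names are placeholders:

```go
package main

import (
	"context"
	"log"

	"cloud.google.com/go/bigtable"
	"google.golang.org/api/option"
)

func main() {
	ctx := context.Background()
	// WithGRPCConnectionPool raises the number of underlying gRPC
	// connections from the Go client's default of 4. This spreads load
	// across more connections but does not change the server-side
	// one-hour connection maxAge, so each connection still gets
	// recycled hourly.
	client, err := bigtable.NewClient(ctx, "my-project", "my-instance",
		option.WithGRPCConnectionPool(12))
	if err != nil {
		log.Fatalf("bigtable.NewClient: %v", err)
	}
	defer client.Close()
}
```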

How do I prevent processing time from increasing during reconnections, and these errors from happening, given that my traffic will keep growing?


2 answers

  • drc15469 2019-03-18 14:30

    Cloud Bigtable clients use a pool of gRPC connections to connect to Bigtable. The Java client uses a channel pool per HBase connection, and each channel pool has multiple gRPC connections. gRPC connections are shut down every hour (or after 15 minutes of inactivity), and the underlying gRPC infrastructure performs a reconnect. The first request on each new connection performs a number of setup tasks, such as TLS handshakes and warming server-side caches. These operations are fairly expensive and may cause the latency spikes.

    Bigtable is designed to be a high-throughput system, and with sustained query volume the amortized cost of these reconnections should be negligible. However, if the client application has very low QPS or long idle periods between queries and cannot tolerate these latency spikes, it can create a new HBase connection (Java) or a new CBT client (Go) every 30-40 minutes and run no-op calls (exists on the HBase client, or reading a small row) on the new connection/client to prime the underlying gRPC connections: one call per connection, where the HBase default is twice the number of CPUs and the Go client has 4 connections by default. Once primed, you can swap in the new connection/client for the main operations in the client application.
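    A minimal sketch of that client-swap workaround in Go, assuming the cloud.google.com/go/bigtable package; the project, instance, table, and row names are placeholders, and primeClient/refreshLoop are illustrative helpers, not part of the client library:

    ```go
    package main

    import (
    	"context"
    	"log"
    	"sync/atomic"
    	"time"

    	"cloud.google.com/go/bigtable"
    )

    // clientHolder holds the active *bigtable.Client and lets us swap it
    // atomically, so in-flight requests never see a half-initialized client.
    var clientHolder atomic.Value

    // primeClient runs one cheap read per pooled gRPC connection so the
    // TLS handshakes and server-side cache warming happen before the
    // client serves real traffic.
    func primeClient(ctx context.Context, c *bigtable.Client, table, rowKey string, poolSize int) {
    	tbl := c.Open(table)
    	for i := 0; i < poolSize; i++ {
    		if _, err := tbl.ReadRow(ctx, rowKey); err != nil {
    			log.Printf("priming read failed: %v", err)
    		}
    	}
    }

    // refreshLoop builds and primes a fresh client every 40 minutes (inside
    // the server's one-hour maxAge), swaps it in, and closes the old client
    // after a grace period for requests still using it.
    func refreshLoop(ctx context.Context, project, instance, table, rowKey string) {
    	for range time.Tick(40 * time.Minute) {
    		next, err := bigtable.NewClient(ctx, project, instance)
    		if err != nil {
    			log.Printf("bigtable.NewClient: %v", err)
    			continue
    		}
    		primeClient(ctx, next, table, rowKey, 4) // Go default: 4 connections
    		old, _ := clientHolder.Swap(next).(*bigtable.Client)
    		if old != nil {
    			time.AfterFunc(time.Minute, func() { old.Close() })
    		}
    	}
    }

    func main() {
    	ctx := context.Background()
    	first, err := bigtable.NewClient(ctx, "my-project", "my-instance")
    	if err != nil {
    		log.Fatalf("bigtable.NewClient: %v", err)
    	}
    	primeClient(ctx, first, "my-table", "warmup-row", 4)
    	clientHolder.Store(first)
    	go refreshLoop(ctx, "my-project", "my-instance", "my-table", "warmup-row")
    	// Serve traffic, fetching the active client on each request with
    	// clientHolder.Load().(*bigtable.Client).
    	select {}
    }
    ```

    The one-minute grace before closing the old client is an arbitrary choice; pick something comfortably longer than your slowest request.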

    This answer was accepted by the asker.
