dsdtszi0520538
2019-08-06 11:46
Viewed 917 times
Accepted

How does flow control work in a client-streaming gRPC service?

I would like to know, how flow control works in a client-streaming gRPC service in Go.

Specifically, I am interested in knowing when a call to the stream.SendMsg() function on the client side will block. According to the documentation:

SendMsg() blocks until:

  • There is sufficient flow control to schedule m with the transport, or ...

So what is the specification for the flow control mechanism of the stream? For example, if the server-side code responsible for reading messages from the stream isn't reading them fast enough, at what point will calls to SendMsg() block?

Is there some kind of backpressure mechanism implemented for the server to tell the client that it is not ready to receive more data? In the meantime, where are the messages that were successfully sent before the backpressure signal queued?



1 answer

  • douyan8027 2019-08-07 20:19
Accepted

gRPC flow control is based on HTTP/2 flow control: https://http2.github.io/http2-spec/#FlowControl

There will be backpressure. A message is only actually sent when there is enough flow-control window for it; otherwise SendMsg() will block.

The signal from the receiving side is not there to add backpressure; it releases backpressure. It's like saying "now I'm ready to receive another 1MB of messages, send them".

