I would like to know how flow control works in a client-streaming gRPC service in Go.
Specifically, I am interested in knowing when a call to the stream.SendMsg()
function on the client side will block. According to the documentation:
SendMsg() blocks until:
- There is sufficient flow control to schedule m with the transport, or ...
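For concreteness, here is a minimal sketch of the kind of client I have in mind. The service name `Uploader`, the `Upload` method, and the `Chunk`/`Summary` messages are just placeholders for illustration; the generated `stream.Send()` is a thin wrapper around the `SendMsg()` call quoted above:

```go
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	pb "example.com/uploader/pb" // hypothetical generated package
)

func main() {
	conn, err := grpc.Dial("localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	client := pb.NewUploaderClient(conn)
	stream, err := client.Upload(context.Background())
	if err != nil {
		log.Fatalf("Upload: %v", err)
	}

	for i := 0; i < 10000; i++ {
		// This is the call I expect to block at some point if the server
		// cannot keep up: Send() just forwards to ClientStream.SendMsg().
		if err := stream.Send(&pb.Chunk{Data: make([]byte, 64*1024)}); err != nil {
			log.Fatalf("Send: %v", err)
		}
	}

	summary, err := stream.CloseAndRecv()
	if err != nil {
		log.Fatalf("CloseAndRecv: %v", err)
	}
	log.Printf("server reports %d bytes received", summary.GetTotalBytes())
}
```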
So what is the specification of the stream's flow control mechanism? For example, if the server-side code responsible for reading messages from the stream isn't reading them fast enough (as in the sketch below), at what point will calls to SendMsg() start to block?
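A slow server-side handler for the same hypothetical service might look like this; the `time.Sleep` simply simulates a consumer that cannot keep up with the client's send rate:

```go
package main

import (
	"io"
	"log"
	"net"
	"time"

	"google.golang.org/grpc"

	pb "example.com/uploader/pb" // hypothetical generated package
)

type uploaderServer struct {
	pb.UnimplementedUploaderServer
}

func (s *uploaderServer) Upload(stream pb.Uploader_UploadServer) error {
	var total int64
	for {
		chunk, err := stream.Recv()
		if err == io.EOF {
			// Client finished sending; report how much we read.
			return stream.SendAndClose(&pb.Summary{TotalBytes: total})
		}
		if err != nil {
			return err
		}
		total += int64(len(chunk.GetData()))

		// Deliberately slow consumer: while the handler sleeps here, the
		// client keeps calling SendMsg(). At what point do those calls start
		// blocking, and where are the already-sent messages held meanwhile?
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	lis, err := net.Listen("tcp", ":50051")
	if err != nil {
		log.Fatalf("listen: %v", err)
	}
	srv := grpc.NewServer()
	pb.RegisterUploaderServer(srv, &uploaderServer{})
	log.Fatal(srv.Serve(lis))
}
```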
Is there some kind of backpressure mechanism by which the server tells the client that it is not ready to receive more data? And in the meantime, where are all the messages that were successfully sent before that backpressure signal queued?