I am working on a project splitting services off of a monolithic codebase. The three services I am working on right now are a login/authentication service, a service providing a few REST API endpoints, and a data retrieval service feeding the REST API. These services need to exchange data with each other: in particular, the data retrieval service needs to send data to the REST API service, and the REST API service needs to exchange authentication requests/responses with the login service.
The technology of choice in the Go world seems to be protobufs and gRPC. I have been looking into using them, but the workflow seems really inconvenient, so I wonder if I am just doing things wrong.
For example, my data retrieval service gets records from an RDBMS, and my REST API serves that data as JSON. Normally, I would define a "model" struct for each type of record in the database/API, put both database and JSON struct tags on the fields, use something like sqlx to scan query results into structs, and use encoding/json to serialize structs into JSON. But if I want to pass the data around using gRPC and protobufs, this whole setup goes out the window. Since protobuf generates its own struct types, I would have to manually implement conversion from SQL rows into protobufs and from protobufs into JSON for every single message type I define. Implementing the conversions is not hard, but it introduces more opportunities for bugs and code fragility, and it feels like reinventing the wheel. Here is a minimal sketch of what I mean:
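(The Record type, the records table, and the DSN below are made up for illustration; RecordPB is a stand-in for a protoc-generated type.)

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"

	"github.com/jmoiron/sqlx"
	_ "github.com/lib/pq"
)

// One struct serves both layers: sqlx reads the `db` tags when scanning
// rows, and encoding/json reads the `json` tags when serializing.
type Record struct {
	ID    int64  `db:"id" json:"id"`
	Name  string `db:"name" json:"name"`
	Email string `db:"email" json:"email"`
}

func listRecords(db *sqlx.DB) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var recs []Record
		if err := db.Select(&recs, `SELECT id, name, email FROM records`); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(recs)
	}
}

// Stand-in for a protoc-generated message type; in real code this would
// come from the generated package, and I could not add tags to it.
type RecordPB struct {
	Id    int64
	Name  string
	Email string
}

// The kind of boilerplate conversion I would have to write and maintain
// for every message type once gRPC sits between the services.
func toProto(r Record) *RecordPB {
	return &RecordPB{Id: r.ID, Name: r.Name, Email: r.Email}
}

func main() {
	db := sqlx.MustConnect("postgres", "postgres://localhost/app?sslmode=disable")
	http.HandleFunc("/records", listRecords(db))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```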
And this seems like it should be a very common problem. Am I missing some obvious solution?