dptrmt4366 2014-05-14 20:03

According to this article, the App Engine front-end and the playground back-end communicate through RPC calls. Multiple instances of both the App Engine front-end and the playground back-end can be created to support scaling.

[Figure: Playground Infrastructure Overview]

I am wondering what pattern(s) exist to load-balance work between front-end requests and back-end instances while keeping the RPC model.

One solution may be to use one global work queue into which tasks are put with a 'Reply-To' header. This header would point to a per-front-end-instance queue where responses are placed, along the lines of the following schema (from the RabbitMQ RPC tutorial), with rpc_queue shared between back-end instances: [Figure: RabbitMQ RPC pattern]
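To make the pattern concrete, here is a minimal in-process sketch of the reply-to RPC idea, with Python `queue.Queue` objects standing in for RabbitMQ queues (the function names, the doubling "work", and the correlation-id handling are illustrative assumptions, not the playground's actual protocol):

```python
import queue
import threading
import uuid

# Shared work queue: the "rpc_queue" all back-end instances consume from.
rpc_queue = queue.Queue()
# One reply queue per front-end instance, keyed by instance name.
reply_queues = {}

def backend_worker():
    """A back-end instance: pops tasks, does the work, replies to 'reply_to'."""
    while True:
        task = rpc_queue.get()
        result = task["payload"] * 2  # stand-in for the real work
        reply_queues[task["reply_to"]].put(
            {"correlation_id": task["correlation_id"], "result": result})

def frontend_call(frontend_name, payload, timeout=5):
    """A front-end instance: publishes a task, then blocks on its own reply queue."""
    reply_queues.setdefault(frontend_name, queue.Queue())
    corr_id = str(uuid.uuid4())
    rpc_queue.put({"payload": payload, "reply_to": frontend_name,
                   "correlation_id": corr_id})
    reply = reply_queues[frontend_name].get(timeout=timeout)
    assert reply["correlation_id"] == corr_id  # match reply to request
    return reply["result"]

# Several back-end workers can consume the same shared queue, which is
# exactly what gives the load balancing.
threading.Thread(target=backend_worker, daemon=True).start()
print(frontend_call("frontend-1", 21))  # → 42
```

Scaling the back end then just means starting more `backend_worker` threads (or, with a real broker, more consumer processes) on the same shared queue.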

I am not sure this would be a good approach, especially since the shared queue becomes a single point of failure: if it goes offline, the whole system fails (but how should this be handled?).
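One common mitigation is broker-side replication (e.g. a RabbitMQ cluster with mirrored or quorum queues) combined with client-side failover across several endpoints. A minimal sketch of the client side, with dictionaries standing in for broker connections (the broker names and the `up` flag are purely illustrative assumptions):

```python
import queue

# Hypothetical in-process stand-ins for two broker endpoints; a real
# deployment would use a replicated broker cluster instead.
brokers = {
    "broker-a": {"up": False, "queue": queue.Queue()},  # simulated outage
    "broker-b": {"up": True,  "queue": queue.Queue()},
}

def publish_with_failover(task, endpoints=("broker-a", "broker-b")):
    """Try each broker endpoint in order; return the name of the one that
    accepted the task, or raise if every endpoint is down."""
    for name in endpoints:
        broker = brokers[name]
        if not broker["up"]:
            continue  # a real client would catch a connection error here
        broker["queue"].put(task)
        return name
    raise RuntimeError("all broker endpoints are down")

print(publish_with_failover({"payload": 1}))  # → broker-b
```

This only addresses publishing; consumers need the same failover logic, and the replicated queues must agree on which messages were delivered.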

Thank you.


1 answer

  • dongxi7704 2014-05-19 21:02

    As an answer and a follow-up to the comments I received on the first post, I developed Indenter, a small proof of concept based on the proposed idea of a service-discovery daemon (I use etcd instead of ZooKeeper for simplicity, however).

    I wrote an article about it and released the code in case someone is interested one day:

    Indenter: a scalable, fault-tolerant, distributed web service copying the go playground architecture.
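    The service-discovery idea can be sketched in a few lines: back-ends register themselves under a key with a TTL lease and heartbeat to keep it alive, and front-ends discover whichever back-ends still hold a live lease. Below, an in-memory dict stands in for the etcd key space (the key names, TTL value, and helper functions are illustrative assumptions, not Indenter's actual API):

```python
import time

# Hypothetical in-memory stand-in for an etcd key space; real code would
# write keys like /backends/<id> with a lease (TTL) via an etcd client.
registry = {}  # backend_id -> (address, lease expiry timestamp)
TTL = 1.0      # seconds; back-ends must re-register before expiry

def register(backend_id, address, now=None):
    """A back-end heartbeats by re-registering, which renews its lease."""
    now = time.monotonic() if now is None else now
    registry[backend_id] = (address, now + TTL)

def discover(now=None):
    """A front-end lists the addresses of back-ends with unexpired leases."""
    now = time.monotonic() if now is None else now
    return [addr for addr, expiry in registry.values() if expiry > now]

register("pg-1", "10.0.0.1:8080", now=0.0)
register("pg-2", "10.0.0.2:8080", now=0.0)
print(discover(now=0.5))  # → ['10.0.0.1:8080', '10.0.0.2:8080']
print(discover(now=2.0))  # leases expired → []
```

    A crashed back-end simply stops heartbeating, so its lease expires and front-ends stop routing work to it; no shared queue is needed, which removes the single point of failure discussed above.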

    This answer was selected as the best answer by the asker.
