dongzizhi9903 2016-04-11 15:17

Using Google Pub/Sub in Go: what is the most cost-effective way to poll the service?

We're in the process of moving from AMQP to Google's Pubsub.

The docs suggest that pull might be the best choice for us since we're using compute engine and can't open our workers to receive via the push service.

It also says that pull might incur additional costs depending on usage:

If polling is used, high network usage may be incurred if you are opening connections frequently and closing them immediately.

We'd created a test subscriber in go that runs in a loop as so:

package main

import (
    "io/ioutil"
    "log"

    "golang.org/x/oauth2"
    "golang.org/x/oauth2/google"
    "google.golang.org/cloud"
    "google.golang.org/cloud/pubsub"
)

func main() {
    jsonKey, err := ioutil.ReadFile("pubsub-key.json")
    if err != nil {
        log.Fatal(err)
    }
    conf, err := google.JWTConfigFromJSON(
        jsonKey,
        pubsub.ScopeCloudPlatform,
        pubsub.ScopePubSub,
    )
    if err != nil {
        log.Fatal(err)
    }
    ctx := cloud.NewContext("xxx", conf.Client(oauth2.NoContext))

    msgIDs, err := pubsub.Publish(ctx, "topic1", &pubsub.Message{
        Data: []byte("hello world"),
    })

    if err != nil {
        log.Println(err)
    }

    log.Printf("Published a message with a message id: %s\n", msgIDs[0])

    for {
        msgs, err := pubsub.Pull(ctx, "subscription1", 1)
        if err != nil {
            log.Println(err)
        }

        if len(msgs) > 0 {
            log.Printf("New message arrived: %v, len: %d\n", msgs[0].ID, len(msgs))
            if err := pubsub.Ack(ctx, "subscription1", msgs[0].AckID); err != nil {
                log.Fatal(err)
            }
            log.Println("Acknowledged message")
            log.Printf("Message: %s", msgs[0].Data)
        }
    }
}

The question I have, though, is whether this is the correct / recommended way to pull messages.

We receive about 100 messages per second throughout the day. I'm not sure whether running it in an endless loop is going to bankrupt us, and I can't find any other decent Go examples.


1 answer

  • dongping9475 2016-04-11 17:52

    In general, the key to pull subscribers in Cloud Pub/Sub is to make sure you always have at least a few outstanding Pull requests with max_messages set to a value that works well for:

    • the rate at which you publish messages,
    • the size of those messages, and
    • the rate at which your subscriber can process messages.

    As soon as a pull request returns, you should issue another one. That means processing and acking the messages returned to you in the pull response asynchronously (or starting up the new pull request asynchronously). If you ever find that throughput or latency isn't what you expect, the first thing to do is add more concurrent pull requests.

    The statement "if polling is used, high network usage may be incurred if you are opening connections frequently and closing them immediately" applies if your publish rate is extremely low. Imagine you only publish two or three messages in a day, but you constantly poll with pull requests. Every one of those pull requests incurs a cost for making the request, but you won't get any messages to process except for the few times when you actually have a message, so the "cost per message" is fairly high. If you are publishing at a pretty steady rate and your pull requests are returning a non-zero number of messages, then the network usage and costs will be in line with the message rate.

    This answer was accepted by the asker.
