dongshou1991 2015-02-27 21:53
28 views
Accepted

Google Datastore - not seeing the 1 write per second per entity group limit

I've read a lot about strong vs eventual consistency, using ancestor / entity groups, and the 1 write per second per entity group limitation of Google Datastore.

However, in my testing I have never hit the exception "Too much contention on these datastore entities. please try again." and I am trying to understand whether I'm misunderstanding these concepts or missing a piece of the puzzle.

I'm creating entities like so:

// usersKey returns a fixed key ("User", "default_users") that serves as the
// common ancestor for every user entity.
func usersKey(c appengine.Context) *datastore.Key {
    return datastore.NewKey(c, "User", "default_users", 0, nil)
}

func (a *UserDS) UserCreateOrUpdate(c appengine.Context, user models.User) error {

    // Every user key is a child of usersKey, so all users end up in a single
    // entity group.
    key := datastore.NewKey(c, "User", user.UserId, 0, usersKey(c))
    _, err := datastore.Put(c, key, &user)

    return err
}

And then reading them with datastore.Get. I know I won't have issues reading since I'm doing a lookup by key, but if I have a high volume of users creating and updating their information, I would theoretically hit the max of 1 write per second constantly.
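
For reference, the read side is just a lookup by the same key (UserGet is only an illustrative name here; it mirrors the key construction in UserCreateOrUpdate):

func (a *UserDS) UserGet(c appengine.Context, userId string) (*models.User, error) {
    // Rebuild the exact key written by UserCreateOrUpdate: kind "User",
    // string ID userId, parent usersKey. Gets by key are strongly consistent.
    key := datastore.NewKey(c, "User", userId, 0, usersKey(c))

    var user models.User
    if err := datastore.Get(c, key, &user); err != nil {
        return nil, err
    }
    return &user, nil
}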

To test this, I attempted to create 25 users at once (using the above methods, no batching), yet I don't see any exceptions logged, which this post implies I should: Google App Engine HRD - what if I exceed the 1 write per second limit for writing to the entity group?
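
Roughly, the test looked like this (the function name and the user ID scheme are placeholders; fmt is assumed to be imported):

func createTestUsers(c appengine.Context, ds *UserDS) {
    // 25 sequential, unbatched Puts, all targeting the same entity group
    // because every user key has usersKey as its parent.
    for i := 0; i < 25; i++ {
        user := models.User{UserId: fmt.Sprintf("test-user-%d", i)}
        if err := ds.UserCreateOrUpdate(c, user); err != nil {
            // This is where I expected to see the contention error logged.
            c.Errorf("create user %d: %v", i, err)
        }
    }
}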

What am I missing? Does the contention only apply to querying, is 25 not a high enough volume, or am I missing something else entirely?


3 answers

  • dongyanpai2701 2015-02-27 22:04

    The limit is per entity group, which means you can create as many users as you need (that's where scaling shines) as long as they don't share the same ancestor.

    Things change once you start using the user's key as the ancestor of other entities: they become part of the same group, and there is a limit on how many changes you can make to that group per second.
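
    For example (a sketch, using a hypothetical Photo kind to show the difference):

    type Photo struct {
        Caption string
    }

    func addPhoto(c appengine.Context, caption string) error {
        // Root entity: a user created with a nil parent is its own entity
        // group, so creating many users like this scales without a shared
        // write limit.
        userKey := datastore.NewKey(c, "User", "alice", 0, nil)

        // Child entity: using the user key as the ancestor puts the Photo in
        // the same entity group as that user, so the user and all of their
        // photos share that group's write-rate limit.
        photoKey := datastore.NewIncompleteKey(c, "Photo", userKey)
        _, err := datastore.Put(c, photoKey, &Photo{Caption: caption})
        return err
    }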

    By the way, this is a generalization; most likely you will be able to make around 5 changes per second. The limit exists because of the transactional properties of an entity group: there is effectively a log of changes that must be applied sequentially, so writes have to take a lock, and throughput is limited.

    Still, the rule of thumb is to assume you can only do 1 write per second, to force yourself to think about how to work under these conditions.

    And as mentioned, this is only relevant when you write to the datastore; gets and queries should scale as needed.
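
    For instance, an ancestor query scoped to the group is strongly consistent, while a plain kind query is only eventually consistent (a sketch, reusing the usersKey helper from your question):

    func listUsers(c appengine.Context) ([]models.User, error) {
        // Dropping .Ancestor(...) would make this an eventually consistent
        // global query; with the ancestor it is strongly consistent but
        // limited to that one entity group.
        var users []models.User
        _, err := datastore.NewQuery("User").
            Ancestor(usersKey(c)).
            GetAll(c, &users)
        return users, err
    }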

    Marked as the accepted answer.
