weixin_39881760
2021-01-12 11:59

Blocking Enqueue Methods (Feature Request)

Your blocking ConcurrentQueue has wait and wait_for methods defined for dequeue. If there were analogous methods for enqueue, for situations where we don't want to allocate memory, that would be super useful for writing pipeline-style code.

This question originates from the open-source project: cameron314/concurrentqueue


4 replies

  • weixin_39636987 · 4 months ago

    I agree. Not likely to happen in the near future, though. Since the queue is really a collection of independent sub-queues, based on blocks of elements and variable-sized indices, it's more tricky than just keeping a semaphore on "the number of free slots" since there isn't really such a thing.

    I think this would make more sense if I moved to a different design altogether (something a little simpler/more traditional).

  • weixin_39881760 · 4 months ago

    I did simulate it using spinning. Perhaps all you need to do is use a condition variable, instead of your full semaphore, that is simply signalled on each dequeue?

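The spinning workaround mentioned above might look roughly like this. The `BoundedQueue` here is a hypothetical mutex-based stand-in for illustration only, chosen because its `try_enqueue` fails instead of allocating when the queue is full (moodycamel::ConcurrentQueue exposes a `try_enqueue` with similar fail-instead-of-allocate semantics); it is not the library's actual lock-free implementation.

```cpp
#include <cstddef>
#include <mutex>
#include <queue>
#include <thread>

// Hypothetical fixed-capacity queue whose try_enqueue fails when full,
// standing in for a queue with try-then-fail enqueue semantics.
template <typename T>
class BoundedQueue {
public:
    explicit BoundedQueue(std::size_t cap) : cap_(cap) {}

    // Fails (returns false) instead of allocating when the queue is full.
    bool try_enqueue(const T& v) {
        std::lock_guard<std::mutex> lk(m_);
        if (q_.size() >= cap_) return false;
        q_.push(v);
        return true;
    }

    bool try_dequeue(T& out) {
        std::lock_guard<std::mutex> lk(m_);
        if (q_.empty()) return false;
        out = q_.front();
        q_.pop();
        return true;
    }

private:
    std::size_t cap_;
    std::mutex m_;
    std::queue<T> q_;
};

// Simulated blocking enqueue: spin until a slot frees up,
// yielding so the spin does not monopolize a core.
template <typename T>
void spin_enqueue(BoundedQueue<T>& q, const T& v) {
    while (!q.try_enqueue(v))
        std::this_thread::yield();
}
```

This busy-waits (burning CPU while the queue stays full), which is exactly why a properly blocking enqueue would be preferable.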
  • weixin_39636987 · 4 months ago

    The semaphore is very efficient. Implementing a similarly efficient condition variable would be a bit of a challenge.

    Anyway, it still wouldn't be enough, because the condition can't easily be checked in advance :-) The present try-then-fail semantics give a lot more flexibility to the algorithm implementations than a reserve-then-execute model, which would be ideal for this feature.

    Although I suppose it could all be simulated -- use a semaphore or two to simulate a poor man's condition variable, then try-fail and go back to waiting on failure. Non-trivial enough that it's still a ways away on the horizon, I'm afraid.

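The "poor man's condition variable" idea sketched above could be simulated roughly as follows. Everything here is illustrative and assumed, not the library's code: a minimal mutex-based `Semaphore` stands in for the queue's lightweight semaphore, and the bounded queue is a simple locked stand-in. The pattern is the one described in the reply: try to enqueue, and on failure go back to waiting on a semaphore that each successful dequeue signals, then retry.

```cpp
#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <queue>

// Minimal counting semaphore (illustrative stand-in only).
class Semaphore {
public:
    void release() {
        std::lock_guard<std::mutex> lk(m_);
        ++count_;
        cv_.notify_one();
    }
    void acquire() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [&] { return count_ > 0; });
        --count_;
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    int count_ = 0;
};

// Hypothetical bounded queue with a simulated blocking enqueue:
// try-fail, then wait for a dequeue signal and retry.
template <typename T>
class BlockingBoundedQueue {
public:
    explicit BlockingBoundedQueue(std::size_t cap) : cap_(cap) {}

    // Blocks (without allocating) until the element is enqueued.
    void enqueue(const T& v) {
        while (!try_enqueue(v))
            dequeued_.acquire();  // sleep until a dequeue may have freed a slot
    }

    bool try_enqueue(const T& v) {
        std::lock_guard<std::mutex> lk(m_);
        if (q_.size() >= cap_) return false;  // full: fail instead of allocating
        q_.push(v);
        return true;
    }

    bool try_dequeue(T& out) {
        {
            std::lock_guard<std::mutex> lk(m_);
            if (q_.empty()) return false;
            out = q_.front();
            q_.pop();
        }
        dequeued_.release();  // wake one producer waiting in enqueue()
        return true;
    }

private:
    std::size_t cap_;
    std::mutex m_;
    std::queue<T> q_;
    Semaphore dequeued_;
};
```

Note the race the reply alludes to: a woken producer may find the slot already taken by another producer and go back to waiting, which is correct but makes the scheme less efficient than a true reserve-then-execute design.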
  • weixin_39636987 · 4 months ago

    I'm afraid I'm no longer adding new features to the queue, so I'm going to close this issue.

