2016-07-10 20:07
Viewed 78 times


I am using PHP and Ratchet, and I am working out how to create an event on the server side and push it to the clients.

I know I can use ZeroMQ, but this also creates overhead: each time an event has to be notified, a new socket connection is made.

So I was wondering: wouldn't it be better to have a thread that is always SELECTing from a MySQL table named "SocketQueue" that uses the MEMORY storage engine?

Does this architecture make the app lose the attribute of "Realtime"?




1 Answer

  • dongluo8303 2016-07-11 16:18

    Let's first skip the marketing lingo
    and focus on rigorous system design

    While one reads a lot of tags like real-time, ultra-fast, and low-latency, these do not mean the same thing, and they carry very different weight in rigorous system-design decisions.


    Yes, speed is a nice-to-have feature, but it cannot always be achieved at a reasonable cost or uniformly guaranteed, worst-case situations in the production eco-system included. So being "fast" helps, but it is not a cornerstone of real-time system design.


    The same applies here: latency is a principal cost one has to pay on the transaction under review. Again, it is fine if the latency scale does not a-priori devastate the system design. But once the design is feasible in principle, real-time system design focuses on latency jitter rather than on the latency value per se, as we have to live with reality, and the system has to be proven to robustly handle all the various latency levels as an indivisible part of the production eco-system.
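    To make the mean-latency-vs-jitter distinction concrete, here is a minimal sketch (Python for brevity, purely illustrative; the sample numbers are invented, not measured from the asker's stack):

```python
import statistics

# Two hypothetical transports: A has the lower mean latency but large
# spikes; B is slower on average but steady. For real-time scheduling,
# B is the usable one, because its worst case is bounded.
latencies_a = [1.0, 1.1, 0.9, 15.0, 1.0, 1.2, 0.8, 12.0]  # ms, spiky
latencies_b = [5.0, 5.2, 4.9, 5.1, 5.0, 5.3, 4.8, 5.1]    # ms, steady

def profile(samples):
    """Summarise a latency sample: mean, worst case, and jitter."""
    return {
        "mean": statistics.mean(samples),
        "worst": max(samples),
        "jitter": statistics.pstdev(samples),  # spread around the mean
    }

a, b = profile(latencies_a), profile(latencies_b)
print(a)
print(b)
```

    Transport A wins on the mean, yet its worst case is roughly three times B's, so only B can be scheduled against a tight deadline; this is the sense in which jitter, not the mean, drives the design.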


    Well, a system may be called a real-time system if it has the following property: the design has been cross-validated by a rigorous a-priori proof that, under all possible circumstances, the real-time execution scheduling robustly meets a given internal time-horizon, within which the system has a positive proof of its ability to "be always in time" for all of its internal processing sub-tasks.

    In case any item from the semi-definition above is not present, be it:
    - the R/T-design internal scheduling time-horizon
    - the R/T-design validation
    - the R/T-design positive proof of its robustness in "being always in time" under the conditions above
    then the effort fails to be rigorously real-time
    and cannot hold the attribute of "Real-Time".

    A careful reader has already noticed that the rules of real-time-ness say nothing about how long that internal scheduling time-horizon lasts.

    Yes, a real-time system can have its abilities designed and positively proven to robustly meet all sub-tasks within a horizon of 1 [us], 1 [ms], 1 [s], 1 [min] or 1 [hour]; in all of these cases the execution is considered a rigorous real-time system.
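    The horizon check itself is simple to state; a toy sketch of such an a-priori validation follows (the sub-task names and worst-case execution times are invented for illustration, not taken from the question):

```python
# A toy "a-priori validation": given assumed worst-case execution times
# (WCET) for each sub-task, check whether their sum fits a scheduling
# horizon. Real validation is far more involved; this shows the shape.
wcet_ms = {
    "receive_event": 2.5,
    "route_message": 0.8,
    "push_to_clients": 120.0,
    "log_and_ack": 15.0,
}

def is_schedulable(wcets, horizon_ms):
    """True only if even the worst case of every sub-task fits the horizon."""
    return sum(wcets.values()) <= horizon_ms

print(is_schedulable(wcet_ms, 1000.0))  # fits a 1 [s] horizon
print(is_schedulable(wcet_ms, 100.0))   # fails a 100 [ms] horizon
```

    The same set of sub-tasks is thus "real-time" against one horizon and not against another, which is exactly why the horizon must be part of the design statement.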

    Do real-time systems with an hour-long scheduling horizon still make sense? Sure they do, and they are quite a common design target: just imagine an Earth / deep-space satellite radio-uplink and ground-control-station operations coordinated across a link that takes a few hours just to get any telemetry/control data there and back.

    Yes, not all real-world processes can live with such a long scheduling horizon; they need to keep this threshold at a safe distance from the Nyquist stability criterion, because the controlled process imposes additional conditions for stable (i.e. robust) control.

    At the same time, at the far end of the opposite side of the real-time scheduling spectrum, too tight a scheduling time-horizon need not save the project either: the design has to fit into all real-hardware material constraints (no one can send signals faster than the speed of light, and forget about Quantum Entanglement, even that will not save you here) and power-consumption limitations, and still be economically feasible, including the costs of system design, programming and validation.

    Just-Enough-Scaling is a must

    Back to the sketched components.

    ZeroMQ ought to be considered a rather persistent communication + signalling layer. My distributed-system applications use ZeroMQ, benefit from this approach, and provide reasonable services, under due care, down to scheduling framings of some tens of [us].

    Any attempt to set up a ZeroMQ socket ad-hoc is a no-go idea, as the setup overheads are not in line with real-time design intentions. It is imperative engineering practice to set these up a-priori, once the system is started, and to perform the self-diagnosing tasks before the whole eco-system can be declared RTO in an R/T mode.
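    The structural difference between the two disciplines can be sketched with plain TCP sockets (Python stdlib here, in place of ZeroMQ, since the per-event setup-overhead argument is identical; a minimal illustration, not the asker's Ratchet code):

```python
import socket
import threading

def run_server(listener, n_conns):
    # Accept n_conns connections and drain each until the peer closes.
    for _ in range(n_conns):
        conn, _ = listener.accept()
        while conn.recv(64):
            pass
        conn.close()

listener = socket.create_server(("127.0.0.1", 0))  # OS-assigned port
port = listener.getsockname()[1]
server = threading.Thread(target=run_server, args=(listener, 4), daemon=True)
server.start()

events = (b"e1", b"e2", b"e3")

# Anti-pattern: a fresh TCP connection (handshake + teardown) per event.
for event in events:
    s = socket.create_connection(("127.0.0.1", port))
    s.sendall(event)
    s.close()

# Real-time-friendly: one connection established a-priori at start-up,
# then reused for every event; no per-event setup cost on the hot path.
persistent = socket.create_connection(("127.0.0.1", port))
for event in events:
    persistent.sendall(event)
persistent.close()

server.join(timeout=5)
print("all events drained:", not server.is_alive())
```

    Both variants deliver the same events; only the second keeps the connection-setup cost out of the per-event latency budget, which is the point of establishing the signalling layer a-priori.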

    Your [messaging+signalling]-layer design does not need an external dependency on any MySQL table to decide on the proper routing of messages / signal distribution. You may need to spend some longer time with the ZeroMQ internals, but this is one of the strongest powers of the ZeroMQ specialisation.

    For the sake of a real-time mode of operations, ZeroMQ does not depend on any Message-Broker entity, which would make your real-time designs another level of nightmare: your design would have no tools/controls over this (btw. core) element of the messaging/signalling layer, while your sub-tasks are all heavily dependent on the smooth, just-enough-long latency of Broker-mediated message/signal delivery.

    The MySQL engine will have the biggest variance in "being always in time". Expect vast engineering efforts in providing a reasonable set of non-traditional programming approaches that bear the responsibility of positively proving the robustness of the sub-task scheduling.
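    One part of that variance needs no database at all to see: a poll-driven "SocketQueue" table bounds delivery latency from below by the polling interval. A pure-Python model (the 0.5 s interval is an assumed tuning knob, not from the question):

```python
# An event arriving just after a SELECT waits almost a full polling
# interval; one arriving just before it is picked up almost instantly.
# Delivery latency therefore jitters across the whole interval,
# depending only on the arrival phase, before MySQL's own variance
# is even counted.
POLL_INTERVAL = 0.5  # seconds between SELECTs (assumed tuning knob)

def delivery_delay(arrival_offset, poll_interval=POLL_INTERVAL):
    """Delay until the next poll picks the event up, given the offset
    of its arrival inside the current polling cycle."""
    return poll_interval - (arrival_offset % poll_interval)

best = delivery_delay(0.499)   # arrived just before the next SELECT
worst = delivery_delay(0.001)  # arrived just after the previous SELECT
print(round(best, 3), round(worst, 3))
```

    Shortening the interval shrinks this bound but raises the query load, so the poll loop trades a hard latency floor against database pressure; a push-style signalling layer has neither cost on the hot path.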

