doutou7286 2016-02-18 02:07
29 views
Accepted

Does Go block on processor-intensive operations, like Node does?

Go is great for concurrency: tasks are run as goroutines, and goroutines are multiplexed onto virtual processors. Whenever a goroutine hits a blocking operation (a database call, say), it is moved to another virtual processor on a different thread, which most of the time runs on another physical processor ... and so we get parallelism.

Node.js has a similar technique (except that everything happens on the same thread): it puts all waiting virtual processes in a queue until they receive responses from the blocking resource (a DB, a URL), and then they are sent back to be processed.

The downside of Node.js is its inability to handle processor-intensive operations (a long for loop, for example): the virtual process that is currently running takes all the time until it finishes, without preemption. That's why Node.js is weighed carefully before being used in critical systems, despite its strength at handling high concurrency.

Yes, Go spawns a new thread to handle the blocked goroutine, but what about processor-intensive operations: are they treated the same way, or do they suffer from the same problem as Node?


2 answers

  • dtyw10299 2016-02-18 02:45

    Yes, but it's much more difficult to run into in practice than with Node, and much easier to recover from. Node is single-threaded unless you write explicitly multi-process code (which isn't always easy, especially if you want it to be portable). Go uses N:M scheduling with a maximum number of running threads (equal to the number of logical CPUs by default, but tunable via GOMAXPROCS). Note "running": a goroutine that is waiting on a blocking operation is "frozen" and doesn't count as occupying a running thread.
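
    For reference, here is a minimal sketch of how that limit can be inspected or changed from code; the specific value passed at the end is purely illustrative, not a recommendation:

        package main

        import (
            "fmt"
            "runtime"
        )

        func main() {
            // GOMAXPROCS(0) reports the current limit without changing it;
            // by default it equals the number of logical CPUs.
            fmt.Println("logical CPUs:", runtime.NumCPU())
            fmt.Println("GOMAXPROCS:  ", runtime.GOMAXPROCS(0))

            // Passing a positive value sets a new limit and returns the old one.
            prev := runtime.GOMAXPROCS(2) // illustrative value only
            fmt.Println("previous limit:", prev)
        }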

    So, if you have a single goroutine doing something CPU-intensive, it generally won't have an impact on other goroutines' ability to run, because there are a bunch of other threads available to run them. If all of your threads are occupied with computation, though, then other goroutines won't get a chance to run until one of them gives up the CPU. That isn't necessarily a problem if they're actually getting work done, since all of your CPUs are busy with real work, but of course it can be a problem in latency-sensitive situations.

    If it is a problem, three solutions come to mind:

    1. Use runtime.Gosched during long computations to yield control of the processor, allowing other goroutines a chance to run. No other change has to be made; it's just a way of working with the cooperative scheduler. Gosched might return immediately, or it might return later.

    2. Use a worker pool to limit the amount of parallel CPU-intensive work to less than GOMAXPROCS. Go makes this pretty easy; a rough sketch combining this with the Gosched idea follows the list.

    3. Flipside of the same coin: raise GOMAXPROCS above the expected number of parallel compute tasks. This is probably the worst of the ideas, and will hurt scheduling at least somewhat, but it will still work, and ensure you have threads available for handling events.
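
    As a rough illustration of ideas 1 and 2 above (not taken from the question; the job count, chunk size, and pool size are arbitrary assumptions), here is a CPU-bound helper that yields periodically with runtime.Gosched, driven by a worker pool sized below GOMAXPROCS:

        package main

        import (
            "fmt"
            "runtime"
            "sync"
        )

        // heavyWork stands in for a long, CPU-bound computation. It calls
        // runtime.Gosched every so often (idea 1) so the cooperative
        // scheduler gets a chance to run other goroutines.
        func heavyWork(n int) int {
            sum := 0
            for i := 0; i < n; i++ {
                sum += i
                if i%1000000 == 0 {
                    runtime.Gosched() // yield; may return immediately
                }
            }
            return sum
        }

        func main() {
            jobs := make(chan int)
            results := make(chan int)

            // Idea 2: keep the number of goroutines doing CPU-intensive
            // work below GOMAXPROCS, leaving headroom for other goroutines.
            workers := runtime.GOMAXPROCS(0) - 1
            if workers < 1 {
                workers = 1
            }

            var wg sync.WaitGroup
            for w := 0; w < workers; w++ {
                wg.Add(1)
                go func() {
                    defer wg.Done()
                    for n := range jobs {
                        results <- heavyWork(n)
                    }
                }()
            }

            // Feed the pool, then close the results channel once all
            // workers have finished.
            go func() {
                for i := 0; i < 8; i++ {
                    jobs <- 10000000 // arbitrary amount of work per job
                }
                close(jobs)
                wg.Wait()
                close(results)
            }()

            for r := range results {
                fmt.Println(r)
            }
        }

    With GOMAXPROCS left at its default, this keeps at least one thread free for goroutines that are not part of the computation.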

    This answer was accepted by the asker.