drvvepadi289421028
2014-08-03 16:06
Viewed 52 times
Accepted

Parallel processing in Golang

Given the following code:

package main

import (
    "fmt"
    "math/rand"
    "time"
)

func main() {
    for i := 0; i < 3; i++ {
        go f(i)
    }

    // prevent main from exiting immediately
    var input string
    fmt.Scanln(&input)
}

func f(n int) {
    for i := 0; i < 10; i++ {
        dowork(n, i)
        amt := time.Duration(rand.Intn(250))
        time.Sleep(time.Millisecond * amt)
    }
}

func dowork(goroutine, loopindex int) {
    // simulate work
    time.Sleep(time.Second * time.Duration(5))
    fmt.Printf("gr[%d]: i=%d
", goroutine, loopindex)
}

Can I assume that the 'dowork' function will be executed in parallel?

Is this a correct way of achieving parallelism or is it better to use channels and separate 'dowork' workers for each goroutine?


4 answers

  • dongou1970 2014-08-03 16:26
    Accepted

    Your code will run concurrently, but not in parallel. You can make it run in parallel by setting GOMAXPROCS; see the article https://www.ardanlabs.com/blog/2014/01/concurrency-goroutines-and-gomaxprocs.html for a good summary.

    It's not clear exactly what you're trying to accomplish here, but it looks like a perfectly valid way of achieving concurrency to me.
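
    For example, a minimal sketch of setting GOMAXPROCS (the init function is just one place to put the call; since Go 1.5 GOMAXPROCS already defaults to the number of cores, so this mainly matters on the Go versions current when this answer was written):

        import "runtime"

        func init() {
            // Allow the runtime to schedule goroutines onto all available CPU cores.
            // Since Go 1.5 this is the default, so the call is only needed on older versions.
            runtime.GOMAXPROCS(runtime.NumCPU())
        }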

  • douhan6738 2017-06-04 13:23

    f() will be executed concurrently, but the dowork() calls run sequentially within each f(). Waiting on stdin is also not the right way to ensure that your goroutines have finished. Instead, create a channel that each f() sends a value on when it finishes; at the end of main(), receive n values from that channel, where n is the number of f() goroutines you spun up (see the sketch below).
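
    A minimal sketch of that approach, reusing f() and dowork() from the question (the done channel and the constant n are illustrative names):

        package main

        import (
            "fmt"
            "math/rand"
            "time"
        )

        func main() {
            const n = 3
            done := make(chan bool)

            for i := 0; i < n; i++ {
                go func(i int) {
                    f(i)
                    done <- true // signal that this f() has finished
                }(i)
            }

            // wait for n signals instead of blocking on stdin
            for i := 0; i < n; i++ {
                <-done
            }
        }

        func f(n int) {
            for i := 0; i < 10; i++ {
                dowork(n, i)
                time.Sleep(time.Millisecond * time.Duration(rand.Intn(250)))
            }
        }

        func dowork(goroutine, loopindex int) {
            time.Sleep(5 * time.Second) // simulate work
            fmt.Printf("gr[%d]: i=%d\n", goroutine, loopindex)
        }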

  • duanjihe5180 2017-06-07 03:32

    Regarding GOMAXPROCS, you can find this in Go 1.5's release docs:

    By default, Go programs run with GOMAXPROCS set to the number of cores available; in prior releases it defaulted to 1.

    Regarding preventing the main function from exiting immediately, you could leverage WaitGroup's Wait function.
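
    For instance, a minimal sketch of that applied directly to the question's main(), assuming "sync" is imported alongside the question's f():

    var wg sync.WaitGroup
    for i := 0; i < 3; i++ {
        wg.Add(1)
        go func(i int) {
            defer wg.Done()
            f(i)
        }(i)
    }
    wg.Wait() // blocks until all three goroutines have called Done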

    I wrote this utility function to help parallelize a group of functions:

    import "sync"
    
    // Parallelize parallelizes the function calls
    func Parallelize(functions ...func()) {
        var waitGroup sync.WaitGroup
        waitGroup.Add(len(functions))
    
        defer waitGroup.Wait()
    
        for _, function := range functions {
            go func(copy func()) {
                defer waitGroup.Done()
                copy()
            }(function)
        }
    }
    

    So in your case, we could do this

    func1 := func() {
        f(0)
    }
    
    func2 := func() {
        f(1)
    }
    
    func3 := func() {
        f(2)
    }
    
    Parallelize(func1, func2, func3)
    

    If you want to use the Parallelize function, you can find it at https://github.com/shomali11/util

  • drbz99867 2018-11-28 22:14

    This helped me when I was starting out.

        package main
    
        import "fmt"
    
        func put(number chan<- int, count int) {
            i := 0
            for ; i <= (5 * count); i++ {
                number <- i
            }
            number <- -1
        }
    
        func subs(number chan<- int) {
            i := 10
            for ; i <= 19; i++ {
                number <- i
            }
        }
    
        func main() {
            channel1 := make(chan int)
            channel2 := make(chan int)
            done := 0
            sum := 0
    
            go subs(channel2)
            go put(channel1, 1)
            go put(channel1, 2)
            go put(channel1, 3)
            go put(channel1, 4)
            go put(channel1, 5)
    
            for done != 5 {
                select {
                case elem := <-channel1:
                    if elem < 0 {
                        done++
                    } else {
                        sum += elem
                        fmt.Println(sum)
                    }
                case sub := <-channel2:
                    sum -= sub
                    fmt.Printf("atimta : %d
    ", sub)
                    fmt.Println(sum)
                }
            }
            close(channel1)
            close(channel2)
        }
    

    "Conventional cluster-based systems (such as supercomputers) employ parallel execution between processors using MPI. MPI is a communication interface between processes that execute in operating system instances on different processors; it doesn't support other process operations such as scheduling. (At the risk of complicating things further, because MPI processes are executed by operating systems, a single processor can run multiple MPI processes and/or a single MPI process can also execute multiple threads!)"

