duanlumei5941
2019-02-21 12:20
Viewed 141 times
Accepted

Golang: "fatal error: all goroutines are asleep - deadlock" on waitGroup.Wait()

I am trying to write code that reads a file concurrently and posts the contents to a single channel.

Here is my code:

package main

import (
    "fmt"
    "log"
    "os"
    "sync"
)

func main() {
    bufferSize := int64(10)
    f, err := os.Open("tags-c.csv")
    if err != nil {
        panic(err)
    }
    fileinfo, err := f.Stat()
    if err != nil {
        fmt.Println(err)
        return
    }
    filesize := int64(fileinfo.Size())
    fmt.Println(filesize)
    routines := filesize / bufferSize
    if remainder := filesize % bufferSize; remainder != 0 {
        routines++
    }
    fmt.Println("Total routines : ", routines)

    channel := make(chan string, 10)
    wg := &sync.WaitGroup{}

    for i := int64(0); i < int64(routines); i++ {
        wg.Add(1)
        go read(i*bufferSize, f, channel, bufferSize, filesize, wg)
    }
    fmt.Println("waiting")
    wg.Wait()
    fmt.Println("wait over")
    close(channel)

    readChannel(channel)
}

func readChannel(channel chan string) {
    for {
        data, more := <-channel
        if more == false {
            break
        }
        fmt.Print(data)
    }
}

func read(seek int64, file *os.File, channel chan string, bufferSize int64, filesize int64, wg *sync.WaitGroup) {
    defer wg.Done()
    fmt.Println("read :: ", seek)
    var buf []byte
    if filesize < bufferSize {
        buf = make([]byte, filesize)
    } else if (filesize - seek) < bufferSize {
        buf = make([]byte, filesize-seek)
    } else {
        buf = make([]byte, bufferSize)
    }

    n, err := file.ReadAt(buf, seek)
    if err != nil {
        log.Printf("loc %d err: %v", seek, err)
        return
    }
    if n > 0 {
        channel <- string(buf[:n])
        fmt.Println("ret :: ", seek)
    }
}

I searched online, but to my amazement I had already applied the solutions that were suggested. Any help will be appreciated.


1 Answer

  • duandu9260 2019-02-21 12:34
    Accepted

    The problem is that you wait for all of the reader goroutines to finish before you start draining the channel on which they deliver their results.

    The channel is buffered and can hold at most 10 elements. Once 10 goroutines have sent a value on it, the remaining senders block, so they can never complete; receiving from this channel would only begin after they have all returned. That is the deadlock.
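    This blocking behavior can be seen in isolation. The following is a minimal, self-contained sketch (not the asker's code, and with made-up values) using a capacity-2 channel: the third send blocks until a receive frees a slot.

    ```go
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	ch := make(chan int, 2)
    	ch <- 1 // fills buffer slot 1
    	ch <- 2 // fills buffer slot 2

    	done := make(chan struct{})
    	go func() {
    		ch <- 3 // buffer is full: blocks until main receives below
    		close(done)
    	}()

    	select {
    	case <-done:
    		fmt.Println("third send completed immediately (unexpected)")
    	case <-time.After(100 * time.Millisecond):
    		fmt.Println("third send is blocked: buffer full")
    	}

    	<-ch // free one slot; the blocked send can now proceed
    	<-done
    	fmt.Println("third send completed after a receive")
    }
    ```

    The asker's program is this situation scaled up: every goroutine past the tenth is stuck on its send while main is stuck on wg.Wait().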

    So instead you should launch another goroutine to receive the results concurrently with the reader goroutines:

    done := make(chan struct{})
    go readChannel(channel, done)
    
    fmt.Println("waiting")
    wg.Wait()
    fmt.Println("wait over")
    close(channel)
    
    // Wait for completion of collecting the results:
    <-done
    

    Reading from the channel should use a for range loop, which terminates once the channel is closed and every value that was sent before the close has been received:

    func readChannel(channel chan string, done chan struct{}) {
        for data := range channel {
            fmt.Print(data)
        }
        close(done)
    }
    

    Note that I used a done channel so the main goroutine will also wait for the goroutine receiving the results to finish.
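    An equivalent pattern, sketched here with hypothetical "chunk" strings rather than the asker's file data, uses a second sync.WaitGroup for the consumer instead of a done channel:

    ```go
    package main

    import (
    	"fmt"
    	"sync"
    )

    func main() {
    	channel := make(chan string, 10)

    	var producers sync.WaitGroup
    	var consumer sync.WaitGroup

    	// Consumer drains the channel until it is closed.
    	consumer.Add(1)
    	go func() {
    		defer consumer.Done()
    		for data := range channel {
    			fmt.Print(data)
    		}
    	}()

    	// Producers send their results concurrently.
    	for i := 0; i < 3; i++ {
    		producers.Add(1)
    		go func(i int) {
    			defer producers.Done()
    			channel <- fmt.Sprintf("chunk %d\n", i)
    		}(i)
    	}

    	producers.Wait() // all sends are done
    	close(channel)   // lets the consumer's range loop end
    	consumer.Wait()  // wait until everything has been printed
    }
    ```

    Either way, the essential point is the same: close the channel only after all senders are done, and make sure a receiver is already running before waiting on the senders.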

    Also note that disk I/O, not the CPU, is usually the bottleneck here, and delivering results through a channel from multiple goroutines adds its own overhead, so you may well see no improvement from reading the file concurrently.
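    If concurrency turns out not to help, a plain sequential read is much simpler. This sketch uses an in-memory strings.Reader standing in for the asker's tags-c.csv so it is self-contained; swap in os.Open for a real file:

    ```go
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	// Stand-in data; replace with a file opened via os.Open("tags-c.csv").
    	r := strings.NewReader("a,1\nb,2\nc,3\n")

    	sc := bufio.NewScanner(r)
    	for sc.Scan() {
    		fmt.Println(sc.Text()) // one line of the file per iteration
    	}
    	if err := sc.Err(); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
    ```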

