douguanya6399 2017-09-11 03:49

Channel explanation in the 'A Tour of Go' web crawler exercise

I'm going through 'A Tour of Go' and have been editing most of the lessons to make sure I fully understand them. I have a question regarding an answer provided to the following exercise: https://tour.golang.org/concurrency/10 which can be found here: https://github.com/golang/tour/blob/master/solutions/webcrawler.go

I have a question regarding the following section:

done := make(chan bool)
for i, u := range urls {
    fmt.Printf("-> Crawling child %v/%v of %v : %v.\n", i, len(urls), url, u)
    go func(url string) {
        Crawl(url, depth-1, fetcher)
        done <- true
    }(u)
}
for i, u := range urls {
    fmt.Printf("<- [%v] %v/%v Waiting for child %v.\n", url, i, len(urls), u)
    <-done
}
fmt.Printf("<- Done with %v\n", url)

What purpose does sending true on the channel done and receiving it in a second, separate for loop serve? Is it just to block until the goroutines finish? I know this is an example exercise, but doesn't that kind of defeat the point of spinning out a new thread in the first place?

Why can't you just call go Crawl(url, depth-1, fetcher) without the 2nd for loop and the done channel? Is it because of the shared memory space for all the variables?

Thanks!


1 answer

  • doujiu5464 2017-09-11 04:05

    The first for loop iterates over the slice of urls and schedules a goroutine to Crawl() each one.

    The second loop performs one receive from done for each url, so it blocks until every Crawl() invocation has completed. The Crawl()ers all run and do their work in parallel, and each one blocks on its send to done until the parent goroutine receives from the channel.

    In my opinion, a better way to implement this is to use a sync.WaitGroup. Also note that the log output of the second loop can be misleading: the i-th receive is not necessarily the completion of the i-th url, since the Crawl() invocations can finish in any order depending on how long each one takes (unless fetcher serializes them with a lock).
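
    For illustration, here is a minimal sketch of that sync.WaitGroup approach, meant as a drop-in replacement for the Crawl function in the tour exercise. It assumes the exercise's Fetcher interface, whose Fetch returns (body string, urls []string, err error), and it still omits the visited-url cache that a complete solution needs:

    import (
        "fmt"
        "sync"
    )

    // Crawl fetches url, then crawls its children in parallel and waits for
    // all of them with a sync.WaitGroup instead of a done channel.
    func Crawl(url string, depth int, fetcher Fetcher) {
        if depth <= 0 {
            return
        }
        _, urls, err := fetcher.Fetch(url)
        if err != nil {
            fmt.Println(err)
            return
        }
        var wg sync.WaitGroup
        for _, u := range urls {
            wg.Add(1)
            go func(url string) {
                defer wg.Done() // mark this child as finished, even on early return
                Crawl(url, depth-1, fetcher)
            }(u)
        }
        wg.Wait() // block until every child goroutine has called Done
        fmt.Printf("<- Done with %v\n", url)
    }

    Here wg.Wait() replaces both the done channel and the second loop: the parent still blocks until every child has finished, but nothing has to be sent or received per url.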

    If you want to know which url has finished Crawl()ing, you could change the type of the done channel to string and send the url instead of true when a Crawl() completes. The second loop then receives the url.

    Example:

    done := make(chan string)
    for _, u := range urls {
        fmt.Printf("-> Crawling %s\n", u)
        go func(url string) {
            Crawl(url, depth-1, fetcher)
            done <- url
        }(u)
    }
    for range urls {
        fmt.Printf("<- Waiting for next child\n")
        u := <-done
        fmt.Printf("  Done... %s\n", u)
    }
    
    This answer was accepted by the asker.
