dsa80833 2016-05-02 12:22

Deadlock in event handling

So I have a channel used for event processing; the main server goroutine selects on this channel and calls the event handlers for each event received:

evtCh := make(chan Event)

// server loop:
quit := false
for !quit {
    select {
    case e := <-evtCh:
        handleEvent(e)
    case <-quitCh:
        quit = true // finish
    }
}

// sends a new event for processing
func addEvent(e Event) {
    evtCh <- e
}

handleEvent dispatches to the handlers registered for the event's type. I have func registerEventHandler(typ EventType, h func(Event)) to handle registration. The program supports user-written extensions, meaning users can register their own handlers for events.

Now the problem arises when a user's event handler sends a new event to the server by calling addEvent: this causes the server to hang, since the event handler itself runs in the context of the server's main loop (inside the for loop), so there is no goroutine left to receive the send.

How can I handle this situation elegantly? Is a queue modeled by a slice a good idea?


2 answers

  • dourao1896 2016-05-02 12:48

    this will cause the server to hang since the event handler itself is called in the context of the server's main loop

    The main loop should never block on calling handleEvent, and the most common way to avoid that is to use a pool of worker goroutines. Here's a quick, untested example:

    type Worker struct {
        id   int
        ch   chan Event
        quit chan bool
    }

    func (w *Worker) start() {
        for {
            select {
            case e := <-w.ch:
                fmt.Printf("Worker %d handling %v\n", w.id, e)
                // handle event e
            case <-w.quit:
                return
            }
        }
    }


    ch := make(chan Event, 100)
    quit := make(chan bool)

    // Start workers
    for i := 0; i < 10; i++ {
        worker := &Worker{i, ch, quit}
        go worker.start()
    }

    // sends a new event for processing
    func addEvent(e Event) {
        ch <- e
    }
    

    and when you are done, just close(quit) to kill all workers.

    EDIT: From the comments below:

    what does the main loop look like in this case?

    Depends. If you have a fixed number of events, you can use a WaitGroup, like this:

    type Worker struct {
        id   int
        ch   chan Event
        quit chan bool
        wg   *sync.WaitGroup
    }

    func (w *Worker) start() {
        for {
            select {
            case e := <-w.ch:
                handleEvent(e)
                w.wg.Done()
            case <-w.quit:
                return
            }
        }
    }

    func main() {
        ch := make(chan Event, 100)
        quit := make(chan bool)

        numberOfEvents := 100

        wg := &sync.WaitGroup{}
        wg.Add(numberOfEvents)

        // start workers
        for i := 0; i < 10; i++ {
            worker := &Worker{i, ch, quit, wg}
            go worker.start()
        }

        // the numberOfEvents events must be sent on ch here,
        // otherwise wg.Wait() below blocks forever

        wg.Wait() // blocks until all events are handled
    }
    

    If the number of events is not known beforehand, you can just block on the quit channel:

    <- quit
    

    and once another goroutine closes the channel, your program will terminate as well.

    This answer was selected as the best answer by the asker.
