doudao1837
2015-02-21 20:04

Is it thread-safe to have multiple channels communicating with a single shared struct?

Accepted

Consider the following code:

type Cache struct {
    cache         map[string]*http.Response
    AddChannel    chan *http.Response
    RemoveChannel chan *http.Response
    FindChannel   chan string
}

func (self *Cache) Run() {
    select {
    case resp := <-self.AddChannel:
        //..code
    case resp := <-self.RemoveChannel:
        //..code
    case find := <-self.FindChannel:
        //..code
    }
}

In this code, a cache is created and the Run function is called on a separate goroutine.

If a response is to be cached, it is sent through the cache's AddChannel; if a response is to be removed, it is sent through the RemoveChannel; and if a response needs to be found, the appropriate key is sent through the FindChannel.

Is this a thread-safe way of protecting the cache against race conditions, or is it possible that, for example, the same response could be sent to both the AddChannel and the RemoveChannel, leading to cache corruption?

I have read Go's memory model documentation and understand that a send on a channel is guaranteed to happen before the corresponding receive, but I'm somewhat confused as to whether this still holds when there are multiple channels communicating with a single instance.

Sorry if I worded the question badly and thanks for your help.


2 answers

  • dsbo44836129 · 6 years ago

    In principle, using channels is a valid way to ensure synchronized access to the struct's data. The problem I see with your approach is that your Run function does only a single read and then returns. As long as you call Run from the same goroutine every time, it might work, but there's an easier way.

    Memory safety can be guaranteed only if all struct access is confined to one, and only one, goroutine. The way I usually do that is to create a polling routine which loops on the channels, either indefinitely or until it is explicitly stopped.

    Here is an example. I create separate channels for each supported operation, mostly to make it clearer what is going on. You could just as easily use a single channel like chan interface{} and switch on the type of the message received to see what kind of operation to perform. This kind of setup is very loosely based on Erlang's message-passing concepts. It requires a fair amount of boilerplate to set up, but eliminates the need for mutex locks. Whether it is efficient and scalable is something you can only discover through testing. Note also that it carries a fair amount of allocation overhead.

    package main
    
    import "fmt"
    
    func main() {
        t := NewT()
        defer t.Close()
    
        t.Set("foo", 123)
        fmt.Println(t.Get("foo"))
    
        t.Set("foo", 321)
        fmt.Println(t.Get("foo"))
    
        t.Set("bar", 456)
        fmt.Println(t.Get("bar"))
    }
    
    type T struct {
        get  chan getRequest
        set  chan setRequest
        quit chan struct{}
    
        data map[string]int
    }
    
    func NewT() *T {
        t := &T{
            data: make(map[string]int),
            get:  make(chan getRequest),
            set:  make(chan setRequest),
            quit: make(chan struct{}, 1),
        }
    
        // Fire up the poll routine.
        go t.poll()
        return t
    }
    
    func (t *T) Get(key string) int {
        ret := make(chan int, 1)
        t.get <- getRequest{
            Key:   key,
            Value: ret,
        }
        return <-ret
    }
    
    func (t *T) Set(key string, value int) {
        t.set <- setRequest{
            Key:   key,
            Value: value,
        }
    }
    
    func (t *T) Close() { t.quit <- struct{}{} }
    
    // poll loops indefinitely and reads from T's channels to do
    // whatever is necessary. Keeping it all in this single routine
    // ensures all struct modifications are performed atomically.
    func (t *T) poll() {
        for {
            select {
            case <-t.quit:
                return
    
            case req := <-t.get:
                req.Value <- t.data[req.Key]
    
            case req := <-t.set:
                t.data[req.Key] = req.Value
            }
        }
    }
    
    type getRequest struct {
        Key   string
        Value chan int
    }
    
    type setRequest struct {
        Key   string
        Value int
    }
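
    The single-channel variant mentioned above, a chan interface{} with a type switch on the received message, might look like the following sketch. The Store, getReq, and setReq names are mine, not from the example above:

```go
package main

import "fmt"

// getReq and setReq are hypothetical message types; poll
// distinguishes them with a type switch.
type getReq struct {
	key   string
	reply chan int
}

type setReq struct {
	key   string
	value int
}

// Store funnels every operation through a single ops channel,
// so the map is only ever touched by the poll goroutine.
type Store struct {
	ops  chan interface{}
	data map[string]int
}

func NewStore() *Store {
	s := &Store{
		ops:  make(chan interface{}),
		data: make(map[string]int),
	}
	go s.poll()
	return s
}

// poll loops until the ops channel is closed, switching on the
// type of each message to decide which operation to perform.
func (s *Store) poll() {
	for msg := range s.ops {
		switch req := msg.(type) {
		case getReq:
			req.reply <- s.data[req.key]
		case setReq:
			s.data[req.key] = req.value
		}
	}
}

func (s *Store) Get(key string) int {
	reply := make(chan int, 1)
	s.ops <- getReq{key: key, reply: reply}
	return <-reply
}

func (s *Store) Set(key string, value int) {
	s.ops <- setReq{key: key, value: value}
}

// Close stops the poll goroutine by closing the ops channel.
func (s *Store) Close() { close(s.ops) }

func main() {
	s := NewStore()
	defer s.Close()

	s.Set("foo", 123)
	fmt.Println(s.Get("foo")) // prints 123
}
```

    The trade-off versus the per-operation channels above is one less channel to manage, at the cost of an interface allocation and a type switch per message.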
    
  • douqiangbei50208 · 6 years ago

    Yes, the select will only ever be either waiting or executing one case block. So if you only have one Run call active at any time and you know no other goroutines mutate the cache, then it is race-free.

    I assume you wanted an infinite loop around the select.
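
    A sketch of what that loop might look like. The struct mirrors the shape of the one in the question, but I have simplified the payload to strings and added a Found reply channel and a quit channel (both my additions) so it runs standalone:

```go
package main

import "fmt"

// Cache mirrors the question's struct, with string payloads so the
// sketch is self-contained. Found and quit are not in the original.
type Cache struct {
	cache  map[string]string
	Add    chan string
	Remove chan string
	Find   chan string
	Found  chan bool
	quit   chan struct{}
}

// Run wraps the select in a for loop, so it keeps serving requests
// until told to quit. All map access stays on this one goroutine.
func (c *Cache) Run() {
	for {
		select {
		case v := <-c.Add:
			c.cache[v] = v
		case v := <-c.Remove:
			delete(c.cache, v)
		case key := <-c.Find:
			_, ok := c.cache[key]
			c.Found <- ok
		case <-c.quit:
			return
		}
	}
}

func NewCache() *Cache {
	c := &Cache{
		cache:  make(map[string]string),
		Add:    make(chan string),
		Remove: make(chan string),
		Find:   make(chan string),
		Found:  make(chan bool),
		quit:   make(chan struct{}),
	}
	go c.Run()
	return c
}

func (c *Cache) Stop() { close(c.quit) }

func main() {
	c := NewCache()
	defer c.Stop()

	c.Add <- "resp1"
	c.Find <- "resp1"
	fmt.Println(<-c.Found) // true: the add was processed first

	c.Remove <- "resp1"
	c.Find <- "resp1"
	fmt.Println(<-c.Found) // false: the remove has been processed
}
```

    Because the channels are unbuffered, each send blocks until the loop has received it, so the operations are processed strictly in the order they are sent from a given goroutine.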

    Here's an example where you can see that the select does not enter another case block while one is executing: https://play.golang.org/p/zFeRPK1h8c

    By the way, 'self' is frowned upon as a receiver name in Go.
