dquoj04882 2013-09-24 12:27
Viewed 94 times · Accepted

Golang memory explosion: newdefer

I have a program that listens on UDP for log traffic, attempts to parse it, and then inserts it into Redis. At a certain level of traffic, memory seems to "explode" (rapidly increasing from a few hundred megabytes to several gigabytes).

I've grabbed a heap profile shortly after this happens and it returns the following:

(pprof) top100 -cum
Total: 1731.3 MB
     0.0   0.0%   0.0%   1731.3 100.0% gosched0
  1162.5  67.1%  67.1%   1162.5  67.1% newdefer
     0.0   0.0%  67.1%   1162.5  67.1% runtime.deferproc
     0.0   0.0%  67.1%   1162.0  67.1% main.TryParse
     0.0   0.0%  67.1%    438.0  25.3% runtime.main
   301.5  17.4%  84.6%    437.5  25.3% main.main
   136.0   7.9%  92.4%    136.0   7.9% runtime.malg
     0.0   0.0%  92.4%    136.0   7.9% runtime.newproc
     0.0   0.0%  92.4%    136.0   7.9% runtime.newproc1
     1.5   0.1%  92.5%    131.3   7.6% main.RedisCuller
     0.0   0.0%  92.5%    108.5   6.3% github.com/garyburd/redigo/redis.(*conn).Do
     0.0   0.0%  92.5%    108.5   6.3% github.com/garyburd/redigo/redis.(*conn).readReply
     0.0   0.0%  92.5%    108.5   6.3% github.com/garyburd/redigo/redis.(*pooledConnection).Do
    95.8   5.5%  98.0%     95.8   5.5% cnew
     0.0   0.0%  98.0%     95.8   5.5% runtime.cnewarray
    34.0   2.0% 100.0%     34.0   2.0% runtime.convT2E
     0.0   0.0% 100.0%      0.5   0.0% main.init
     0.0   0.0% 100.0%      0.5   0.0% net/http/pprof.init
     0.0   0.0% 100.0%      0.5   0.0% sync.(*Once).Do
     0.0   0.0% 100.0%      0.5   0.0% syscall.Getenv
     0.0   0.0% 100.0%      0.5   0.0% time.init
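
(As an aside: the listing above shows net/http/pprof.init, so the program appears to link in the pprof HTTP endpoint. A minimal sketch of that kind of setup — the listen address here is purely illustrative:)

    package main

    import (
            "log"
            "net/http"

            _ "net/http/pprof" // registers the /debug/pprof/* handlers on the default mux
    )

    func init() {
            // Serve the profiling endpoints in the background; a heap profile can then
            // be pulled by pointing go tool pprof at http://localhost:6060/debug/pprof/heap.
            go func() {
                    log.Println(http.ListenAndServe("localhost:6060", nil))
            }()
    }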

When the program is "healthy" the profile looks more like:

(pprof) top20 -cum
Total: 186.7 MB
     0.0   0.0%   0.0%    186.7 100.0% gosched0
     0.5   0.3%   0.3%    122.7  65.7% main.RedisCuller
     0.0   0.0%   0.3%    103.5  55.4% github.com/garyburd/redigo/redis.(*pooledConnection).Do
     0.0   0.0%   0.3%    103.0  55.2% github.com/garyburd/redigo/redis.(*conn).Do
     0.0   0.0%   0.3%    103.0  55.2% github.com/garyburd/redigo/redis.(*conn).readReply
    88.2  47.2%  47.5%     88.2  47.2% cnew
     0.0   0.0%  47.5%     88.2  47.2% runtime.cnewarray
     0.0   0.0%  47.5%     57.0  30.5% main.TryParse
    57.0  30.5%  78.0%     57.0  30.5% newdefer
     0.0   0.0%  78.0%     57.0  30.5% runtime.deferproc
    34.0  18.2%  96.3%     34.0  18.2% runtime.convT2E
     1.5   0.8%  97.1%      6.5   3.5% main.main
     0.0   0.0%  97.1%      6.5   3.5% runtime.main
     5.0   2.7%  99.7%      5.0   2.7% runtime.malg
     0.0   0.0%  99.7%      5.0   2.7% runtime.newproc
     0.0   0.0%  99.7%      5.0   2.7% runtime.newproc1
     0.0   0.0%  99.7%      0.5   0.3% bufio.NewWriter
     0.0   0.0%  99.7%      0.5   0.3% bufio.NewWriterSize
     0.0   0.0%  99.7%      0.5   0.3% github.com/garyburd/redigo/redis.(*Pool).get
     0.0   0.0%  99.7%      0.5   0.3% github.com/garyburd/redigo/redis.(*pooledConnection).get

The only defer I have in my code is surrounding the parsing function (since it can frequently fail):

    for {
            rlen, _, err := sock.ReadFromUDP(buf[0:])
            checkError(err) 
            raw := logrow.RawRecord(string(buf[:rlen]))
            go TryParse(raw, c)
    }

    ...

    func TryParse(raw logrow.RawRecord, c chan logrow.Record) {
        defer func() {
                if r := recover(); r != nil {
                        //log.Printf("Failed Parse due to panic: %v", raw)
                        return
                }
        }()
        rec, ok := logrow.ParseRawRecord(raw)
        if !ok {
                return
                //log.Printf("Failed Parse: %v", raw)
        } else {
                c <- rec
        }
    }

Does anyone see anything obvious I'm doing wrong that could cause memory to balloon so suddenly? Or can anyone point me in the right direction for pinning it down?

Edit (More code around the logrow.Record Channel):

    c := make(chan logrow.Record)
    ...
    go RedisInserter(c, bucket, retention, pool)

    func RedisInserter(c chan logrow.Record, bucket, retention int, p *redis.Pool) {
            for rec := range c {
                    logrow.SendToRedis(rec, bucket, retention, p)
            }
    }

1 Answer

  • dsvyc66464 2013-09-27 16:13

    It turned out to be a leak involving the closure in the deferred function (a problem in Go itself), as seen in:

        defer func() {
                if r := recover(); r != nil {
                        //log.Printf("Failed Parse due to panic: %v", raw)
                        return
                }
        }()
    

    So rebuilding with the latest version of Go resolved the problem (reference: https://codereview.appspot.com/10784043/). The smarter thing to do, of course, is to design the parsing function (logrow.ParseRawRecord) so that it never slices out of bounds and panics in the first place.
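
    For illustration only (the real logrow.ParseRawRecord is not shown in the question, so the types and fields below are assumptions), a bounds-checked parser might look roughly like this, reporting failure through the ok result instead of panicking on malformed input:

        package logrow

        import "strings"

        // RawRecord and Record are assumed shapes for this sketch; the real
        // definitions live in the asker's logrow package and may differ.
        type RawRecord string

        type Record struct {
                Host    string
                Message string
        }

        // ParseRawRecord validates the input before slicing, so a malformed
        // record yields ok == false rather than an out-of-range panic that the
        // deferred recover in TryParse has to absorb.
        func ParseRawRecord(raw RawRecord) (Record, bool) {
                s := string(raw)
                sep := strings.Index(s, " ")
                if sep < 0 {
                        return Record{}, false
                }
                return Record{Host: s[:sep], Message: s[sep+1:]}, true
        }

    With parsing failures reported this way, the defer/recover wrapper in TryParse becomes a last-resort safety net rather than the normal path for bad records.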
