dstew32424 2015-09-03 08:27
46 views
Accepted

What is the fastest way to concatenate several []byte together?

Right now I'm using the code below (as in BenchmarkEncoder()) and it's fast, but I'm wondering if there is a faster, more efficient way. I benchmark with GOMAXPROCS=1 and:

sudo -E nice -n -20 go test -bench . -benchmem -benchtime 3s


package blackbird

import (
    "bytes"
    "encoding/hex"
    "encoding/json"
    "log"
    "testing"
)

var (
    d1, d2, d3, d4, outBytes []byte
    toEncode [][]byte
)

func init() {
    var err error
    d1, err = hex.DecodeString("6e5438fd9c3748868147d7a4f6d355dd")
    if err != nil {
        log.Fatalf("hex decoding failed: %v", err)
    }
    d2, err = hex.DecodeString("0740e2dfa4b049f2beeb29cc304bdb5f")
    if err != nil {
        log.Fatalf("hex decoding failed: %v", err)
    }
    d3, err = hex.DecodeString("ab6743272358467caff7d94c3cc58e8c")
    if err != nil {
        log.Fatalf("hex decoding failed: %v", err)
    }
    d4, err = hex.DecodeString("7411c080762a47f49e5183af12d87330e6d0df7dd63a44808db4e250cdea0a36182fce4a309842e49f4202eb90184dd5b621d67db4a04940a29e981a5aea59be")
    if err != nil {
        log.Fatalf("hex decoding failed: %v", err)
    }
    toEncode = [][]byte{d1, d2, d3, d4}
}

func Encode(stuff [][]byte) []byte {
    return bytes.Join(stuff, nil)
}

func BenchmarkEncoderDirect(b *testing.B) {
    for i := 0; i < b.N; i++ {
        bytes.Join(toEncode, nil)
    }
}

func BenchmarkEncoder(b *testing.B) {
    for i := 0; i < b.N; i++ {
        Encode(toEncode)
    }
}

func BenchmarkJsonEncoder(b *testing.B) {
    for i := 0; i < b.N; i++ {
        outBytes, _ = json.Marshal(toEncode)
    }
}



2 answers

  • dongyi2006 2015-09-03 09:07

bytes.Join() is pretty fast, but it does some extra work appending the separator between the byte slices being joined. It does this even if the separator is an empty or nil slice.

    So if you care about the best performance (although it will be a slight improvement), you may do what bytes.Join() does without appending (empty) separators: allocate a big-enough byte slice, and copy each slice into the result using the built-in copy() function.

Try it on the Go Playground:

    func Join(s ...[]byte) []byte {
        n := 0
        for _, v := range s {
            n += len(v)
        }
    
        b, i := make([]byte, n), 0
        for _, v := range s {
            i += copy(b[i:], v)
        }
        return b
    }
    

    Using it:

    concatenated := Join(d1, d2, d3, d4)
    

    Improvements:

    If you know the total size in advance (or you can calculate it faster than looping over the slices), provide it and you can avoid having to loop over the slices in order to count the needed size:

    func JoinSize(size int, s ...[]byte) []byte {
        b, i := make([]byte, size), 0
        for _, v := range s {
            i += copy(b[i:], v)
        }
        return b
    }
    

Using it in your case (d1, d2 and d3 are 16 bytes each, hence the 48):

    concatenated := JoinSize(48 + len(d4), d1, d2, d3, d4)
    

    Notes:

But if your final goal is to write the concatenated byte slice into an io.Writer, then performance-wise it is better not to concatenate them at all, but to write each slice into it separately.

This answer was selected by the asker as the best answer.
