2013-01-31 09:31
46 views


var buf bytes.Buffer

var outputBuffer [100]byte
b := []byte(`{"Name":"Wednesday","Age":6,"Parents":["Gomez","Morticia"],"test":{"prop1":1,"prop2":[1,2,3]}}`)

w := zlib.NewWriter(&buf)
r, _ := zlib.NewReader(&buf)
r.Read(outputBuffer)//cannot use outputBuffer (type [100]byte) as type []byte in function argument

What can I do to make this right? Thanks.


2 answers

  • drnmslpz42661 2013-01-31 15:36

    Well, you tried to use an array as a slice. It expected a []byte and you gave it a [100]byte. A []byte has a dynamic length, while a [100]byte is always 100 bytes. An array's size is part of its type; a [1]int is a different type from a [2]int. That's why almost everything operates on slices.
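The usual fix for the compile error above is to slice the array: `arr[:]` gives you a []byte view of the array's storage, which satisfies the Read signature. A minimal sketch:

```go
package main

import "fmt"

func main() {
	var arr [100]byte     // fixed-size array; [100]byte is its own type
	var s []byte = arr[:] // slicing the array yields a []byte backed by it

	fmt.Println(len(s), cap(s)) // 100 100
}
```

So `r.Read(outputBuffer[:])` would already get past the type error, though as the next paragraph explains, that alone is not enough.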

    But that's not the only thing. When you call Read on an io.Reader directly, it fills the target slice up to its current length, without growing it. If you had made your output slice 10 bytes wide (make([]byte, 10)), the output you would see would be {"Name":"W.

    var in bytes.Buffer
    b := []byte(`{"Name":"Wednesday","Age":6,"Parents":["Gomez","Morticia"],"test":{"prop1":1,"prop2":[1,2,3]}}`)
    w := zlib.NewWriter(&in)
    w.Write(b) // actually compress the JSON into `in`
    w.Close()  // Close flushes the zlib stream; without it the data is incomplete

    var out bytes.Buffer
    r, _ := zlib.NewReader(&in)
    io.Copy(&out, r) // decompress everything into `out`, however large

    But at this point, you might as well just pass os.Stdout into io.Copy, just like they do in the standard library docs. The only difference is we have kept a copy of the output; but what if the output is so large that you don't want to hold it in memory? That's why io.Copy takes an interface: you can take compressed data and write an uncompressed version of it directly to any output stream, including stdout, but also things like files, unix sockets, or network sockets.

