doucuan5365 2014-01-15 13:28
453 views
Accepted

Can someone tell me what the behavior of io.ReadFull and bytes.Buffer.ReadFrom is in Go?

I ran into a problem while implementing a TCP client/server demo. It seems weird: when I use io.ReadFull(conn, aByteArr) or bytes.Buffer.ReadFrom(conn) on the server side, the server doesn't read the data on the connection until the client quits; in other words, the server is stuck. But I can read the data fine with the basic conn.Read(aBuffer). Why do those two methods behave so strangely?

Because I want my server to handle data of arbitrary size, I don't want to use the basic approach, i.e. conn.Read(), which requires allocating a byte slice of a fixed size first. Please help me.
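
To show what I mean more concretely, here is a stripped-down sketch of two of the calls I tried (readDemo is just an illustrative name, not part of my real server):

package main

import (
    "fmt"
    "io"
    "net"
)

// readDemo contrasts the two calls on the server side of a connection.
func readDemo(conn net.Conn) {
    buf := make([]byte, 100)

    // conn.Read returns as soon as some data has arrived; n can be
    // anything from 1 to 100. This is the call that works for me.
    n, err := conn.Read(buf)
    fmt.Println("Read:", n, err)

    // io.ReadFull keeps reading until the whole 100-byte buffer is
    // filled and only returns early on EOF or another error. This is
    // the call that seems to hang until the client quits.
    n, err = io.ReadFull(conn, buf)
    fmt.Println("ReadFull:", n, err)
}

func main() {
    l, err := net.Listen("tcp", ":4000")
    if err != nil {
        fmt.Println(err)
        return
    }
    conn, err := l.Accept()
    if err != nil {
        fmt.Println(err)
        return
    }
    defer conn.Close()
    readDemo(conn)
}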

Here is my full code. Client:

package main

import (
    "net"
    "fmt"
    "bufio"
    "time"
    "runtime"
)

func send(s string, ch chan string){
    conn, err := net.Dial("tcp", ":4000")
    if err != nil {
        fmt.Println(err)
    }   
    fmt.Fprintf(conn, s)
    fmt.Println("send: ", s)                                                                                                                                                                                      
    /*  
    s := "server run"
    conn.Write([]byte(s))
    */
    status, err := bufio.NewReader(conn).ReadString('\n')
    if err != nil {
        fmt.Println("error: ", err)
    }   
    ch <- status 
}
func main(){
    runtime.GOMAXPROCS(runtime.NumCPU())
    fmt.Println("cpu: ", runtime.NumCPU())
    ch := make(chan string, 5)
    timeout := time.After(10 * time.Second)
    i := 0

    for{
        go send(fmt.Sprintf("%s%d", "client", i), ch) 
        i++ 
        select {
            case ret := <-ch:
                fmt.Println(ret)
            case <-timeout:
                fmt.Println("time out")
                return
        }   
    }   
}

server:

package main

import (
    "net"
    "log"
    "io"
    "fmt"
    "time"
    //"bytes"
)

func main(){
    // Listen on TCP port 4000 on all interfaces.
    l, err := net.Listen("tcp", ":4000")
    if err != nil {
        log.Fatal(err)
    }   
    defer l.Close()
    for {
        // Wait for a connection.
        conn, err := l.Accept()
        if err != nil {
            log.Fatal(err)
        }   
        // Handle the connection in a new goroutine.
        // The loop then returns to accepting, so that
        // multiple connections may be served concurrently.
        go func(c net.Conn) {
            fmt.Println(c.RemoteAddr())
            defer c.Close()
            // Echo all incoming data.
            /*  basic
            buf := make([]byte, 100)
            c.Read(buf)
            fmt.Println(string(buf))
            //io.Copy(c, c)
            c.Write(buf)
            // Shut down the connection.
            */

            /* use a ReadFrom
            var b bytes.Buffer
            b.ReadFrom(conn)

            fmt.Println("length: ", b.Len())
            c.Write(b.Bytes())
            */

            // use io.ReadFull
            byteArr := make([]byte, 100)

            n, err := io.ReadFull(c, byteArr)
            if err != nil {
                fmt.Println(err)
            }   
            fmt.Println(n, byteArr[:n], time.Now())
            n, _ = c.Write(byteArr[:n])
            fmt.Println("write: ", n, time.Now())
        }(conn)
    }
}   

1 answer

  • dqnhfbc3738 2014-01-15 14:30

    First of all: you are never closing the connection you make in your client. On each invocation of send you dial a new conn but never flush or close that connection. This seems pretty strange and might be the sole issue here (e.g. if some layer buffers your data until a close or a flush).
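
    For illustration, a minimal sketch of your send with that cleanup added (untested against your exact setup; whether a plain Close alone unblocks your server depends on when it actually runs, see the last paragraph below):

    package main

    import (
        "bufio"
        "fmt"
        "net"
    )

    // send is your client's send with the missing cleanup added;
    // everything else in the client would stay as it is.
    func send(s string, ch chan string) {
        conn, err := net.Dial("tcp", ":4000")
        if err != nil {
            fmt.Println(err)
            return
        }
        // Release the connection when send returns.
        defer conn.Close()

        fmt.Fprint(conn, s) // Fprint, not Fprintf: s is data, not a format string
        fmt.Println("send: ", s)

        status, err := bufio.NewReader(conn).ReadString('\n')
        if err != nil {
            fmt.Println("error: ", err)
        }
        ch <- status
    }

    func main() {
        ch := make(chan string, 1)
        go send("client0", ch)
        fmt.Println(<-ch)
    }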

    It seems you think there should be an easy way to "read everything" from some connection or io.Reader. There isn't, and if you are upset about this, you shouldn't be. You want to read "data of arbitrary size", but arbitrary size can mean 418 petabytes. That is a lot and might take some time, and I'll bet you do not have the computing power to handle such data sizes. Reading arbitrary sizes basically calls for reading in chunks and processing in chunks, because you just cannot handle 418 petabytes at once.
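
    What processing in chunks looks like, as a rough sketch (readChunks and the process callback are made-up names):

    package main

    import (
        "fmt"
        "io"
        "net"
    )

    // readChunks reads from conn in fixed-size pieces and hands each
    // piece to process as it arrives, so memory stays bounded no matter
    // how much the peer sends.
    func readChunks(conn net.Conn, process func([]byte)) error {
        buf := make([]byte, 4096)
        for {
            n, err := conn.Read(buf)
            if n > 0 {
                process(buf[:n])
            }
            if err == io.EOF {
                return nil // the peer closed its side; nothing more will come
            }
            if err != nil {
                return err
            }
        }
    }

    func main() {
        l, err := net.Listen("tcp", ":4000")
        if err != nil {
            fmt.Println(err)
            return
        }
        conn, err := l.Accept()
        if err != nil {
            fmt.Println(err)
            return
        }
        defer conn.Close()
        err = readChunks(conn, func(p []byte) { fmt.Printf("%d bytes: %q\n", len(p), p) })
        fmt.Println("done:", err)
    }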

    Reading in chunks is what io.Reader provides. It is clumsy. That is the reason a lot of protocols start off with the size of the data: you read 6 bytes like " 1423", parse the integer, and know your message is 1423 bytes long. From there on you can use the convenience helpers provided by bufio.Scanner, bytes.Buffer, io.ReadFull and the like. And even those depend on EOFs and may fail.
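
    A sketch of that length-prefixed style with io.ReadFull (the 6-byte space-padded decimal header and the name readMessage are just for illustration, not an existing protocol):

    package main

    import (
        "fmt"
        "io"
        "net"
        "strconv"
        "strings"
    )

    // readMessage expects a 6-byte decimal length header like "  1423"
    // followed by exactly that many bytes of payload.
    func readMessage(conn net.Conn) ([]byte, error) {
        header := make([]byte, 6)
        if _, err := io.ReadFull(conn, header); err != nil {
            return nil, err
        }
        size, err := strconv.Atoi(strings.TrimSpace(string(header)))
        if err != nil {
            return nil, fmt.Errorf("bad length header %q: %v", header, err)
        }
        body := make([]byte, size)
        if _, err := io.ReadFull(conn, body); err != nil {
            return nil, err
        }
        return body, nil
    }

    func main() {
        l, err := net.Listen("tcp", ":4000")
        if err != nil {
            fmt.Println(err)
            return
        }
        conn, err := l.Accept()
        if err != nil {
            fmt.Println(err)
            return
        }
        defer conn.Close()
        msg, err := readMessage(conn)
        fmt.Println(len(msg), err)
    }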

    If your messages do not start with some length indication (or have a fixed length :-), you will have to read until EOF. For that EOF to arrive, you must close the sending side; otherwise the connection is still open and might decide to send more data sometime in the future.
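
    A sketch of that pattern, assuming plain TCP (clientSend and serverEcho are made-up names): the client half-closes its write side with (*net.TCPConn).CloseWrite after sending, so the server's ReadFrom sees EOF while the reply can still travel back:

    package main

    import (
        "bufio"
        "bytes"
        "fmt"
        "net"
    )

    // clientSend writes its message, then half-closes the write side so
    // the server sees EOF, while the read side stays open for the reply.
    func clientSend(msg string) (string, error) {
        conn, err := net.Dial("tcp", ":4000")
        if err != nil {
            return "", err
        }
        defer conn.Close()

        if _, err := conn.Write([]byte(msg)); err != nil {
            return "", err
        }
        // net.Dial over TCP returns a *net.TCPConn, which supports CloseWrite.
        if tc, ok := conn.(*net.TCPConn); ok {
            tc.CloseWrite() // "I'm done sending" => EOF on the server side
        }
        return bufio.NewReader(conn).ReadString('\n')
    }

    // serverEcho reads everything up to EOF and echoes it back with a newline.
    func serverEcho(conn net.Conn) {
        defer conn.Close()
        var b bytes.Buffer
        if _, err := b.ReadFrom(conn); err != nil { // returns once EOF arrives
            fmt.Println(err)
            return
        }
        conn.Write(append(b.Bytes(), '\n'))
    }

    func main() {
        l, err := net.Listen("tcp", ":4000")
        if err != nil {
            fmt.Println(err)
            return
        }
        go func() {
            for {
                conn, err := l.Accept()
                if err != nil {
                    return
                }
                go serverEcho(conn)
            }
        }()
        reply, err := clientSend("client0")
        fmt.Println(reply, err)
    }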

    This answer was accepted by the asker.
