doutao6380 2017-10-17 11:56
66 views
Accepted

How to implement an inactivity timeout on an HTTP download

I've been reading up on the various timeouts available on an HTTP request, and they all seem to act as hard deadlines on the total time of the request.
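
For illustration, this is the kind of hard deadline I mean (hardDeadlineGet is just a hypothetical helper, not part of my code): Client.Timeout covers the entire request, including reading the response body, so a slow-but-active download is still cut off once the limit passes.

func hardDeadlineGet(srcURL string) (*http.Response, error) {
    // hard deadline: the whole request, including reading the response body,
    // must finish within 30 seconds
    client := &http.Client{Timeout: 30 * time.Second}
    return client.Get(srcURL)
}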

I am running an HTTP download, and I don't want to impose a hard timeout past the initial handshake: I don't know anything about my users' connections and don't want to time out slow ones. What I would ideally like is to time out after a period of inactivity (when nothing has been downloaded for x seconds). Is there a built-in way to do this, or do I have to interrupt the download based on stat-ing the file?

The working code is a little hard to isolate, but I think these are the relevant parts. There is another loop that stats the file to provide progress information, but I will need to refactor a bit to use it to interrupt the download:

// HttpsClientOnNetInterface returns an http client using the named network interface (via proxy, if passed)
func HttpsClientOnNetInterface(interfaceIP []byte, httpsProxy *Proxy) (*http.Client, error) {

    log.Printf("Got IP addr : %s
", string(interfaceIP))
    // create address for the dialer
    tcpAddr := &net.TCPAddr{
        IP: interfaceIP,
    }

    // create the dialer & transport
    netDialer := net.Dialer{
        LocalAddr: tcpAddr,
    }

    var proxyURL *url.URL
    var err error

    if httpsProxy != nil {
        proxyURL, err = url.Parse(httpsProxy.String())
        if err != nil {
            return nil, fmt.Errorf("Error parsing proxy connection string: %s", err)
        }
    }

    httpTransport := &http.Transport{
        Dial:  netDialer.Dial,
        Proxy: http.ProxyURL(proxyURL),
    }

    httpClient := &http.Client{
        Transport: httpTransport,
    }

    return httpClient, nil
}

/*
StartDownloadWithProgress will initiate a download from a remote url to a local file,
providing download progress information
*/
func StartDownloadWithProgress(interfaceIP []byte, httpsProxy *Proxy, srcURL, dstFilepath string) (*Download, error) {

    // start an http client on the selected net interface
    httpClient, err := HttpsClientOnNetInterface(interfaceIP, httpsProxy)
    if err != nil {
        return nil, err
    }

    // grab the header
    headResp, err := httpClient.Head(srcURL)
    if err != nil {
        log.Printf("error on head request (download size): %s", err)
        return nil, err
    }

    // pull out total size
    size, err := strconv.Atoi(headResp.Header.Get("Content-Length"))
    if err != nil {
        headResp.Body.Close()
        return nil, err
    }
    headResp.Body.Close()

    errChan := make(chan error)
    doneChan := make(chan struct{})

    // spawn the download process
    go func(httpClient *http.Client, srcURL, dstFilepath string, errChan chan error, doneChan chan struct{}) {
        resp, err := httpClient.Get(srcURL)
        if err != nil {
            errChan <- err
            return
        }
        defer resp.Body.Close()

        // create the file
        outFile, err := os.Create(dstFilepath)
        if err != nil {
            errChan <- err
            return
        }
        defer outFile.Close()

        log.Println("starting copy")
        // copy to file as the response arrives
        _, err = io.Copy(outFile, resp.Body)

        // return err
        if err != nil {
            log.Printf("
 Download Copy Error: %s 
", err.Error())
            errChan <- err
            return
        }

        doneChan <- struct{}{}

        return
    }(httpClient, srcURL, dstFilepath, errChan, doneChan)

    // return Download
    return (&Download{
        updateFrequency: time.Microsecond * 500,
        total:           size,
        errRecieve:      errChan,
        doneRecieve:     doneChan,
        filepath:        dstFilepath,
    }).Start(), nil
}

Update: Thanks to everyone who had input on this.

I've accepted JimB's answer as it seems like a perfectly viable approach that is more generalised than the solution I chose (and probably more useful to anyone who finds their way here).

In my case I already had a loop monitoring the file size, so I threw a named error when it did not change for x seconds. It was much easier for me to pick up the named error through my existing error handling and retry the download from there.
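
Roughly, that watchdog looks like the sketch below (the watchFileSize and errStalled names, the poll interval, and the done channel are illustrative stand-ins, not my actual code): it stats the destination file on a ticker and returns the named error when the size has not grown for the idle window.

// errStalled stands in for the named error my retry logic matches on (illustrative name)
var errStalled = errors.New("download stalled: file size unchanged")

func watchFileSize(path string, idle, poll time.Duration, done <-chan struct{}) error {
    lastSize := int64(-1)
    lastChange := time.Now()
    ticker := time.NewTicker(poll)
    defer ticker.Stop()

    for {
        select {
        case <-done:
            return nil
        case <-ticker.C:
            fi, err := os.Stat(path)
            if err != nil {
                continue // the file may not exist yet
            }
            if fi.Size() != lastSize {
                lastSize = fi.Size()
                lastChange = time.Now()
            } else if time.Since(lastChange) > idle {
                return errStalled
            }
        }
    }
}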

I probably crash at least one goroutine in the background with my approach (I may fix this later with some signalling), but as this is a short-running application (it's an installer), this is acceptable (at least tolerable).


1 answer

  • dongposhi8677 2017-10-18 13:25

    Doing the copy manually is not particularly difficult. If you're unsure how to implement it properly, it's only a couple dozen lines from the io package to copy and modify to suit your needs (I only removed the ErrShortWrite clause, because we can assume that the standard library io.Writer implementations are correct).

    Here is an io.Copy work-alike that also takes a cancellation context and an idle timeout parameter. Every time there is a successful read, it signals the cancellation goroutine to continue and start a new timer.

    // idleTimeoutCopy copies src to dst like io.Copy, but calls cancel if no
    // read completes within timeout.
    func idleTimeoutCopy(dst io.Writer, src io.Reader, timeout time.Duration,
        ctx context.Context, cancel context.CancelFunc) (written int64, err error) {
        read := make(chan int)
        // watchdog: a successful read skips the timeout case and starts a new
        // wait; if nothing is read for `timeout`, cancel the context
        go func() {
            for {
                select {
                case <-ctx.Done():
                    return
                case <-time.After(timeout):
                    cancel()
                case <-read:
                }
            }
        }()
    
        buf := make([]byte, 32*1024)
        for {
            nr, er := src.Read(buf)
            if nr > 0 {
                read <- nr // signal the watchdog that data arrived
                nw, ew := dst.Write(buf[0:nr])
                written += int64(nw)
                if ew != nil {
                    err = ew
                    break
                }
            }
            if er != nil {
                if er != io.EOF {
                    err = er
                }
                break
            }
        }
        return written, err
    }
    

    While I used time.After for brevity, it's more efficient to reuse the Timer. This means taking care to use the correct reset pattern, as the return value of the Reset function is broken:

        t := time.NewTimer(timeout)
        for {
            select {
            case <-ctx.Done():
                return
            case <-t.C:
                cancel()
            case <-read:
                if !t.Stop() {
                    <-t.C
                }
                t.Reset(timeout)
            }
        }
    

    You could skip calling Stop altogether here, since in my opinion if the timer fires while calling Reset, it was close enough to cancel anyway, but it's often good to have the code be idiomatic in case this code is extended in the future.
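
    For completeness, here is a sketch of how this could be wired into the download goroutine from the question (the downloadWithIdleTimeout name, the context plumbing, and the 30-second idle window are assumptions for illustration, not part of the original code):

    // Sketch only: cancel is shared between the request and idleTimeoutCopy,
    // so an idle timeout aborts the in-flight request, which unblocks the
    // pending read and lets the copy loop return with an error.
    func downloadWithIdleTimeout(client *http.Client, srcURL, dstFilepath string) error {
        ctx, cancel := context.WithCancel(context.Background())
        defer cancel()

        req, err := http.NewRequest("GET", srcURL, nil)
        if err != nil {
            return err
        }
        resp, err := client.Do(req.WithContext(ctx))
        if err != nil {
            return err
        }
        defer resp.Body.Close()

        outFile, err := os.Create(dstFilepath)
        if err != nil {
            return err
        }
        defer outFile.Close()

        _, err = idleTimeoutCopy(outFile, resp.Body, 30*time.Second, ctx, cancel)
        return err
    }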

    This answer was accepted by the asker as the best answer.
