2018-07-31 09:30
157 views


I am in the process of writing a simple deployment tool which needs to take tar files from S3, extract them, and then upload them to our staging server. I would like to do this without storing the files locally, keeping everything in memory.

Here is my code to download files from S3:

func s3downloadFile(downloader *s3manager.Downloader, item string) {
    localitem := strings.Replace(item, s3base, "", -1)
    os.MkdirAll(path.Dir(localitem), os.ModePerm)
    file, err := os.Create(localitem)
    if err != nil {
        exitErrorf("Unable to open file %q, %v", localitem, err)
    }
    defer file.Close()

    numBytes, err := downloader.Download(file,
        &s3.GetObjectInput{
            Bucket: aws.String(s3bucket),
            Key:    aws.String(item),
        })
    if err != nil {
        exitErrorf("Unable to download item %q, %v", item, err)
    }

    fmt.Println("Downloaded", file.Name(), numBytes, "bytes")
}

I would like to avoid having to create the directories and files in this example and just keep everything in memory. I have left the extraction step out of my code. The next step would be to upload the files with go-scp like so:

// Finally, copy the file over
// Usage: CopyFile(fileReader, remotePath, permission)

client.CopyFile(f, "/path/to/remote/file", "0655")

My question then focuses on the file, err := os.Create(localitem) part of this code: how can I keep a file reader in memory instead of storing the file locally?


1 answer

  • doujian7132
    doujian7132 2018-07-31 10:27

    This is mentioned in the docs for Download:

    The w io.WriterAt can be satisfied by an os.File to do multipart concurrent downloads, or in memory []byte wrapper using aws.WriteAtBuffer.

    So use an aws.WriteAtBuffer instead of an *os.File:

    buf := new(aws.WriteAtBuffer)
    numBytes, err := downloader.Download(buf, &s3.GetObjectInput{
        Bucket: aws.String(s3bucket),
        Key:    aws.String(item),
    })
    if err != nil {
        exitErrorf("Unable to download item %q, %v", item, err)
    }
    fmt.Println("Downloaded", numBytes, "bytes")

    tr := tar.NewReader(bytes.NewReader(buf.Bytes()))
    // ...