I've found a few questions that are similar to mine, but nothing that answers my specific question.
I want to upload CSV data to S3. My basic code is along the lines of the following (I've simplified getting the data for brevity; normally it's read from a database):
reader, writer := io.Pipe()

go func() {
	cWriter := csv.NewWriter(writer) // write the CSV into the pipe
	for _, line := range lines {
		cWriter.Write(line)
	}
	cWriter.Flush()
	writer.Close() // signal EOF to the read side
}()
sess := session.New( /* ... */ )
uploader := s3manager.NewUploader(sess)
result, err := uploader.Upload(&s3manager.UploadInput{
	Body: reader,
	// ...
})
The way I understand it, the code will wait for all the writing to finish and then upload the contents to S3, so I end up with the full contents of the file in memory. Is it possible to chunk the upload (perhaps using S3 multipart upload?) so that for larger files I'm only holding part of the data in memory at any one time?
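For what it's worth, my understanding of `io.Pipe` itself is that it is unbuffered: each `Write` blocks until the read side consumes it, so the pipe alone shouldn't accumulate the whole payload. Here's a minimal stdlib-only sketch (no S3 involved; `streamTotal` is just a name I made up for illustration) of what I mean:

```go
package main

import (
	"fmt"
	"io"
)

// streamTotal pushes a few writes through an io.Pipe and counts
// the bytes that arrive on the read side. Each Write blocks until
// the reader has consumed it, so the pipe never buffers the data.
func streamTotal() int {
	reader, writer := io.Pipe()
	go func() {
		for i := 0; i < 3; i++ {
			fmt.Fprintf(writer, "line %d\n", i) // 7 bytes per write
		}
		writer.Close() // reader sees io.EOF after this
	}()
	buf := make([]byte, 64)
	total := 0
	for {
		n, err := reader.Read(buf)
		total += n
		if err != nil { // io.EOF once the writer closes
			break
		}
	}
	return total
}

func main() {
	fmt.Println("total bytes read:", streamTotal())
}
```

So my question is really about what the uploader does with the reader it's given, not about the pipe itself.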