I'm currently writing a small program which converts CSV files into structs to be used for further processing. The CSV lines look like this:
20140102,09:30,38.88,38.88,38.82,38.85,67004
I have 500 files, each about 20-30 MB. My code works just fine, but I can't help wondering if there isn't a better way to convert these files than what I'm doing now. First, reading the file and converting it to CSV records (pseudocode):
data, err := ioutil.ReadFile(path)
if err != nil {
    ...
}
r := csv.NewReader(bytes.NewReader(data))
records, err := r.ReadAll()
if err != nil {
    ...
}
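As an aside, I also wondered whether streaming the records one at a time instead of using ReadAll would change anything. Something like this untested sketch (needs "encoding/csv", "io" and "os"):

f, err := os.Open(path)
if err != nil {
    ...
}
defer f.Close()
r := csv.NewReader(f)
for {
    record, err := r.Read()
    if err == io.EOF {
        break
    }
    if err != nil {
        ...
    }
    // handle record here instead of collecting everything first
}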
Either way, I then loop over each record and do:
parsedTime, err := time.Parse("2006010215:04", record[0]+record[1])
if err != nil {
    return model.ZorroT6{}, time.Time{}, err
}
t6.Date = ConvertToOle(parsedTime)
if open, err := strconv.ParseFloat(record[2], 32); err == nil {
    t6.Open = float32(open)
}
if high, err := strconv.ParseFloat(record[3], 32); err == nil {
    t6.High = float32(high)
}
if low, err := strconv.ParseFloat(record[4], 32); err == nil {
    t6.Low = float32(low)
}
if close, err := strconv.ParseFloat(record[5], 32); err == nil {
    t6.Close = float32(close)
}
if vol, err := strconv.ParseInt(record[6], 10, 32); err == nil {
    t6.Vol = int32(vol)
}
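For reference, the struct I'm filling looks roughly like this (simplified; the field types follow from the conversions above, and Date is whatever numeric type ConvertToOle returns, a float64 here):

type ZorroT6 struct {
    Date  float64 // OLE automation date, filled via ConvertToOle
    Open  float32
    High  float32
    Low   float32
    Close float32
    Vol   int32
}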
For example, I have to go through []byte -> string -> float64 -> float32 just to get my float values. What could I do to improve this code?
EDIT: Just to be clear, I don't really need to improve the performance; I'm just trying to better understand Go and what performance optimizations could be applied to a problem like this. For example, it seems like a lot of overhead to create loads of strings and float64 values when I have a byte slice and want a float32.
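To make that overhead concrete, this is the chain for a single field today (variable names are just for illustration):

// the string was already allocated by the csv reader from the raw bytes
s := record[2]
// strconv only accepts a string and always returns a float64
f64, err := strconv.ParseFloat(s, 32)
if err != nil {
    ...
}
// only now do I get the float32 I actually want to store
f32 := float32(f64)

As far as I can tell, parsing straight out of the original byte slice would require a hand-written number parser (or an unsafe []byte-to-string conversion), since strconv has no []byte variants.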