I have been playing around with Go and ran into a (non?)feature while running the following code:
a := 1   // int
b := 1.0 // float64
c := a / b // should be float64
When I run this I get the following compile-time error:
invalid operation: a / b (mismatched types int and float64)
I thought Go was supposed to be pretty good at type inference. Why is it necessary for me to write:
c := float64(a) / b // float64
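For reference, here is a complete version of my snippet with that conversion applied (this compiles and confirms the resulting type):

package main

import "fmt"

func main() {
	a := 1   // int
	b := 1.0 // float64

	// The explicit conversion is what the compiler insists on:
	c := float64(a) / b

	fmt.Printf("%v (%T)\n", c, c) // 1 (float64)
}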
In general, given two numeric types, I would expect c to be inferred as the smallest type that can represent values of both. I can't see this as being an oversight, so I am just trying to figure out why this behavior was decided upon. Is it for readability reasons only, or would my suggested behavior cause some kind of logical inconsistency in the language, or something?
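The only concrete downside I can think of myself is silent precision loss. Here is a minimal sketch (my own example, assuming the implicit conversion would go int → float64) of what such a rule could hide:

package main

import "fmt"

func main() {
	// float64 has a 53-bit mantissa, so this int64 value has no exact
	// float64 representation.
	big := int64(1<<53 + 1)

	// Go forces this conversion to be written out, which makes the
	// potentially lossy step visible in the source.
	asFloat := float64(big)

	fmt.Println(big)            // 9007199254740993
	fmt.Println(int64(asFloat)) // 9007199254740992 (the +1 is silently lost)
}

If mixed int/float64 arithmetic converted implicitly, that loss would happen with no visible marker in the code, so maybe requiring the explicit float64(...) is exactly the point? I'd still like to hear the actual rationale.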