I wrote a function to convert a byte slice to an integer.
The function I created is a loop-based implementation of the expression Rob Pike published here:
http://commandcenter.blogspot.com/2012/04/byte-order-fallacy.html
Here is Rob's code:
i = (data[0]<<0) | (data[1]<<8) | (data[2]<<16) | (data[3]<<24);
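
For reference, here is how I read Rob's expression in Go (the toInt32 name and the fixed four-byte, little-endian input are my assumptions, not from his post):

package main

import "fmt"

// toInt32 is a direct transliteration of Rob's one-liner, assuming data
// holds exactly four little-endian bytes.
func toInt32(data []byte) int32 {
    return int32(data[0]) | int32(data[1])<<8 | int32(data[2])<<16 | int32(data[3])<<24
}

func main() {
    fmt.Println(toInt32([]byte{255, 255, 255, 255})) // prints -1
}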
My first implementation (toInt2 in the playground) doesn't work as I expected: it behaves as if the int value were initialized as a uint. This seems really strange, but it must be platform specific, because the Go playground reports a different result than my machine (a Mac).
Can anyone explain why these functions behave differently on my Mac?
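
I suspect int has a different width on the two platforms; a quick check like this (using strconv.IntSize, which reports the width of int in bits) is how I'd confirm that, though I haven't run it on both:

package main

import (
    "fmt"
    "strconv"
)

func main() {
    // strconv.IntSize is the width of int in bits on this platform
    // (64 on amd64, 32 on amd64p32).
    fmt.Println("int is", strconv.IntSize, "bits")
}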
Here's the link to the playground with the code: http://play.golang.org/p/FObvS3W4UD
Here's the code from the playground (for convenience):
/*
Output on my machine:
amd64 darwin go1.3 input: [255 255 255 255]
-1
4294967295
Output on the Go playground:
amd64p32 nacl go1.3 input: [255 255 255 255]
-1
-1
*/
package main

import (
    "fmt"
    "runtime"
)

func main() {
    input := []byte{255, 255, 255, 255}
    fmt.Println(runtime.GOARCH, runtime.GOOS, runtime.Version(), "input:", input)
    fmt.Println(toInt(input))
    fmt.Println(toInt2(input))
}

func toInt(bytes []byte) int {
    var value int32 = 0 // initialized with int32
    for i, b := range bytes {
        value |= int32(b) << uint(i*8)
    }
    return int(value) // converted to int
}

func toInt2(bytes []byte) int {
    var value int = 0 // initialized with plain old int
    for i, b := range bytes {
        value |= int(b) << uint(i*8)
    }
    return value
}