It's because of the implementation-specific size of int: 32 or 64 bits. Use int64 for consistent results across platforms. For example,
package main

import "fmt"

func main() {
	var maxLen int64 = 0
	for i := int64(0); i < 1000000; i++ {
		length := GetCollatzSeqLen(i)
		if length > maxLen {
			maxLen = length
		}
	}
	fmt.Println(maxLen)
}

// GetCollatzSeqLen returns the length of the Collatz sequence
// starting at n, counting n itself.
func GetCollatzSeqLen(n int64) int64 {
	var length int64 = 1
	for n > 1 {
		length++
		if n%2 == 0 {
			n = n / 2
		} else {
			n = 3*n + 1
		}
	}
	return length
}
Output:
525
Playground: http://play.golang.org/p/0Cdic16edP
The Go Programming Language Specification
Numeric types
int32 the set of all signed 32-bit integers (-2147483648 to 2147483647)
int64 the set of all signed 64-bit integers (-9223372036854775808 to 9223372036854775807)
The value of an n-bit integer is n bits wide and represented using
two's complement arithmetic.
There is also a set of predeclared numeric types with
implementation-specific sizes:
uint either 32 or 64 bits
int same size as uint
To see the implementation-specific size of int, run this program.
package main

import (
	"fmt"
	"runtime"
	"strconv"
)

func main() {
	fmt.Println(
		"For "+runtime.GOARCH+" the implementation-specific size of int is",
		strconv.IntSize, "bits.",
	)
}
</fmt>
Output:
For amd64 the implementation-specific size of int is 64 bits.
On Go Playground: http://play.golang.org/p/7O6dEdgDNd
For amd64p32 the implementation-specific size of int is 32 bits.