Nothing beats a pre-calculated lookup table, especially one stored in an array or slice (not a map), combined with a converter that allocates a result byte slice of exactly the right size:
// byteBinaries maps each byte value to its 8-digit binary representation.
var byteBinaries [256][]byte

func init() {
	for i := range byteBinaries {
		byteBinaries[i] = []byte(fmt.Sprintf("%08b", i))
	}
}

// strToBin converts each byte of s to 8 binary digits,
// copying pre-computed representations from the lookup table.
func strToBin(s string) string {
	res := make([]byte, len(s)*8)
	for i := len(s) - 1; i >= 0; i-- {
		copy(res[i*8:], byteBinaries[s[i]])
	}
	return string(res)
}
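Note that strToBin() still pays for two allocations per call: the result slice and the string(res) copy. As a sketch (not part of the original solution), a strings.Builder variant can merge them into one, since Builder.String() returns its internal buffer without copying; strToBinBuilder is a name introduced here for illustration:

```go
package main

import (
	"fmt"
	"strings"
)

var byteBinaries [256][]byte

func init() {
	for i := range byteBinaries {
		byteBinaries[i] = []byte(fmt.Sprintf("%08b", i))
	}
}

// strToBinBuilder writes the lookup-table entries into a pre-grown
// strings.Builder, so only the builder's buffer is allocated.
func strToBinBuilder(s string) string {
	var b strings.Builder
	b.Grow(len(s) * 8)
	for i := 0; i < len(s); i++ {
		b.Write(byteBinaries[s[i]])
	}
	return b.String()
}

func main() {
	fmt.Println(strToBinBuilder("\x01\xff")) // 0000000111111111
}
```

Whether this wins in practice depends on the workload, so it is worth benchmarking alongside the other variants.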
Testing it:
fmt.Println(strToBin("\x01\xff"))
Output (try it on the Go Playground):
0000000111111111
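As a sanity check, the lookup-table converter can be cross-checked against a straightforward fmt.Sprintf-based implementation; strToBinNaive below is a hypothetical reference function added here only for comparison:

```go
package main

import "fmt"

var byteBinaries [256][]byte

func init() {
	for i := range byteBinaries {
		byteBinaries[i] = []byte(fmt.Sprintf("%08b", i))
	}
}

func strToBin(s string) string {
	res := make([]byte, len(s)*8)
	for i := len(s) - 1; i >= 0; i-- {
		copy(res[i*8:], byteBinaries[s[i]])
	}
	return string(res)
}

// strToBinNaive formats every byte with fmt.Sprintf directly;
// slow, but an obvious reference for correctness checks.
func strToBinNaive(s string) string {
	res := ""
	for i := 0; i < len(s); i++ {
		res += fmt.Sprintf("%08b", s[i])
	}
	return res
}

func main() {
	for _, s := range []string{"", "\x00", "\x01\xff", "abc"} {
		fmt.Println(strToBin(s) == strToBinNaive(s))
	}
}
```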
Benchmarks
Let's see how fast it can get:
var texts = []string{
	"\x00",
	"123",
	"1234567890",
	"asdf;lkjasdf;lkjasdf;lkj108fhq098wf34",
}

func BenchmarkOrig(b *testing.B) {
	for n := 0; n < b.N; n++ {
		for _, t := range texts {
			binConvertOrig(t)
		}
	}
}

func BenchmarkLookup(b *testing.B) {
	for n := 0; n < b.N; n++ {
		for _, t := range texts {
			strToBin(t)
		}
	}
}
Results:
BenchmarkOrig-4      200000    8526 ns/op    2040 B/op    12 allocs/op
BenchmarkLookup-4   2000000     781 ns/op     880 B/op     8 allocs/op
The lookup version (strToBin()) is 11 times faster and uses less memory and fewer allocations. Essentially it only allocates for the result: the byte slice and its final conversion to string, which is unavoidable.