I have to deal with huge integers in Golang that come in from a Swagger-defined REST API. Since Swagger needs a `Validate(strfmt.Registry)` method, I define my custom type like this:
```go
// BigInt is a big.Int, but includes a Validate() method for swagger.
// Once created, it can be used just like a big.Int.
type BigInt struct {
	*big.Int
}
```
Since it needs to be transformed to and from JSON, I implement the `json.Marshaler` and `json.Unmarshaler` interfaces:
```go
// UnmarshalJSON implements encoding/json.Unmarshaler.
// big.Int implements it as well, so delegating preserves full precision.
func (b *BigInt) UnmarshalJSON(data []byte) error {
	return json.Unmarshal(data, &b.Int)
}
```
```go
// MarshalJSON calls json.Marshal() on the embedded big.Int.
func (b *BigInt) MarshalJSON() ([]byte, error) {
	if b == nil {
		return []byte("null"), nil
	}
	return json.Marshal(b.Int)
}
```
Now I have realized that my custom type doesn't actually behave exactly like `big.Int`. To compare two `BigInt`s I have to write:
```go
example := BigInt{Int: &big.Int{}}
other := BigInt{Int: &big.Int{}}
example.Cmp(other.Int)
```
I cannot do

```go
example.Cmp(other)
```

which would be much cleaner. Creating a `BigInt` is also a terrible experience, which I have to wrap in a constructor like this:
```go
// NewBigInt creates a BigInt with its Int struct field initialized.
func NewBigInt() (i *BigInt) {
	return &BigInt{Int: &big.Int{}}
}
```
- Is this really how I'm supposed to do things?
- Why can't Golang treat `big.Int` like its other built-in types such as `int64`/`uint64`/`float64`?