I want to extract the TXT records of a particular domain in Go. I looked at a bunch of blogs and tried the following code:
package main

import (
	"fmt"
	"net"
)

func main() {
	txts, err := net.LookupTXT("google.com")
	if err != nil {
		panic(err)
	}
	if len(txts) == 0 {
		fmt.Printf("no record")
	}
	for _, txt := range txts {
		fmt.Printf("%s\n", txt)
	}
}
When I execute this program, I get the following output.
docusign=05958488-4752-4ef2-95eb-aa7ba8a3bd0e
facebook-domain-verification=22rm551cu4k0ab0bxsw536tlds4h95
globalsign-smime-dv=CDYX+XFHUw2wml6/Gb8+59BsH31KzUr6c1l2BPvqKX8=
v=spf1 include:_spf.google.com ~all
This works according to my requirement: I checked the output against https://www.kitterman.com/spf/validate.html and it matches.
Now, whenever I change the input domain to geckoboard.com (say), I get the following error:
panic: lookup geckoboard.com on 127.0.0.53:53: read udp 127.0.0.1:38440->127.0.0.53:53: i/o timeout
goroutine 1 [running]:
main.main()
/home/maruthi/emailheader.go:11 +0x190
exit status 2
I understand that this is a timeout. However, when I run the same query on https://www.kitterman.com/spf/validate.html, I get the expected result within a fraction of a second.
Is there a better way to extract TXT records than net.LookupTXT("google.com")? If not, can someone suggest a good retry mechanism for the same code with a higher timeout value?
Update 1: I tried the answer provided by @Florian Weimer but still get a timeout.
$ dig +ignore +bufsize=512 geckoboard.com txt
; <<>> DiG 9.11.3-1ubuntu1.5-Ubuntu <<>> +ignore +bufsize=512 geckoboard.com txt
;; global options: +cmd
;; connection timed out; no servers could be reached
Update 2: As suggested by @ThunderCat, I set the timeout to a much higher value by adding options timeout:30 to /etc/resolv.conf. Both the dig query and my program now run for over 30 seconds before timing out.
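Since dig also times out, the problem may be the local stub resolver (127.0.0.53 is typically systemd-resolved) rather than Go itself. One thing worth trying is bypassing it entirely: send the query over TCP directly to a public resolver, which also sidesteps UDP fragmentation of large TXT responses. A sketch assuming Google's public resolver at 8.8.8.8 (the helper name newTCPResolver is hypothetical):

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

// newTCPResolver returns a resolver that sends every query over TCP to the
// given server (e.g. "8.8.8.8:53"), ignoring /etc/resolv.conf and the
// local stub resolver.
func newTCPResolver(server string) *net.Resolver {
	return &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			// Force TCP to the chosen server regardless of what the
			// resolver asked for.
			d := net.Dialer{Timeout: 10 * time.Second}
			return d.DialContext(ctx, "tcp", server)
		},
	}
}

func main() {
	r := newTCPResolver("8.8.8.8:53")
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()
	txts, err := r.LookupTXT(ctx, "geckoboard.com")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	for _, txt := range txts {
		fmt.Println(txt)
	}
}
```

If this succeeds while the default resolver times out, the issue is local DNS configuration, not the domain or the Go code.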