I'm working on a system that needs to resolve a country code from an IP address, and it needs to be accessible to multiple applications of all shapes and sizes across multiple servers.
At the moment, the code is obtained via a cURL request to a preexisting geo.php library, which I think resolves the country code from a .dat file downloaded from MaxMind. Apparently, though, this method has been running into problems under heavy load, perhaps due to a memory leak? No one's really sure.
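For context, the calling side is essentially one HTTP round trip per lookup, something like the sketch below (the URL and the "ip" parameter name are placeholders, not our real endpoint):

```php
<?php
// Current approach (sketch): one HTTP round trip per lookup.
// The URL and "ip" parameter are placeholders, not our real endpoint.
function countryViaCurl(string $ip): ?string
{
    $ch = curl_init('https://internal.example.com/geo.php?ip=' . urlencode($ip));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body rather than printing it
    curl_setopt($ch, CURLOPT_TIMEOUT, 2);           // fail fast if geo.php is struggling
    $body = curl_exec($ch);
    curl_close($ch);
    return $body === false ? null : trim($body);
}
```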
The powers that be have suggested that we should dispense with the cURLing and derive the country code from a local geocoding library, with the data also stored in a local file, or else possibly in a master file hosted on, e.g., Amazon S3. I'm feeling a bit wary, of course, of having a massive file of IP-to-country lookups stored unnecessarily in a hundred different places.
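If we went that route, I'd expect the lookup to be roughly the following sketch, assuming MaxMind's legacy GeoIP PHP API (geoip.inc) and a country-level .dat file on local disk; the file path here is just an example:

```php
<?php
// Local-file sketch, assuming MaxMind's legacy GeoIP PHP API (geoip.inc)
// and a country-level GeoIP.dat; the path is an example.
require_once 'geoip.inc';

function countryViaLocalDat(string $ip): ?string
{
    // GEOIP_MEMORY_CACHE loads the whole .dat into RAM on open;
    // GEOIP_STANDARD would read from disk on every lookup instead.
    $gi = geoip_open('/usr/local/share/GeoIP/GeoIP.dat', GEOIP_MEMORY_CACHE);
    $code = geoip_country_code_by_addr($gi, $ip);
    geoip_close($gi);
    return $code ?: null;
}
```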
One thing I've done is put the data in a MySQL database and obtain the required results by connecting to that. I don't know for sure, but it seems to me that our sites generally run swiftly and efficiently while connecting to centralised MySQL data, so wouldn't this be a good way of solving this particular problem?
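What I set up is roughly along these lines; the schema and names are simplified for illustration, with the ranges stored as unsigned integers so the lookup is a single indexed query:

```php
<?php
// Central-MySQL sketch; table and column names are illustrative:
//   CREATE TABLE ip2country (
//     ip_from INT UNSIGNED NOT NULL,  -- range start (INET_ATON form)
//     ip_to   INT UNSIGNED NOT NULL,  -- range end   (INET_ATON form)
//     country CHAR(2)     NOT NULL,
//     PRIMARY KEY (ip_to)
//   );
function countryViaMysql(PDO $db, string $ip): ?string
{
    // Filtering on ip_to with ORDER BY ... LIMIT 1 lets MySQL stop at the
    // first index entry whose range could contain the IP, instead of
    // scanning every row as a naive BETWEEN can.
    $stmt = $db->prepare(
        'SELECT country FROM ip2country
          WHERE ip_to >= INET_ATON(?) AND ip_from <= INET_ATON(?)
          ORDER BY ip_to ASC
          LIMIT 1'
    );
    $stmt->execute([$ip, $ip]);
    $code = $stmt->fetchColumn();
    return $code === false ? null : $code;
}
```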
My question, then: what are the relative overheads of obtaining the data in these different ways: cURLing it in, making a request to a remote database, reading it from a local file, or fetching a file hosted somewhere else? It's difficult to work out which of these is more or less efficient, and whether the relative gains in efficiency are likely to be big enough to matter...
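To put some numbers on it myself, I was planning to time each approach with a crude loop like the one below (the countryVia* functions refer to the sketches above; the IPs and iteration count are arbitrary, and $db is assumed to be an already-open PDO connection):

```php
<?php
// Crude micro-benchmark (sketch): time 1000 lookups per method.
// countryViaCurl / countryViaLocalDat / countryViaMysql are the sketches
// above; $db is assumed to be an already-open PDO connection.
$ips = ['8.8.8.8', '212.58.244.20', '203.0.113.7']; // arbitrary sample IPs

$methods = [
    'curl'  => fn (string $ip) => countryViaCurl($ip),
    'local' => fn (string $ip) => countryViaLocalDat($ip),
    'mysql' => fn (string $ip) => countryViaMysql($db, $ip),
];

foreach ($methods as $name => $lookup) {
    $start = microtime(true);
    for ($i = 0; $i < 1000; $i++) {
        $lookup($ips[$i % count($ips)]);
    }
    printf("%-6s %7.1f ms per 1000 lookups\n", $name, (microtime(true) - $start) * 1000);
}
```

Even a rough harness like this should at least show whether the differences are measured in microseconds or milliseconds per lookup, which is really what I need to know before deciding.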