2011-09-08 12:07



I'm working on a system that needs to get a country code based on IP address, and needs to be accessible to multiple applications of all shapes and sizes on multiple servers.

At the moment, this is obtained via a cURL request to a preexisting geo.php library, which I think resolves the country code from a .dat file downloaded from MaxMind. Apparently, though, this method has been running into problems under heavy load, perhaps due to a memory leak; no one's really sure.

The powers that be have suggested that we dispense with the cURLing and derive the country code from a locally installed geocoding library, with the data also stored in a local file, or possibly in a master file hosted on, e.g., Amazon S3. I'm feeling a bit wary, of course, of having a massive file of IP-to-country lookups stored unnecessarily in a hundred different places.

One thing I've tried is putting the data in a MySQL database and obtaining the required results by connecting to that. I don't know for sure, but our sites generally seem to run swiftly and efficiently while connecting to centralised MySQL data, so wouldn't this be a good way of solving this particular problem?
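For reference, the usual shape of such a table is one row per IP block, with the block's start and end stored as integers, and a range query against the integer form of the client's IP. Here's a minimal sketch of that lookup in Python (our code is presumably PHP, and the sample ranges below are made up purely for illustration):

```python
import bisect

# Hypothetical sample rows: (range_start, range_end, country_code),
# sorted by range_start -- the same shape as a MaxMind-style CSV import.
RANGES = [
    (16777216, 16777471, "AU"),      # 1.0.0.0 - 1.0.0.255
    (16777472, 16778239, "CN"),      # 1.0.1.0 - 1.0.3.255
    (3232235520, 3232301055, "ZZ"),  # 192.168.0.0/16 (placeholder)
]
STARTS = [r[0] for r in RANGES]

def ip_to_int(ip):
    """Convert a dotted-quad IPv4 address to an integer,
    like MySQL's INET_ATON()."""
    a, b, c, d = (int(x) for x in ip.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def country_for(ip):
    """Binary-search the sorted ranges for the block containing `ip`.
    Returns the country code, or None if the IP isn't in any block."""
    n = ip_to_int(ip)
    i = bisect.bisect_right(STARTS, n) - 1
    if i >= 0 and RANGES[i][0] <= n <= RANGES[i][1]:
        return RANGES[i][2]
    return None
```

In MySQL the equivalent query would be along the lines of `SELECT country FROM ip_ranges WHERE INET_ATON('1.0.0.5') BETWEEN range_start AND range_end`, with an index on `range_start`.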

My question, then: what are the relative overheads of obtaining the data in these different ways: cURLing it in, querying a remote database, reading a local file, or fetching a file hosted somewhere else? It's difficult to work out which of these is more or less efficient, and whether the relative gains in efficiency are likely to be big enough to matter...



  • dongzhouji4021, 10 years ago

    I had a website using cURL to get the country code text from MaxMind as well, for about 1.5 years, with no problems as far as I could tell. One thing I did do, though, was set a timeout of ~1-2 seconds for the cURL request and fall back to a default country code if it didn't respond in time; I didn't want to slow the page down any further than that. We went through about 1 million queries to MaxMind, I believe, so it must have been getting used. That's the main disadvantage of relying on an external service: you're at the mercy of their response time.
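    The timeout-and-fallback idea could be sketched like this (Python rather than PHP/cURL; the lookup URL and default country code are made up for the example):

    ```python
    import urllib.request
    import urllib.error

    # Hypothetical endpoint standing in for the real MaxMind lookup URL.
    LOOKUP_URL = "http://example.com/geoip?ip={ip}"
    DEFAULT_COUNTRY = "US"  # fall back rather than stall the page

    def lookup_country(ip, timeout=1.5):
        """Remote lookup with a short timeout; on any failure or timeout,
        return the default country code instead of blocking the page."""
        try:
            with urllib.request.urlopen(LOOKUP_URL.format(ip=ip),
                                        timeout=timeout) as resp:
                code = resp.read(64).decode(errors="replace").strip()
                # Only trust a plausible two-letter country code.
                return code if len(code) == 2 else DEFAULT_COUNTRY
        except (urllib.error.URLError, OSError):
            return DEFAULT_COUNTRY
    ```

    The equivalent in PHP would be setting `CURLOPT_TIMEOUT` (or `CURLOPT_TIMEOUT_MS`) on the cURL handle and checking for a false return before using the result.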

    As for having it locally, the main thing to be concerned about is: will it still be up to date a year from now? Obviously no new addresses are coming out of the current IPv4 pool, but ISPs could potentially buy/sell/trade IP blocks across countries (I don't know exactly how it works, but I've seen plenty of IPs from different countries and they never seem to follow any pattern). If that doesn't happen, disregard that part. The other advantage of having it locally is that you could use the MySQL query cache to store the result, so you don't have to worry about resources on subsequent page loads; or alternatively do what I did and store the country code in a cookie, checking that first before cURLing (or doing a lookup).
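    The cookie-first check might look something like this (a framework-agnostic Python sketch: `cookies` stands in for the request's cookie jar, `lookup` for whatever does the real work, and the `geo_cc` cookie name is made up):

    ```python
    def country_with_cache(ip, cookies, lookup):
        """Return (country_code, cookies_to_set). Check the client's
        cookie first; only run the real geo lookup on a cache miss."""
        code = cookies.get("geo_cc")
        if code and len(code) == 2:
            return code, {}  # cache hit: nothing new to set
        code = lookup(ip)
        # Caller sends this back to the client as a Set-Cookie header,
        # so subsequent page loads skip the lookup entirely.
        return code, {"geo_cc": code}
    ```

    First visit pays for one lookup; every page load after that is a cookie read.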

  • douduanque5850, 10 years ago

    You're framing the question the wrong way.
    There are really only two different methods:

    • a network lookup
    • a local resource request

    And only one answer:

    NEVER do any network lookups while serving a client request.

    So, as long as you're accessing a local resource (okay, within the limits of the same datacenter), you're all right.
    If you're requesting some distant resource, no matter whether it's via cURL or a database connection or whatever, you're in trouble.

    That rule seems obvious to me.
