2010-09-30 16:19



I'm working on a PHP webapp that does automatic resizing of images, and I'm thinking of storing the cached copies on an NFS-mounted NAS so it's easy for me to flush the cache when images are updated.

The only thing I'm worried about is what happens in general with NFS if two or more of the servers in the cluster try to create the same image cache file at the same time.

There's a pretty good chance that collisions like that will happen when the cache gets flushed for a content update, but I don't have a great way to test the scenario in development because I'm only working on a single box.

Anyone with experience on this?



  • douzhanrun0497 · 11 years ago

    It depends on how you open the file. If you open the file in "append" mode (fopen() with "a", which sets O_APPEND), the kernel positions every write at the current end of the file, so on a local filesystem two writers appending simultaneously don't overwrite each other: each write lands whole at the end, in whatever order the writes happen to be serialized. (There is no "end-of-file byte pattern" to overwrite; a Unix file's length is just metadata in its inode.) With two scripts each appending a couple of lines, you could expect something like:

    This was the old contents
    of the file
    The first script added
    The second script added
    this line (script 1)
    this line (script 2)
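
    That interleaving can be sketched with two handles opened in append mode on the same file; a minimal demonstration (the filename is a throwaway temp file, not from the thread):

```php
<?php
// Two handles opened in append mode ("a" => O_APPEND) on the same
// file: the kernel positions each fwrite() at the current end of
// the file, so neither writer overwrites the other's lines.
$file = tempnam(sys_get_temp_dir(), 'append_demo_');
file_put_contents($file, "This was the old contents\nof the file\n");

$h1 = fopen($file, 'a'); // "script 1"
$h2 = fopen($file, 'a'); // "script 2"
fwrite($h1, "The first script added\n");
fwrite($h2, "The second script added\n");
fwrite($h1, "this line (script 1)\n");
fwrite($h2, "this line (script 2)\n");
fclose($h1);
fclose($h2);

// All six lines are present, in the order the writes were issued.
echo file_get_contents($file);
```

    Note this demonstrates the local-filesystem behaviour; as discussed below, the same guarantee does not carry over to NFS.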

    If two write() calls arrive at effectively the same instant, the kernel simply serializes them (whichever call acquires the inode's write lock first goes first); there is no special "interrupt state" or random tie-breaking. The real caveat for your setup is that this atomic-append guarantee does not hold over NFS: the NFS client looks up the current file size and then writes at that offset, so two servers appending to the same file through NFS can clobber each other's data.

    Opening a file in "write" mode does not lock it, either. On Unix, any number of processes can open the same file for writing at once; the second PHP script will not get an error, it will just interleave (or overwrite) data with the first. If you need exclusive access you have to take an advisory lock yourself with flock(), and locking over NFS depends on the NLM/lockd service, which is historically unreliable. For an image cache, the usual approach is to avoid locking entirely: write each cached image to a uniquely named temporary file on the same NFS volume, then rename() it into place. rename() within a directory is atomic, so readers see either the old cache file or the complete new one, and if two servers regenerate the same image at once, the last rename simply wins with identical content.
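
    A minimal sketch of that write-then-rename pattern in PHP (cache_put() is an illustrative helper name, not part of any library):

```php
<?php
// Atomically publish a cached file: write the data to a uniquely
// named temp file on the same filesystem, then rename() it into
// place. rename() within one directory is atomic, so readers never
// observe a half-written cache entry.
function cache_put(string $cachePath, string $data): bool
{
    $dir = dirname($cachePath);
    // tempnam() creates a uniquely named file, so concurrent
    // writers (even on different servers) never share a temp file.
    $tmp = tempnam($dir, 'cache_');
    if ($tmp === false) {
        return false;
    }
    if (file_put_contents($tmp, $data) === false) {
        unlink($tmp);
        return false;
    }
    // Atomic replace; a concurrent writer's rename just wins last.
    return rename($tmp, $cachePath);
}
```

    If two servers call this for the same image at the same time, both writes succeed and whichever rename() lands last wins, which is harmless here because both produced the same resized image.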

    10 upvotes