duanrong6802
2019-02-16 16:59

File gets truncated when reading

Accepted

I am writing some JSON results to files in PHP on shared hosting (fwrite).

Then I read those files to extract the JSON results (file_get_contents).

It sometimes happens (maybe once in over a thousand reads) that the file appears truncated when I read it: I can only read a multiple of 32768 bytes from the beginning of the file.

I added some code to copy the file I am reading whenever the JSON string is not valid, and I then end up with two different files: the original one was written correctly, as it contains a valid JSON string, while the copy contains only the beginning of the original and has a size of x*32768 bytes.

Would you have any idea what the problem could be and how to solve it? (I don't know how to investigate further.)

Thank you


2 answers

  • dream3323 2 years ago

    As suggested by @UlrichEckhardt's comment, it was due to a read/write concurrency problem: I was trying to read a file that was still being written. I solved this by simply waiting and then trying to read the file again (see the sketch below).
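
    For illustration, a minimal wait-and-retry sketch in PHP might look like the following. The function name, attempt count, and delay are hypothetical placeholders, not from the original post:

    // Hypothetical helper: retry reading until the file parses as valid JSON,
    // giving a concurrent writer time to finish.
    function readJsonWithRetry($filename, $maxAttempts = 5, $delayMicros = 100000) {
        for ($attempt = 0; $attempt < $maxAttempts; $attempt++) {
            $contents = file_get_contents($filename);
            if ($contents !== false) {
                $decoded = json_decode($contents, true);
                if ($decoded !== null) {
                    return $decoded; // got a complete, valid JSON document
                }
            }
            usleep($delayMicros); // wait 0.1s; the writer may still be mid-write
        }
        return null; // give up after $maxAttempts tries
    }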

  • doupu2722 2 years ago

    Without example code it is impossible to give a 'fix my code' answer, but when doing this sort of file write/read programming you should follow a simple process (which, from your description, is missing one fairly critical step!).

    First, write to a TEMP file (you are already writing to a file, but it is important here to write to a TEMP file first; otherwise, you could have race conditions).

    An easy way to do that in PHP:

    $yourData = "whateverYourDataIs....";
    $goodfilename = 'whateverYourGoodFileNameIsForYourData.json';
    $tempfilename = 'tempfile' . time(); // MANY ways to do this (lots of SO posts on it) - just get a unique name every time you write. ('unique' may not be needed if you only occasionally write, but it is a good safety measure to avoid collisions, and time() works for many programs.)
    // Now, write your data to the TEMP file (file_put_contents opens, writes, and closes in one call).
    $fwrite = file_put_contents($tempfilename, $yourData);
    if ($fwrite === false) {
        // The write failed, so do whatever 'error' handling you may need.
        // Since it failed there should be no file, but it is not a bad idea to attempt to delete it.
        unlink($tempfilename);
    }
    else {
        // The write succeeded, so do a 'sanity check' to make sure the file is good JSON
        // (this is a 'paranoid' check, but "better safe than sorry", right?).
        if (json_decode(file_get_contents($tempfilename)) !== null) {
            // We know the file is good JSON, so now RENAME it into place (this is really fast,
            // so collisions are almost impossible). NOTE: see the comments at
            // http://php.net/manual/en/function.rename.php for some potential challenges
            // and workarounds if you have trouble with rename.
            rename($tempfilename, $goodfilename);
        }
        // Now the GOOD file will contain your new data - and those read issues are gone!
        // (Though never say 'never' - it may be possible, but very unlikely!)
    }
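
    For completeness, the reading side then needs no special handling. A minimal sketch, assuming $goodfilename from above and that the temp file and the final file live on the same filesystem:

    // Reader side: no locking or retries needed, because rename() on the same
    // filesystem replaces the file in a single step, so a reader sees either the
    // old complete file or the new complete file, never a partial one.
    $data = json_decode(file_get_contents($goodfilename), true);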
    

    This may or may not be your issue directly, and you will have to adapt it to fit your code, but as a safety factor - and a good way to avoid collisions - it should give you ~100% read success, which I believe is what you are after!

    If this doesn't help, then some direct code will be needed to provide a more complete answer.
