duanrong6802
2019-02-16 16:59
585 views
Accepted

File gets truncated when reading

I am writing some json results in files in PHP on shared hosting (fwrite).

Then I read those files to extract json results (file_get_contents).

It happens occasionally (maybe once in more than a thousand reads) that when I read this file it appears truncated: I can only read a multiple of 32768 bytes from the beginning of the file.

I added some code to copy the file I am reading whenever the json string is not valid, and I then get 2 different files: the original one was correctly written, as it contains a valid json string, while the copy contains only the beginning of the original and has a size of x*32768 bytes.

Would you have any idea what the problem could be and how to solve it? (I don't know how to investigate further.)

Thank you

2 answers (sorted: default / newest)

  • dream3323 2019-02-18 18:32
    Accepted

    As suggested by @UlrichEckhardt's comment, it was due to a read/write concurrency problem: I was trying to read a file that was still being written. I solved this by simply waiting and then trying to read the file again.
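    The wait-and-retry approach described above could be sketched as follows. This is a minimal illustration, not the asker's actual code; the function and file names here are hypothetical.

    ```php
    <?php
    // Hypothetical helper sketching the wait-and-retry read. A truncated file
    // produces invalid JSON, so json_decode() returning null triggers a retry.
    function readJsonWithRetry(string $path, int $maxAttempts = 5, int $delayMicros = 200000)
    {
        for ($attempt = 1; $attempt <= $maxAttempts; $attempt++) {
            $raw = @file_get_contents($path);
            if ($raw !== false) {
                $decoded = json_decode($raw, true);
                // json_decode() returns null on invalid JSON (e.g. a truncated file)
                if ($decoded !== null) {
                    return $decoded;
                }
            }
            // File missing, truncated, or still being written: wait, then retry
            usleep($delayMicros);
        }
        return null; // give up after $maxAttempts tries
    }

    // Example: write a valid JSON file, then read it back
    file_put_contents('demo.json', json_encode(['status' => 'ok']));
    $result = readJsonWithRetry('demo.json');
    var_dump($result['status']); // string(2) "ok"
    ```

    Note that this only papers over the race: the reader can still catch the writer mid-write on the first attempts, so the retry count and delay have to cover the longest plausible write.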

  • doupu2722 2019-02-17 16:17

    Without example code it is impossible to give a 'fix my code' answer, but when doing this sort of file write/read programming you should follow a simple process (which, from the description, is missing one fairly critical step!).

    First, write to a TEMP file (you are writing to a file, but it is important here to write to a TEMP file; otherwise, you could have race conditions).

    An easy way to do that in PHP:

    $yourData = "whateverYourDataIs....";
    $goodfilename = 'whateverYourGoodFileNameIsForYourData.json';
    // MANY ways to build a unique temp name (lots of SO posts on it). 'Unique' may
    // not be needed if you only occasionally write, but it is a good safety measure
    // to avoid collisions, and time() works for many programs.
    $tempfilename = 'tempfile' . time();
    // Now write your data to the TEMP file. Note that fwrite() takes a file handle,
    // not a filename, so file_put_contents() is the simpler choice here.
    $written = file_put_contents($tempfilename, $yourData);
    if ($written === false) {
        // The write failed, so do whatever 'error' handling you may need.
        // Since it failed there may be no file, but it does no harm to try deleting it.
        @unlink($tempfilename);
    }
    else {
        // The write succeeded, so do a 'sanity check' to make sure the file is good
        // JSON (a 'paranoid' check, but "better safe than sorry", right?). Note that
        // json_decode() needs the file CONTENTS, not the filename, and returns null
        // on invalid JSON.
        if (json_decode(file_get_contents($tempfilename)) !== null) {
            // We know the file is good JSON, so now RENAME (this is really fast, so
            // collisions are almost impossible). NOTE: see
            // http://php.net/manual/en/function.rename.php comments for some potential
            // challenges and workarounds if you have trouble with rename.
            rename($tempfilename, $goodfilename);
        }
        // Now the GOOD file will contain your new data - and those read issues are
        // gone! (though never say 'never' - it may be possible, but very unlikely!)
    }


    This may or may not be your issue directly, and you will have to adapt it to fit your code, but as a safety factor - and a good way to avoid collisions - it should give you ~100% read success, which I believe is what you are after!

    If this doesn't help, then some direct code will be needed to provide a more complete answer.
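    If the temp-file-plus-rename swap is awkward on the shared host, another standard PHP technique for this kind of writer/reader race is advisory locking with flock(): the reader takes a shared lock and therefore waits until the writer releases its exclusive lock. This is a hedged sketch, not something from the answers above; the file name is hypothetical, and it only works if BOTH sides use flock(), since the lock is advisory.

    ```php
    <?php
    $path = 'results.json'; // hypothetical file name

    // Writer side: take an exclusive lock before touching the file.
    // Mode 'c' opens for writing, creates if missing, and does NOT truncate yet,
    // so we only empty the file once we actually hold the lock.
    $fh = fopen($path, 'c');
    if ($fh !== false) {
        if (flock($fh, LOCK_EX)) {
            ftruncate($fh, 0);                        // now safe to empty the file
            fwrite($fh, json_encode(['status' => 'ok']));
            fflush($fh);                              // push data out before unlocking
            flock($fh, LOCK_UN);
        }
        fclose($fh);
    }

    // Reader side: a shared lock blocks until any exclusive (writer) lock is released,
    // so the reader never sees a half-written file.
    $data = null;
    $fh = fopen($path, 'r');
    if ($fh !== false) {
        if (flock($fh, LOCK_SH)) {
            $data = json_decode(stream_get_contents($fh), true);
            flock($fh, LOCK_UN);
        }
        fclose($fh);
    }
    ```

    Compared with the rename approach, locking makes the reader wait instead of retry, at the cost of requiring every process that touches the file to cooperate.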

