Most examples I've seen for updating text-based log files suggest checking that the file exists, loading it into a big string with file_get_contents(), appending your new log entries to that string, and then writing it back with file_put_contents().
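To be concrete, the pattern those examples describe looks roughly like this (the file name 'log.txt' is just a placeholder):

```php
<?php
// Read-modify-write pattern: load the whole file, append, write it all back.
$logFile = 'log.txt';

$existing = file_exists($logFile) ? file_get_contents($logFile) : '';
$existing .= date('c') . " New log entry\n";

file_put_contents($logFile, $existing);
```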
I may be over-thinking this, but I think I see two problems there. First, if the log file gets big, isn't it wasteful of the script's available memory to stuff the entire file contents into a variable? Second, if you do any processing between the 'get' and the 'put', you risk having multiple site visitors update the file between the two calls, resulting in lost log entries.
So for a script that is simply called (GET or POST) and exits after doing some work, wouldn't it be better to just build up your current (much shorter) log string as you go, and then, just before exit(), open the file in APPEND mode and write it?
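Something like this is what I have in mind (again, the path and the entries are just placeholders for illustration):

```php
<?php
// Accumulate log lines in memory during the request,
// then append them to the file in one write at the end.
$logFile = 'log.txt';

$buffer  = '';
$buffer .= date('c') . " request started\n";
// ... do the actual work, adding lines to $buffer as needed ...
$buffer .= date('c') . " request finished\n";

$fp = fopen($logFile, 'a');   // 'a' = append; the file is created if missing
fwrite($fp, $buffer);
fclose($fp);
```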
It would seem that either approach could lose data if there were no lock on the file between the get and the put. In the case of file_get_contents()/file_put_contents(), I see that file_put_contents() does accept a LOCK_EX flag, which I assume is meant to prevent that. But that still leaves the cost of reading a large file into a string and appending to it before writing the whole thing back. Wouldn't it be better to use fopen() in append mode with some kind of lock taken between the fopen() and the fwrite()?
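These are the two locking variants I mean; I'm not sure which is preferred (file name is again just an example):

```php
<?php
$logFile = 'log.txt';
$entry   = date('c') . " something happened\n";

// Variant 1: one call that appends under an exclusive lock.
file_put_contents($logFile, $entry, FILE_APPEND | LOCK_EX);

// Variant 2: explicit lock held between fopen() and fwrite().
$fp = fopen($logFile, 'a');
if ($fp !== false && flock($fp, LOCK_EX)) {
    fwrite($fp, $entry);
    fflush($fp);            // push buffered data out before releasing the lock
    flock($fp, LOCK_UN);
}
if ($fp !== false) {
    fclose($fp);
}
```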
I apologise, as I DO understand that "best way to do something" questions are not appreciated by the community. But surely there is a preferred way that addresses the concerns I'm raising?
Thanks for any help.