I recently wrote some code where I send a plain PUT request with the contents of a binary file in the request body and the `Content-Type` header set to the MIME type of that file. On the server, I handle it by reading the raw request body into a variable, performing some validation on its size (using `strlen`) and MIME type (using `finfo->buffer`), and then copying the contents to a file (using `file_put_contents`).
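A minimal sketch of that handler, for concreteness (the size limit, allowed MIME types, and destination path are my own assumptions, not part of the original setup):

```php
<?php
// Assumed limits -- not from the original setup.
const MAX_BYTES = 10 * 1024 * 1024;                 // 10 MB cap
const ALLOWED_TYPES = ['image/png', 'image/jpeg'];  // example whitelist

// Returns the detected MIME type if the body passes both checks, else null.
function validate_upload(string $body): ?string
{
    // strlen counts bytes (PHP strings are binary-safe), so it works as a size check.
    if (strlen($body) === 0 || strlen($body) > MAX_BYTES) {
        return null;
    }
    // finfo->buffer sniffs the content itself instead of trusting the header.
    $mime = (new finfo(FILEINFO_MIME_TYPE))->buffer($body);
    return in_array($mime, ALLOWED_TYPES, true) ? $mime : null;
}

// In the PUT handler: read the raw body, validate, then write it out.
if (PHP_SAPI !== 'cli') {
    $body = file_get_contents('php://input');
    if (validate_upload($body) === null) {
        http_response_code(415);
        exit;
    }
    file_put_contents('/tmp/upload.bin', $body);    // assumed destination
}
```

Note that `file_get_contents('php://input')` necessarily materialises the whole body as one PHP string, which is what my memory questions below are about.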
The usual (or at least popular) way of uploading files is `multipart/form-data` encoding, which PHP handles automatically. In that case, PHP writes the contents to a temporary file, which I believe lives on a storage device rather than in memory.
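For contrast, the server side of the multipart path looks roughly like this (a sketch; the field name `file` and the destination path are placeholders I made up):

```php
<?php
// By the time user code runs, PHP has already parsed the multipart body:
// the file contents sit in a temporary file (under upload_tmp_dir),
// described by the $_FILES superglobal.
function handle_multipart(array $files, string $dest): bool
{
    if (!isset($files['file']) || $files['file']['error'] !== UPLOAD_ERR_OK) {
        return false;                    // nothing uploaded, or an upload error
    }
    // tmp_name is a path to the temp file on disk, not an in-memory copy;
    // move_uploaded_file() moves it without reading it into a PHP string.
    return move_uploaded_file($files['file']['tmp_name'], $dest);
}

// Usage in the handler:
// handle_multipart($_FILES, '/var/uploads/upload.bin');
```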
Now, I went through the code at https://github.com/php/php-src/blob/master/main/rfc1867.c, and it seems to put the request body in memory anyway.
My questions are:

0) Does PHP store the full request body in memory (at least once), regardless of the `enctype` it receives?

1) Does reading the entire raw body and writing it to a file (using `file_put_contents`) use more memory than PHP's internal `multipart/form-data` handling mechanism?

1a) Does PHP's copy-on-write mechanism help me here? That is, am I allocating new blocks of memory when I store the raw body content in a variable (rather than a file), or am I just pointing to request data that is already in memory (if it is there)?

2) If what I'm doing does impact memory, do you have any suggestions for improvement that do not involve using `multipart/form-data`?
PS: I do not need to send anything else (e.g., metadata) with the file, nor do I need to send multiple files in a single request.

PPS: Sorry if my post is incomprehensible; I'd be happy to provide further details if anything seems confusing or incomplete.
EDIT
- Added question 0.
- Slightly modified the questions in hopes of making them less confusing.