I have a script that reads CSV files and returns them as arrays. It reads many different files with different contents, numbers of rows and columns, and different sizes (e.g. from <1 KB to >1 GB).
$data = [];
$handle = fopen($file, 'r'); // keep the path in $file, the resource in $handle
if ($handle === false) {
    throw new RuntimeException("Unable to open $file");
}
while (($line = fgetcsv($handle)) !== false) {
    $data[] = $line; // each parsed row becomes one entry of the array
}
fclose($handle);
I am looking for an elegant way to make sure that, when the script comes across a large file, it does not read the entire file, in order to avoid an excessive workload.
Here are some ideas:
Stopping after n rows. However, for a large file that has few rows but many columns, that wouldn't help.
Reading rows based on the file size. E.g. read all the rows of a 500 MB file but only 50% of the rows of a 1 GB file. However, if the last rows are a bunch of empty rows, or if the first rows have more columns than the rest, that solution wouldn't help either.
Monitoring the size of $data and stopping after a certain threshold (see the sketch below). I don't have any rationale for excluding this idea, except that I am not sure it is a common way of doing things.
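To illustrate the third idea, here is a rough sketch of what I have in mind. The readCsvCapped name and the 100 MB cap are placeholders I made up, and the memory_get_usage() delta is only an approximation of the memory actually held by $data:

<?php
// Sketch of idea 3: stop filling $data once it uses too much memory.
function readCsvCapped(string $path, int $maxBytes = 100 * 1024 * 1024): array
{
    $handle = fopen($path, 'r');
    if ($handle === false) {
        throw new RuntimeException("Unable to open $path");
    }

    $data = [];
    $baseline = memory_get_usage(); // allocation before any rows are stored
    while (($line = fgetcsv($handle)) !== false) {
        $data[] = $line;
        // The delta from the baseline approximates the memory held by $data.
        if (memory_get_usage() - $baseline > $maxBytes) {
            break; // threshold reached: return the partial result
        }
    }
    fclose($handle);

    return $data;
}

// e.g. $data = readCsvCapped('huge.csv'); // rows of huge.csv, capped at ~100 MB

This caps the actual memory consumed rather than the row count or the file size, so it should behave the same whether a file is wide, tall, or padded with empty rows. Is something like this considered acceptable practice?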