I am new to Laravel and PHP. I am working on a problem where I need to insert a large amount of data into the database, which takes a lot of time, so I was thinking of dividing the data into chunks and creating multiple threads to insert those chunks. I am using PHP 7.2.4 and Laravel 5.5.
Here is my code:
$row = 1;
$contactNumberList = [];
if (($handle = fopen($path . "/" . $fileName . "." . $fileExt, "r")) !== FALSE) {
    while (($data = fgetcsv($handle, 1000, ",")) !== FALSE) {
        $row++;
        $contact_number = $this->filter_contact($data[0]);
        if ($contact_number != null) {
            $contactNumberList[] = ['contact_number' => $contact_number];
        }
    }
    fclose($handle);
}
//insert records
\DB::table('contacts_'. $list_id)->insert($contactNumberList);
Is multi-threading possible in Laravel 5.5?
- dpdp42233 2018-09-06 05:48
You can leverage Laravel queues. To quote Laravel's documentation:
Laravel queues provide a unified API across a variety of different queue backends [...]. Queues allow you to defer the processing of a time consuming task, such as sending an email, until a later time. Deferring these time consuming tasks drastically speeds up web requests to your application.
If you run multiple queue workers at the same time, you can achieve a form of "multi-threading" (I know it is not the same thing, but the effect is similar: parallel processing).
You could create an Artisan command that splits the import batch into several chunks and then dispatches multiple queue jobs, one per chunk.
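A minimal sketch of such a job, assuming Laravel 5.5's standard job scaffolding; the class name `ImportContactsChunk` and its constructor parameters are hypothetical. Note that the job receives only the file path and a line range, not the data itself:

```php
<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

// Hypothetical job: imports one chunk of a CSV file, identified by a
// start line and a line count, instead of carrying the rows in the queue.
class ImportContactsChunk implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    private $path;
    private $listId;
    private $startLine;
    private $lineCount;

    public function __construct($path, $listId, $startLine, $lineCount)
    {
        $this->path = $path;
        $this->listId = $listId;
        $this->startLine = $startLine;
        $this->lineCount = $lineCount;
    }

    public function handle()
    {
        $rows = [];
        if (($handle = fopen($this->path, 'r')) !== false) {
            // Skip the lines that belong to earlier chunks.
            for ($i = 0; $i < $this->startLine; $i++) {
                if (fgets($handle) === false) {
                    break;
                }
            }
            // Read only this chunk's share of lines.
            for ($i = 0; $i < $this->lineCount; $i++) {
                $data = fgetcsv($handle, 1000, ',');
                if ($data === false) {
                    break;
                }
                $rows[] = ['contact_number' => $data[0]];
            }
            fclose($handle);
        }
        \DB::table('contacts_' . $this->listId)->insert($rows);
    }
}
```

With several `queue:work` processes running, chunks dispatched this way are processed in parallel.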
You didn't mention the source of the data you want to import, so it is difficult to provide more specific implementation details, but the key is not to send the chunks of data to each job; instead, give each job the information it needs to extract its data from the batch by itself. That way you don't store huge amounts of data in your queue, which has limited message storage capacity.
For instance, assuming your data comes from a file, your Artisan command could read the number of lines in the file, divide it by the number of records you want per chunk, and create one job whose parameters indicate which lines of the file to process.
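That dispatching step could look like the following sketch; the command signature and the `ImportContactsChunk` job class are hypothetical names, not part of Laravel itself:

```php
<?php

namespace App\Console\Commands;

use App\Jobs\ImportContactsChunk;
use Illuminate\Console\Command;

// Hypothetical command: counts the lines in a CSV file, then dispatches
// one queued job per chunk, passing each job only its line range.
class ImportContacts extends Command
{
    protected $signature = 'contacts:import {path} {list_id} {--chunk=1000}';
    protected $description = 'Split a contacts CSV into chunks and queue one import job per chunk';

    public function handle()
    {
        $path = $this->argument('path');
        $listId = $this->argument('list_id');
        $chunkSize = (int) $this->option('chunk');

        // Count lines without loading the whole file into memory.
        $lines = 0;
        $handle = fopen($path, 'r');
        while (fgets($handle) !== false) {
            $lines++;
        }
        fclose($handle);

        // One job per chunk; each job re-reads only its own slice of the file.
        for ($start = 0; $start < $lines; $start += $chunkSize) {
            ImportContactsChunk::dispatch($path, $listId, $start, $chunkSize);
        }

        $this->info(ceil($lines / $chunkSize) . ' jobs dispatched.');
    }
}
```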