doudi7782 2014-01-29 23:22

Laravel Queues - Passing Data to the Queue

I have an array of roughly 8,000 stock tickers that I'm trying to queue up; the queue is meant to receive the array of tickers ($symbols[]) and then hand each one to a worker / consumer (whichever jargon you prefer).

Here's what my QueueController currently looks like:

class QueueController extends \BaseController {
    public function stocks()
    {
        $symbols = $this->select_symbols();
        Queue::push('StockQueue', array('symbols' => $symbols));
    }
    ...
}

From my QueueController, I'm calling a method to retrieve the list of stock symbols and pushing it to the StockQueue class as $data. Here's the fire() method in StockQueue:

public function fire($job, $data)
{
    $symbols = $data; // print_r() shows all of the symbols here...

    // ...but the calls below expect a single $symbol, not the whole list

    // Get quote data for the symbol
    $quote = $this->yql_get_quote($symbol);

    // Get key stats for the symbol
    $keystats = $this->yql_get_keystats($symbol);

    // Merge quote and key stats into an array
    $array[] = $quote;
    $array[] = $keystats;

    // Save data to DB
    $this->yql_save_results($array, $symbol);

    $job->delete();
}

This is not what I'm trying to achieve, though; what I need to do is pass each symbol, one by one, to the StockQueue class and have it process each one as its own task.

If I were to loop over the symbols in stocks() and push them one at a time, it would (from what I understand) push all ~8,000 jobs onto the queue immediately. Would that be detrimental, or is it the best way to do it? I haven't been able to find many examples of PHP-based RPC message queuing online, so I'm just as curious about best practices as I am about the correct process.
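In other words, something like this is what I have in mind (just a rough sketch; select_symbols() and the yql_* methods are my own helpers, and the 'symbol' key is arbitrary):

class QueueController extends \BaseController {
    public function stocks()
    {
        $symbols = $this->select_symbols();

        // Push one job per symbol instead of one job carrying the whole list
        foreach ($symbols as $symbol) {
            Queue::push('StockQueue', array('symbol' => $symbol));
        }
    }
}

And then the fire() method would only ever deal with a single ticker:

public function fire($job, $data)
{
    $symbol = $data['symbol']; // a single ticker per job

    $quote    = $this->yql_get_quote($symbol);
    $keystats = $this->yql_get_keystats($symbol);
    $this->yql_save_results(array($quote, $keystats), $symbol);

    $job->delete();
}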

With that being said, how can I fire up multiple workers for this queue? Say, I want 5 workers (depending on how many resources each one takes; I'll figure that out) to process these tasks in order to reduce the processing time by ~4/5ths. How would I do that?

Would I just launch php artisan queue:listen five times?

And, for clarity, I'm using beanstalkd for the message queue and supervisord for process monitoring.

I look forward to your advice and insight.


1 Answer

dongxuandong2045 2014-01-31 20:29

    Yep, just run more workers. Beanstalkd can hold a number of connections open from lots of workers and will make sure they each get different jobs. Just make sure each job completes successfully (if it doesn't, deal with it appropriately - or at least bury it to look at later), and give it enough time to complete, with some to spare, in the TTR (Time To Run) setting.
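    Something along these lines in the fire() method, for example (just a sketch; attempts(), release() and bury() are what I recall being available on Laravel's beanstalkd job wrapper, so double-check your version):

    public function fire($job, $data)
    {
        try {
            $symbol = $data['symbol'];

            $quote    = $this->yql_get_quote($symbol);
            $keystats = $this->yql_get_keystats($symbol);
            $this->yql_save_results(array($quote, $keystats), $symbol);

            // Only delete the job once the work has actually succeeded
            $job->delete();
        } catch (Exception $e) {
            // You'd probably want to log $e somewhere useful here
            if ($job->attempts() > 3) {
                // Give up after a few tries and bury it so you can inspect it later
                $job->bury();
            } else {
                // Put it back on the queue with a short delay and let a worker retry it
                $job->release(10);
            }
        }
    }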

    As for how to run more workers - yes, just increase the number of worker processes in Supervisord (numprocs=5 in the [program:NAME] section) and have them start. I tended to keep another (larger) pool of the same workers that didn't start automatically, so I could start a couple more by hand through the Supervisord control program as required.
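    A minimal [program:...] block along those lines would look something like this (the program name, directory and log path are just placeholders for your setup):

    [program:stockqueue]
    command=php artisan queue:listen
    directory=/var/www/your-app
    numprocs=5
    process_name=%(program_name)s_%(process_num)02d
    autostart=true
    autorestart=true
    stdout_logfile=/var/log/stockqueue.log

    Note that with numprocs above 1, supervisord requires process_name to include %(process_num)s, hence that line. For the extra on-demand pool, a second [program:...] block with autostart=false works; you can then bring its processes up manually with supervisorctl when you need them.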

    This answer was accepted by the question author.
