dongshiliao7990 2008-12-11 22:15

Better cURL performance with PHP and Linux

I'm the developer of twittertrend.net, and I was wondering whether there is a faster way to get the headers of a URL than curl_multi. I process over 250 URLs a minute, and I need a really fast way to do this from PHP. Alternatively, a bash script or a C application could fetch the headers and output them; is there anything that would be faster? I have primarily programmed only in PHP, but I can learn. Currently curl_multi (with 6 URLs provided at once) does an OK job, but I would prefer something faster. Ultimately I would like to stick with PHP for the MySQL storage and processing.

Thanks, James Hartig
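For reference, a minimal sketch of the curl_multi approach the question describes, issuing concurrent HEAD requests so only headers come back; the URL list, timeout, and output handling are illustrative assumptions, not details from the post:

```php
<?php
// Sketch: fetch headers for a batch of URLs concurrently with curl_multi.
// The URLs and timeout below are placeholders.
$urls = ['http://example.com/', 'http://example.org/'];

$mh = curl_multi_init();
$handles = [];
foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_NOBODY, true);         // HEAD request: headers only
    curl_setopt($ch, CURLOPT_HEADER, true);         // include headers in the output
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return data instead of printing
    curl_setopt($ch, CURLOPT_TIMEOUT, 10);
    curl_multi_add_handle($mh, $ch);
    $handles[$url] = $ch;
}

// Drive all transfers until every handle has finished.
do {
    $status = curl_multi_exec($mh, $running);
    if ($running) {
        curl_multi_select($mh); // wait for socket activity instead of busy-looping
    }
} while ($running && $status === CURLM_OK);

foreach ($handles as $url => $ch) {
    echo "=== $url ===\n" . curl_multi_getcontent($ch);
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);
```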


7 answers

  • doulianxi0587 2008-12-12 05:27

    I think you need a multi-process batch URL fetching daemon. PHP does not support multithreading, but there's nothing stopping you from spawning multiple PHP daemon processes.

    Having said that, PHP's lack of a proper garbage collector means that long-running processes can leak memory.

Run a daemon which spawns a configurable but controlled number of instances of the PHP program. Each instance has to be able to read a work queue, fetch the URLs, and write the results away in a multi-process-safe manner, so that multiple processes don't end up trying to do the same work.

    You'll want all of this to run autonomously as a daemon rather than from a web server. Really.
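A minimal sketch of the layout this answer describes, assuming CLI PHP on Linux with the pcntl extension; the urls.txt input, the in-memory queue split, and the commented-out store_result() call are illustrative assumptions rather than anything from the answer:

```php
<?php
// Sketch of the spawn-N-workers daemon described above. Assumes CLI PHP on
// Linux with the pcntl extension. The queue here is a plain array sliced per
// worker; a real daemon would pull from a shared, multi-process-safe queue
// (e.g. a database table with row locking) so no two workers grab the same URL.
$urls    = file('urls.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
$workers = 6; // a configurable, controlled number of child processes
$chunks  = array_chunk($urls, (int) max(1, ceil(count($urls) / $workers)));

$pids = [];
foreach ($chunks as $chunk) {
    $pid = pcntl_fork();
    if ($pid === -1) {
        die("fork failed\n");
    } elseif ($pid === 0) {
        // Child: work through its own slice of the queue, then exit so any
        // leaked memory is reclaimed by the OS instead of accumulating.
        foreach ($chunk as $url) {
            $ch = curl_init($url);
            curl_setopt($ch, CURLOPT_NOBODY, true);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
            curl_exec($ch);
            // store_result($url, curl_getinfo($ch)); // hypothetical MySQL write
            curl_close($ch);
        }
        exit(0);
    }
    $pids[] = $pid; // parent keeps track of its children
}

// Parent: wait for all workers, then loop or sleep, as a daemon would.
foreach ($pids as $pid) {
    pcntl_waitpid($pid, $status);
}
```

Restarting the children on each pass, as this sketch implicitly does, also sidesteps the memory-leak concern with long-running PHP processes mentioned above.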

This answer was accepted by the asker.

