douzai8285 2018-05-25 13:47
Viewed 764 times

Set a global PHP cURL upload and download speed limit?

I use cURL in several separate PHP scripts to download / upload files. Is there a way to set a GLOBAL (not per-curl-handle) upload / download rate limit?

Unfortunately, cURL only lets you set a speed limit per individual handle/session, and that limit is not dynamic.
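
For reference, the per-handle cap I mean looks like the sketch below; every handle (and therefore every script) only limits itself (the URL is just an example):

    $ch = curl_init('https://example.com/file.bin'); // example URL
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    // both limits are in bytes per second and apply only to this one handle
    curl_setopt($ch, CURLOPT_MAX_RECV_SPEED_LARGE, 100 * 1024); // ~100 KiB/s download cap
    curl_setopt($ch, CURLOPT_MAX_SEND_SPEED_LARGE, 100 * 1024); // ~100 KiB/s upload cap
    $data = curl_exec($ch);
    curl_close($ch);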

The server OS is Ubuntu; is there an alternative way to limit the cURL processes as a group?

Thanks


1 answer

  • drqyxkzbs21968684 2018-05-25 17:06

    curl/libcurl doesn't have any feature to share a bandwidth limit across curl_easy handles, much less across different processes. I propose a curl daemon to enforce the bandwidth limit instead, with the client looking something like

    class curl_daemon_response{
        public $stdout;
        public $stderr;
    }
    function curl_daemon(array $curl_options):curl_daemon_response{
        $from_big_uint64_t=function(string $i): int {
            $arr = unpack ( 'Juint64_t', $i );
            return $arr ['uint64_t'];
        };
        $to_big_uint64_t=function(int $i): string {
            return pack ( 'J', $i );
        };
        $conn = stream_socket_client("unix:///var/run/curl_daemon", $errno, $errstr, 3);
        if (!$conn) {
            throw new \RuntimeException("failed to connect to /var/run/curl_daemon! $errstr ($errno)");
        }
        stream_set_blocking($conn,true);
        // protocol: 8-byte big-endian length prefix, then the serialized curl options
        $curl_options=serialize($curl_options);
        fwrite($conn,$to_big_uint64_t(strlen($curl_options)).$curl_options);
        // response: length-prefixed stdout, then length-prefixed stderr
        $stdoutLen=$from_big_uint64_t(fread($conn,8));
        $stdout=fread($conn,$stdoutLen);
        $stderrLen=$from_big_uint64_t(fread($conn,8));
        $stderr=fread($conn,$stderrLen);
        $ret=new curl_daemon_response();
        $ret->stdout=$stdout;
        $ret->stderr=$stderr;
        fclose($conn);
        return $ret;
    }
    

    and the daemon looking something like

    <?php
    declare(strict_types=1);
    const MAX_DOWNLOAD_SPEED=1000*1024; // 1000 KiB per second, total budget shared across all handles
    const MINIMUM_DOWNLOAD_SPEED=100; // 100 bytes per second, the per-handle floor
    class Client{
        public $id;
        public $socket;
        public $curl;
        public $arguments;
        public $stdout;
        public $stderr;
    }
    $clients=[];
    $mh=curl_multi_init();
    $srv = stream_socket_server("unix:///var/run/curl_daemon", $errno, $errstr);
    if (!$srv) {
      throw new \RuntimeException("failed to create unix socket /var/run/curl_daemon! $errstr ($errno)");
    }
    stream_set_blocking($srv,false);
    while(true){
        getNewClients();
        $cc=count($clients);
        if(!$cc){
            sleep(1); // nothing to do.
            continue;
        }
        curl_multi_exec($mh, $running);
        if($running!==$cc){
            // at least 1 of the curls finished!
            while(false!==($info=curl_multi_info_read($mh))){
                $key=curl_getinfo($info['handle'],CURLINFO_PRIVATE);
                curl_multi_remove_handle($mh,$clients[$key]->curl);
                curl_close($clients[$key]->curl);
                $stdout=file_get_contents(stream_get_meta_data($clients[$key]->stdout)['uri']); // https://bugs.php.net/bug.php?id=76268
                fclose($clients[$key]->stdout);
                $stderr=file_get_contents(stream_get_meta_data($clients[$key]->stderr)['uri']); // https://bugs.php.net/bug.php?id=76268
                fclose($clients[$key]->stderr);
                $sock=$clients[$key]->socket;
                fwrite($sock,to_big_uint64_t(strlen($stdout)).$stdout.to_big_uint64_t(strlen($stderr)).$stderr);
                fclose($sock);
                echo "finished request #{$key}!
    ";
                unset($clients[$key],$key,$stdout,$stderr,$sock);
            }
            updateSpeed();
        }
        curl_multi_select($mh);
    }
    
    function updateSpeed(){
        global $clients;
        static $old_speed=-1;
        if(empty($clients)){
            return;
        }
        $clientsn=count($clients);
        $per_handle_speed=max(MINIMUM_DOWNLOAD_SPEED,intdiv(MAX_DOWNLOAD_SPEED,$clientsn)); // integer bytes/sec, so the === cache check below stays reliable
        if($per_handle_speed===$old_speed){
            return;
        }
        $old_speed=$per_handle_speed;
        echo "new per handle speed: {$per_handle_speed} - clients: {$clientsn}
    ";
        foreach($clients as $client){
            /** @var Client $client */
            curl_setopt($client->curl,CURLOPT_MAX_RECV_SPEED_LARGE,$per_handle_speed);
        }
    }
    
    
    function getNewClients(){
        global $clients,$srv,$mh;
        static $counter=-1;
        $newClients=false;
        while(false!==($new=@stream_socket_accept($srv,0))){ // @ silences the warning when no client is waiting
            ++$counter;
            $newClients=true;
            echo "new client! request #{$counter}
    ";
            stream_set_blocking($new,true);
            $tmp=new Client();
            $tmp->id=$counter;
            $tmp->socket=$new;
            $tmp->curl=curl_init();
            $tmp->stdout=tmpfile();
            $tmp->stderr=tmpfile();
            $size=from_big_uint64_t(fread($new,8));
            $arguments=fread($new,$size);
            $arguments=unserialize($arguments);
            assert(is_array($arguments));
            $tmp->arguments=$arguments;
            curl_setopt_array($tmp->curl,$arguments);
            curl_setopt_array($tmp->curl,array(
                CURLOPT_FILE=>$tmp->stdout,
                CURLOPT_STDERR=>$tmp->stderr,
                CURLOPT_VERBOSE=>1,
                CURLOPT_PRIVATE=>$counter
            ));
            curl_multi_add_handle($mh,$tmp->curl);
        }
        if($newClients){
            updateSpeed();
        }
    }
    
    function from_big_uint64_t(string $i): int {
        $arr = unpack ( 'Juint64_t', $i );
        return $arr ['uint64_t'];
    }
    function to_big_uint64_t(int $i): string {
        return pack ( 'J', $i );
    }
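
    A calling script would then use the curl_daemon() client function roughly like this (an untested sketch; the URL and the output path are placeholders):

    $response = curl_daemon(array(
        CURLOPT_URL => 'https://example.com/big_file.bin', // placeholder URL
        CURLOPT_FOLLOWLOCATION => true
    ));
    // the daemon routes the response body into CURLOPT_FILE, so it comes back in ->stdout,
    // and the CURLOPT_VERBOSE log comes back in ->stderr
    file_put_contents('/tmp/big_file.bin', $response->stdout); // placeholder path
    echo $response->stderr;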
    

    Note: this is completely untested code, because my development environment died literally a couple of hours ago and I wrote all of this in Notepad++. (My dev env won't boot at all; it's a VM, not sure what happened, but I haven't fixed it yet.)

    Also, the code is not at all optimized for large file transfers. If you need to support big transfers this way (sizes you don't want buffered in RAM, like gigabytes and up), modify the daemon to return file paths instead of writing all the data over the unix socket.
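
    One possible shape for that change, sketching only the daemon's completion branch (the spool directory is an assumption, and the client would need a matching change so it receives a path rather than the body):

    // instead of slurping the temp file into RAM and writing it over the socket,
    // move the body to a file the client can open and send back only the path
    $path = stream_get_meta_data($clients[$key]->stdout)['uri'];
    $final = '/var/spool/curl_daemon/request_' . $key . '.out'; // assumed spool dir
    copy($path, $final);
    fclose($clients[$key]->stdout);
    fwrite($sock, to_big_uint64_t(strlen($final)) . $final);
    // the client then streams from $final (and unlinks it when done)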

