drwg89980 2016-06-22 09:20

Memcached takes too long to respond

I have a strange problem with memcached. I have searched here, here, here, here and a few other places about my query. So I've got two pages, index.php and index2.php (please don't mind the file naming).

index.php contains the following code:

<?php
    $data = file_get_contents('test.txt');
    echo "done";

And index2.php contains the following code:

<?php
// Serve file contents from memcached; the key embeds the file's mtime,
// so a modified file automatically gets a fresh cache entry.
function file_get_contents_new($filename, $memcache){
    $time = filemtime($filename);
    $hash = md5($filename.$time);
    $content = $memcache->get($hash);
    if($content !== false){ // Memcached::get() returns false on a miss
        return $content;
    }
    $content = file_get_contents($filename);
    $memcache->set($hash, $content, 10000); // cache for 10000 seconds
    return $content;
}
$memcache = new Memcached();
$memcache->addServer('localhost', 11211);
file_get_contents_new('test.txt', $memcache);
echo "done";
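
A note on the cache check: Memcached::get() returns false on a miss, but a stored value could itself be falsy, e.g. an empty file. Where that matters, Memcached::getResultCode() distinguishes a genuine miss from a falsy hit; a minimal sketch:

$content = $memcache->get($hash);
if ($memcache->getResultCode() === Memcached::RES_SUCCESS) {
    return $content; // genuine hit, even if $content is "" or "0"
}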

There's one more file, test.txt, which contains the HTML source of a random site: around 58,967 characters, roughly 57.6 KB.

Now when I tried to profile index.php, I got the following profiling results (I'm using Xdebug for profiling and PhpStorm to view the data):

[profiler screenshot: file_get_contents shows 0% of the time]

Now when I try to profile index2.php, I get the following snapshot:

[profiler screenshot: $memcache->get() takes a lot of the time]

We can see clearly that $memcache->get() is taking a very long time, which doesn't make much sense since I'm running Memcached on my local machine.
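
To separate profiler overhead from the raw round-trip cost, a minimal timing sketch (the key name 'probe' and the loop count are arbitrary choices) can measure get() directly:

<?php
$memcache = new Memcached();
$memcache->addServer('localhost', 11211);
$memcache->set('probe', str_repeat('x', 58967)); // payload roughly the size of test.txt

$n = 1000;
$start = microtime(true);
for ($i = 0; $i < $n; $i++) {
    $memcache->get('probe');
}
printf("%.3f ms per get()\n", (microtime(true) - $start) * 1000 / $n);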

Then I thought maybe it was just some error and tried Apache's benchmarking tool, ab. The exact command I executed was ab -n 10000 -c 100 http://localhost/index.php, which was pretty fast; the results were:

Server Software:        Apache/2.4.20
Server Hostname:        localhost
Server Port:            80

Document Path:          /index.php
Document Length:        4 bytes

Concurrency Level:      100
Time taken for tests:   0.555 seconds
Complete requests:      10000
Failed requests:        0
Total transferred:      2030000 bytes
HTML transferred:       40000 bytes
Requests per second:    18025.33 [#/sec] (mean)
Time per request:       5.548 [ms] (mean)
Time per request:       0.055 [ms] (mean, across all concurrent requests)
Transfer rate:          3573.38 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.1      0       1
Processing:     1    5   0.8      5      19
Waiting:        1    5   0.7      5      19
Total:          2    5   0.7      5      19

Percentage of the requests served within a certain time (ms)
  50%      5
  66%      6
  75%      6
  80%      6
  90%      6
  95%      7
  98%      7
  99%      8
 100%     19 (longest request)

Then I ran the same test against the second page: ab -n 10000 -c 100 http://localhost/index2.php

Server Software:        Apache/2.4.20
Server Hostname:        localhost
Server Port:            80

Document Path:          /index2.php
Document Length:        4 bytes

Concurrency Level:      100
Time taken for tests:   9.044 seconds
Complete requests:      10000
Failed requests:        0
Total transferred:      2030000 bytes
HTML transferred:       40000 bytes
Requests per second:    1105.72 [#/sec] (mean)
Time per request:       90.439 [ms] (mean)
Time per request:       0.904 [ms] (mean, across all concurrent requests)
Transfer rate:          219.20 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.1      0       1
Processing:     6   79  71.1     76    5090
Waiting:        6   79  71.1     76    5090
Total:          7   79  71.1     76    5090

Percentage of the requests served within a certain time (ms)
  50%     76
  66%     78
  75%     79
  80%     81
  90%     85
  95%     89
  98%     93
  99%    107
 100%   5090 (longest request)

This is very slow, which is weird. Why is reading from memory slower than reading from secondary storage? Or is there some caching going on in file_get_contents?
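
One plausible contributor: file_get_contents is a single read per request, while every memcached get() here pays a fresh TCP connection plus a network round trip, even on localhost. A sketch of amortising the connect cost with php-memcached's persistent connections (the pool name 'fgc_pool' is an arbitrary choice):

<?php
// A persistent_id keeps the connection alive across requests within
// the same PHP worker process (mod_php or PHP-FPM).
$memcache = new Memcached('fgc_pool');
if (!count($memcache->getServerList())) {
    // Add the server only once per persistent pool, or the list grows on every request.
    $memcache->addServer('localhost', 11211);
}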

The computer I'm working on is pretty powerful and has the following configuration:

  • Manjaro OS (Linux kernel 4.1.26-1)
  • 16 GB of RAM
  • 256 GB SSD
  • Intel Core i7 processor

Edit: As @ShiraNai7 commented, I tried changing my server URL to 127.0.0.1; below are the results from the Apache benchmarking tool:

Server Software:        Apache/2.4.20
Server Hostname:        localhost
Server Port:            80

Document Path:          /index2.php
Document Length:        4 bytes

Concurrency Level:      100
Time taken for tests:   11.611 seconds
Complete requests:      10000
Failed requests:        0
Total transferred:      2030000 bytes
HTML transferred:       40000 bytes
Requests per second:    861.25 [#/sec] (mean)
Time per request:       116.111 [ms] (mean)
Time per request:       1.161 [ms] (mean, across all concurrent requests)
Transfer rate:          170.74 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    3  47.0      0    1009
Processing:     6  113  67.6    105     633
Waiting:        6  111  67.1    103     633
Total:          6  116  82.5    106    1197

Percentage of the requests served within a certain time (ms)
  50%    106
  66%    135
  75%    153
  80%    167
  90%    204
  95%    235
  98%    286
  99%    334
 100%   1197 (longest request)

This is an improvement, but not a big one. And I can't see why a DNS lookup would take so long, since localhost is in /etc/hosts and everything is sitting on my local machine.
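
If name resolution or the TCP stack is the suspect, a UNIX domain socket takes both out of the picture; a sketch, assuming memcached was started with something like memcached -s /tmp/memcached.sock:

<?php
$memcache = new Memcached();
// Port 0 tells php-memcached that the first argument is a UNIX socket path.
$memcache->addServer('/tmp/memcached.sock', 0);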

Edit: I also tried to see if there's any APC involved; I couldn't find it, but I did find the OPcache module. Is that why file_get_contents is faster?

I've hosted a JS Bin where you can see what my phpinfo output looks like on my machine.


1 answer

  • dsff788655567 2016-06-24 04:10

    Well, I found out the mystery behind this question. The first clue was that file_get_contents was very fast; even though I'm using an SSD, it shouldn't be that fast. So I dug around the whole night and found some interesting information.

    It's because file_get_contents is also returning cached data. PHP itself doesn't do any caching here, but Linux has a built-in file cache (the page cache) that makes repeated access to the same data extremely fast.

    Reference: page cache
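
    The effect is easy to observe with a rough sketch: after dropping the kernel caches (Linux-specific and needs root: sync; echo 3 | sudo tee /proc/sys/vm/drop_caches), the first read hits the disk and the repeats come from RAM:

    <?php
    foreach ([1, 2, 3] as $run) {
        $start = microtime(true);
        file_get_contents('test.txt');
        printf("run %d: %.3f ms\n", $run, (microtime(true) - $start) * 1000);
    }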

