dpepbjp126917 2013-09-08 18:18
Viewed 72 times
Accepted

How can I share objects between multiple GET requests in PHP?

I created a small and very simple REST-based webservice with PHP. This service gets data from a different server and returns the result. It's more like a proxy than a full service.

Client --(REST call)--> PHP Webservice --(Relay call)--> Remote server
                        <-- Return data ---

In order to keep costs as low as possible, I want to implement a caching table on the PHP webservice side by keeping data in server memory for a period of time and only re-requesting it after a timeout (let's say after 30 minutes).

In pseudo-code I basically want to do this:

$id = $_GET["id"];
$result = null;

// serve from the cache while the entry is younger than 30 minutes
if (isInCache($id) && !cacheExpired($id, 30)) {
    $result = getFromCache($id);
}
else {
    // otherwise relay the call and cache the fresh result
    $result = getDataFromRemoteServer($id);
    saveToCache($id, $result);
}

printData($result);

The code above should fetch data from a remote server, identified by an id. If the data is in the cache and 30 minutes have not passed yet, it should be read from the cache and returned as the result of the webservice call. If not, the remote server should be queried.
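
For illustration, this is roughly what those helper functions could look like if a shared-memory extension such as APCu happened to be enabled on the hosting plan. That is an assumption (see the edit below), the apcu_* functions are not available everywhere, and the "item_" key prefix is just an arbitrary choice:

<?php
// Sketch only: assumes the APCu extension is enabled on the host.
function isInCache($id) {
    return apcu_exists("item_$id");
}

function cacheExpired($id, $maxAgeMinutes) {
    $entry = apcu_fetch("item_$id");
    return $entry === false
        || (time() - $entry['storedAt']) > $maxAgeMinutes * 60;
}

function getFromCache($id) {
    $entry = apcu_fetch("item_$id");
    return $entry['data'];
}

function saveToCache($id, $data) {
    // store the payload together with its timestamp so cacheExpired() can check the age
    apcu_store("item_$id", ['data' => $data, 'storedAt' => time()]);
}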

While thinking about how to do this, I realized two important constraints:

  1. I don't want to use filesystem I/O operations because of performance concerns. Instead, I want to keep the cache in memory. So, no MySQL and no local file operations.
  2. I can't use sessions because the cached data must be shared across different users, browsers and internet connections worldwide.

So, if I could somehow share objects in memory between multiple GET requests, I think I would be able to implement this caching system pretty easily.

But how could I do that?


Edit: I forgot to mention that I cannot install any modules on that PHP server. It's a pure "webhosting-only" service.


2 answers

  • drktvjp713333 2013-09-08 18:24

    I would not implement the cache at the (PHP) application level. REST is HTTP, so you should put a caching HTTP proxy between the internet and the web server. The web server and the proxy can live on the same machine until the application grows (if you are worried about costs).

    I see two fundamental problems with application- or server-level caching:

    • Using memcached would lead to a situation where a user session has to be bound to the physical server where the memcache lives. This makes horizontal scaling a lot more complicated (and more expensive).

    • Software should be developed in layers, and caching should not be part of the application layer (or business logic). It is a different layer that uses specialized components. Since there are well-known solutions for this (an HTTP caching proxy), they should be used in favour of self-crafted ones.
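
    As a minimal sketch of how the two layers cooperate, the PHP script only needs to declare how long its response may be cached; any standard caching HTTP proxy in front (Varnish or Squid are just examples, pick whatever fits) can then serve repeated GETs itself. getDataFromRemoteServer() and printData() are the helpers from your pseudo-code:

    <?php
    // Mark the response as cacheable for 30 minutes (1800 seconds), matching
    // the timeout from the question, so an upstream caching HTTP proxy can
    // answer repeated GET requests without hitting this script at all.
    header('Cache-Control: public, max-age=1800');

    $id = $_GET["id"];
    $result = getDataFromRemoteServer($id);   // relay call, as in the question
    printData($result);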

    Accepted as the best answer by the asker.

