dongni8969 2011-03-07 23:23
Viewed 59 times
Accepted

How can I implement a semaphore in PHP without the PHP semaphore extension?

Question:

How can I implement a shared memory variable in PHP without the semaphore package (http://php.net/manual/en/function.shm-get-var.php)?

Context

  • I have a simple web application (actually a plugin for WordPress)
  • it receives a URL
  • it then checks the database to see whether that URL already exists
  • if not, it goes out and does some operations
  • and then it writes a record to the database with the URL as a unique entry

What happens in reality is that 4, 5, 6... sessions request the URL at the same time, and I get up to 9 duplicate entries for the URL in the database (possibly 9 because the processing time and database write of the first entry take just long enough to let 9 other requests fall through). After that, all requests correctly read that the record already exists, so that part is fine.
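For illustration, here is a minimal sketch of this check-then-insert flow, assuming a PDO connection and hypothetical table/column names (the real code is in the plugin file linked below). Every session that runs the SELECT before the first INSERT lands will proceed to insert its own copy:

<?php
// Naive check-then-insert: several concurrent requests can all pass the
// SELECT before any of them has inserted, so all of them write a row.
// Table and column names here are hypothetical.
$pdo = new PDO('mysql:host=localhost;dbname=wordpress', 'user', 'pass');

$uri = 'http://en.wikipedia.org/wiki/Book_of_the_Dead';

$stmt = $pdo->prepare('SELECT id FROM wp_favicons WHERE uri = ?');
$stmt->execute(array($uri));

if ($stmt->fetch() === false) {
    // ...slow favicon fetching/processing happens here, widening the race window...
    $insert = $pdo->prepare('INSERT INTO wp_favicons (uri) VALUES (?)');
    $insert->execute(array($uri)); // duplicates appear when several sessions reach this line
}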

Since it is a WordPress plugin, there will be many users on all kinds of shared hosting platforms with varying PHP builds and settings.

So I'm looking for a more generic solution. I can't use database or text file writes as a lock, since these will be too slow: while I am writing to the database, the next session will already have passed the check.

fyi: the database code: http://plugins.svn.wordpress.org/wp-favicons/trunk/includes/server/plugins/metadata_favicon/inc/class-favicon.php

update

Using a unique key on an MD5 hash of the URI, together with a try/catch around the insert, seems to work.

I found 1 duplicate entry with

SELECT uri, COUNT( uri ) AS NumOccurrences
FROM edl40_21_wpfavicons_1
GROUP BY uri
HAVING COUNT( uri ) > 1
LIMIT 0 , 30

So at first I thought it did not work, but the two entries turned out to be:

http://en.wikipedia.org/wiki/Book_of_the_dead
http://en.wikipedia.org/wiki/Book_of_the_Dead

(different capitalization, grin)
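For reference, a minimal sketch of the approach described in the update, assuming a PDO connection with exceptions enabled and hypothetical table/column names (the real schema lives in the plugin code linked above):

<?php
// Fix from the update: a fixed-length MD5 hash of the URI under a UNIQUE
// key, with a try/catch around the INSERT. Names are hypothetical.
//
// One-time schema change (e.g. in the plugin's activation routine):
//   ALTER TABLE wp_favicons
//     ADD COLUMN uri_hash CHAR(32) NOT NULL,
//     ADD UNIQUE KEY uniq_uri_hash (uri_hash);

$pdo = new PDO('mysql:host=localhost;dbname=wordpress', 'user', 'pass');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$uri = 'http://en.wikipedia.org/wiki/Book_of_the_Dead';

try {
    $stmt = $pdo->prepare('INSERT INTO wp_favicons (uri, uri_hash) VALUES (?, ?)');
    $stmt->execute(array($uri, md5($uri)));
    // This session "won" the race: do the slow favicon work here.
} catch (PDOException $e) {
    if ($e->getCode() == '23000') {
        // Integrity constraint violation: another session already inserted
        // this URI, so there is nothing more to do.
    } else {
        throw $e;
    }
}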


2 answers

  • dousao1175 2011-03-07 23:44

    This could be achieved with MySQL.

    You could do it explicitly by locking the table against read access. This will prevent any read access to the entire table, though, so it may not be preferable: http://dev.mysql.com/doc/refman/5.5/en/lock-tables.html
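    A rough sketch of that explicit-lock variant, assuming PDO and a hypothetical table name; LOCK TABLES ... WRITE blocks other sessions until UNLOCK TABLES, which serializes the check-then-insert but also stalls every other visitor while the lock is held:

    <?php
    // Explicit table lock: serializes the check-then-insert at the cost of
    // blocking all other sessions' reads and writes on the table.
    $pdo = new PDO('mysql:host=localhost;dbname=wordpress', 'user', 'pass');
    $uri = 'http://en.wikipedia.org/wiki/Book_of_the_Dead';

    $pdo->exec('LOCK TABLES wp_favicons WRITE');

    $stmt = $pdo->prepare('SELECT id FROM wp_favicons WHERE uri = ?');
    $stmt->execute(array($uri));
    if ($stmt->fetch() === false) {
        $pdo->prepare('INSERT INTO wp_favicons (uri) VALUES (?)')->execute(array($uri));
    }

    $pdo->exec('UNLOCK TABLES');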

    Otherwise, if the field in the table is declared unique, then when the next session tries to write the same URL to the table it will get an error; you can catch that error and continue, since there is nothing to do if the entry is already there. The only time wasted is when two or more sessions process the same URL, but the result is still a single record, because the database won't add the same unique URL again.

    As discussed in the comments, because a URL can be very long (and MySQL limits the key length of an index), a fixed-length unique hash of the URL can help overcome that issue.
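    As a variant of catching the error, MySQL's INSERT IGNORE silently skips the duplicate row, and the affected-row count tells you whether this session was the one that actually inserted it. A sketch, assuming the hypothetical uri_hash column with a UNIQUE key from above:

    <?php
    // INSERT IGNORE variant: a duplicate-key violation is skipped silently,
    // and rowCount() is 1 only for the session that actually inserted the row.
    $pdo = new PDO('mysql:host=localhost;dbname=wordpress', 'user', 'pass');
    $uri = 'http://en.wikipedia.org/wiki/Book_of_the_Dead';

    $stmt = $pdo->prepare('INSERT IGNORE INTO wp_favicons (uri, uri_hash) VALUES (?, ?)');
    $stmt->execute(array($uri, md5($uri)));

    if ($stmt->rowCount() === 1) {
        // This session created the record, so it does the slow favicon work.
    } else {
        // Another session already owns this URI; nothing more to do.
    }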

    This answer was selected by the asker as the accepted answer.
1 more answer (not shown)

