doudang4857 2012-03-28 06:49
Views: 39
Accepted

Executing asynchronous prepared statements in PHP with PostgreSQL and ignoring their results

I have some basic logging of user actions in a PostgreSQL database. To gain performance, I execute all log INSERTs asynchronously, so the script can continue without waiting for the log entry to be created.

I use prepared statements everywhere to prevent SQL injection, and I load them on an as-needed basis.

The problem arises when I prepare a statement while there are still pending results from a previous async query. (PostgreSQL complains that there are pending results to be fetched before a new statement can be prepared.)

So as a workaround, I gather all pending results (if any) and discard them to keep PHP and PostgreSQL happy before preparing any statement.

But with that workaround (as I see it), I lose the performance I could gain by executing asynchronously, since I have to gather the results anyway.

Is there any way to asynchronously execute a prepared statement and deliberately tell PostgreSQL to ignore the results?

Inside my PostgreSQL class, I am calling prepared statements with

pg_send_execute($this->resource, $name, $params);

and preparing them with

// Just in case there are pending results (workaround)
while (pg_get_result($this->resource) !== FALSE);
$stmt = pg_prepare($this->resource, $stmtname, $query);
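One way to reduce the blocking is to make the prepare itself asynchronous with pg_send_prepare(), and to drain (and free) any pending results in a single place first. A minimal sketch, untested, using a standalone function in place of the class method from the original post:

```php
<?php
// Sketch: drain leftover results from earlier async INSERTs, then queue
// the PREPARE without waiting for the server's reply.
// $conn is an open pgsql connection resource.
function asyncPrepare($conn, string $stmtname, string $query): bool
{
    // Discard whatever results are still pending; since the earlier
    // queries were INSERTs, their contents do not matter.
    while (($res = pg_get_result($conn)) !== false) {
        pg_free_result($res);
    }

    // pg_send_prepare() sends the PREPARE and returns immediately;
    // its (empty) result will be drained on the next call, together
    // with the results of any pg_send_execute() calls in between.
    return pg_send_prepare($conn, $stmtname, $query);
}
```

Note that pg_get_result() still waits for a query that is currently executing to finish; what this saves is the extra synchronous round-trip that pg_prepare() would add on top.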

Any help will be appreciated.

UPDATE: All the asynchronous queries I am using are INSERTs only, so it should be safe (theoretically) to ignore their results.


1 Answer

  • douyu9012 2012-05-05 16:45

The only thing that is asynchronous here is your communication with the PostgreSQL server: on a single connection, the database still has to process everything sequentially.

    My proposal:

If you have to use PostgreSQL for logging, use a separate database connection for logging purposes, with a connection pool sitting between your script and the database. Authentication in PostgreSQL is costly and takes some time; a pool cuts that down. Acquiring the second connection still takes some time, but with a pool it will be faster than without one.
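The two-connection setup could be sketched as follows (a pool such as PgBouncer on port 6432 is assumed; all host names, ports, database names and the statement name are placeholders, not from the original post):

```php
<?php
// Sketch: a dedicated connection for logging, routed through a pooler,
// so pending async log results never block prepares on the main connection.
$app_conn = pg_connect("host=localhost port=5432 dbname=app user=app");
$log_conn = pg_connect("host=localhost port=6432 dbname=app user=logger");

// Prepare the (hypothetical) log statement once on the logging connection.
pg_prepare($log_conn, 'log_insert',
    'INSERT INTO user_log (user_id, action) VALUES ($1, $2)');

// Fire-and-forget: queue the INSERT and let the script continue.
pg_send_execute($log_conn, 'log_insert', [42, 'login']);
```

The design point is isolation: results piling up on $log_conn are harmless, because all prepares and synchronous queries go through $app_conn.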

Depending on your reliability requirements, you should use autocommit (so you never lose a log entry when PHP crashes). You may want to use an UNLOGGED table (available since PostgreSQL 9.1) if you don't care about reliability on the database end (inserts are faster because the data skips the WAL), or if you don't use replication or don't need the logs replicated.

As a speed optimization, your log table should have no indexes, because they would have to be updated on each insert. If you need indexes, create a second table and move the data over in batches (every X minutes or every hour).
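The staging-plus-archive idea above could be sketched like this (table and column names are illustrative; the DELETE ... RETURNING CTE moves rows atomically, so entries inserted concurrently are not lost):

```php
<?php
// Sketch: an index-free UNLOGGED staging table (PostgreSQL 9.1+) that is
// periodically drained into an indexed archive table.
// $conn is an open pgsql connection resource.
pg_query($conn, "
    CREATE UNLOGGED TABLE IF NOT EXISTS log_staging (
        user_id    integer,
        action     text,
        created_at timestamptz DEFAULT now()
    )
");

// Run from a cron job every few minutes or every hour: the CTE deletes
// and returns the staged rows in one statement, then inserts them into
// the archive, so no row can slip between the two steps.
pg_query($conn, "
    WITH moved AS (DELETE FROM log_staging RETURNING *)
    INSERT INTO log_archive SELECT * FROM moved
");
```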

This answer was accepted by the asker.
