2015-01-08 06:10 · 41 views


I run a PHP script that selects from and inserts into a PostgreSQL database.

The problem is that when the script selects more than 400,000 rows,

it repeats the insertion for the same rows!

I found this error in the PostgreSQL log:

2014-12-21 13:14:53 AST LOG:  checkpoints are occurring too frequently (7 seconds apart)
2014-12-21 13:14:53 AST HINT:  Consider increasing the configuration parameter "checkpoint_segments".

I changed the configuration to:

- Checkpoints -
checkpoint_segments = 100       # in logfile segments, min 1, 16MB each
checkpoint_timeout = 1h     # range 30s-1h
checkpoint_completion_target = 0.9  # checkpoint target duration, 0.0 - 1.0
checkpoint_warning = 30s        # 0 disables

but the problem still occurs. Can anyone let me know how to fix this?
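For what it's worth, here is one way to confirm the new settings are actually in effect and to watch checkpoint activity (a sketch; `pg_reload_conf()` and the `pg_stat_bgwriter` view exist in PostgreSQL 9.x, but whether a reload suffices depends on the parameter):

```sql
-- Reload the configuration without a full restart
SELECT pg_reload_conf();

-- Confirm the values the server is really using
SHOW checkpoint_segments;
SHOW checkpoint_timeout;

-- checkpoints_req climbing much faster than checkpoints_timed means
-- WAL segments are still filling up before checkpoint_timeout expires
SELECT checkpoints_timed, checkpoints_req FROM pg_stat_bgwriter;
```

Note that the checkpoint warning is a performance hint about WAL churn; by itself it would not cause duplicate inserts.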


I start my script with a prepared query:

// Connect to the specified database
$dbconn = pg_connect("host=localhost port=5432 dbname=postgis user=postgres password=****");
if (!$dbconn) {
    die("Error in connection: " . pg_last_error());
}
// Connected to the database

            // We prepare the PostgreSQL next-messages query. At this stage it is sent off to the database server.

            $nextMessageQuery = pg_prepare($dbconn, 'selectNextMessage', "
                WITH history AS (
                    select id, userid, assigning_date from userid_history
                    where id = $2 and assigning_date < $4
                ),
                accounttbl AS (
                    select timestamp, userid, position from account
                    where timestamp > $1 and position is not null
                      and timestamp between $3 and $4
                )
                select p.value1, value2.value3, p.value4, m.userid
                from (
                    SELECT min(next.timestamp) AS value3, next.userid from (
                        select accounttbl.timestamp, accounttbl.userid
                        from history
                        inner join accounttbl
                            on accounttbl.userid = history.userid
                           and accounttbl.timestamp > history.assigning_date
                           and ( accounttbl.timestamp < (select min(assigning_date) from history h
                                                         where h.assigning_date > history.assigning_date)
                                 or (select min(assigning_date) from history h
                                     where h.assigning_date > history.assigning_date) is null )
                        where timestamp > $1
                    ) next GROUP BY next.userid
                ) value2
                JOIN nmea m on m.timestamp = value2.value3 and m.userid = value2.userid, places p
                WHERE m.position is not null and ST_DWithin(m.position, p.position, 0.0217130577252428)
                order by ST_Distance(p.position, m.position)");


This is just to show you how complicated my queries are.

The script then selects rows and inserts or updates into a new table based on many comparison statements,

all in one complicated script. Do I have to post it all?
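If the duplicates come from re-running the insert step, one common guard on this PostgreSQL version (a sketch with hypothetical table and column names; `INSERT ... ON CONFLICT` only arrived in 9.5) is to insert only rows that are not already present:

```sql
-- Insert only rows not yet in the target table;
-- (timestamp, userid) is an assumed natural key here
INSERT INTO new_table (timestamp, userid, value)
SELECT s.timestamp, s.userid, s.value
FROM   staging s
WHERE  NOT EXISTS (
    SELECT 1 FROM new_table t
    WHERE  t.timestamp = s.timestamp AND t.userid = s.userid
);
```

A unique index on the assumed key would also turn silent duplicates into visible errors.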


1 answer

  • Accepted answer
    dream6120 2015-01-13 10:36

    Thank you all.

    As Craig mentioned, there was a bug in the data itself, not in the data loading.

    Actually, there are duplicates among certain records that we select!

    So I used:

    SELECT DISTINCT ON (time, id) time, id, .... FROM table
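    For context, `DISTINCT ON` keeps only the first row of each `(time, id)` group, and pairing it with `ORDER BY` controls which duplicate survives (a sketch with a hypothetical table):

    ```sql
    -- Keep, for each (time, id) pair, the row with the smallest value;
    -- the ORDER BY must start with the DISTINCT ON expressions
    SELECT DISTINCT ON (time, id) time, id, value
    FROM   readings
    ORDER  BY time, id, value;
    ```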
