douyueqing1530 2013-09-28 04:38
Views: 37
Accepted

MySQL: inserting a large amount of data

For now I'm trying to optimize some code.

What is a better way to insert a large amount of data into a table?

Consider this code:

$arrayOfdata = executeQuery("SELECT c.id FROM table1");

It fetches every row from table1 into an array and then inserts each one into the target table:

 private function insertSomeData($arrayOfdata, $other_id) {
      foreach ($arrayOfdata as $data) {
          $this->executeQuery(
              'INSERT INTO table (other_id, other2_id, is_transfered)
               VALUES (' . $other_id . ', ' . $data['id'] . ', 0)'
          );
      }
  }

I know this code is very slow if table1 has 500k rows, so I tried putting everything into one SQL query:

INSERT INTO 
table (
    other_id,
    other2_id,
    is_transfered
) 
SELECT 
    'other_id', c.id, 0 
FROM table1 c
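For reference, the single-query version above could be run from PHP with the other_id value bound as a parameter instead of quoted into the SQL string. This is only a sketch; the PDO connection details are placeholder assumptions, not part of the original code.

```php
<?php
// Hypothetical connection details; replace with your own.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'password');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$other_id = 1; // example value

// One INSERT ... SELECT copies all rows server-side in a single statement;
// $other_id is bound as a parameter rather than concatenated into the SQL.
$stmt = $pdo->prepare(
    'INSERT INTO `table` (other_id, other2_id, is_transfered)
     SELECT :other_id, c.id, 0 FROM table1 c'
);
$stmt->execute([':other_id' => $other_id]);
```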

I read that inserting too much data at once can cause MySQL to slow down or time out. I tried this code on 500k rows on my local machine and it ran fine.

Is there any way this could cause a problem when a large amount of data is inserted? Are there other ways to insert faster that would not make the server use too many resources?


7 answers

  • douketangyouzh5219 2013-09-28 05:59

Assuming that you are using the InnoDB engine (the default in recent MySQL versions), you should simply use transactions: wrap your insert loop in a BEGIN; ... COMMIT; block.

By default, every statement runs as its own transaction, and the server must make sure the data safely reaches disk before continuing to the next statement. If you start a transaction, do many inserts, and then commit, the server only has to flush all the data to disk once, at commit time. On modern hardware, this could amount to only a few disk operations instead of 500k of them.
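In PHP this batching could look like the following sketch, keeping the question's per-row inserts but wrapping them in a single transaction (the PDO connection and the surrounding variables are assumptions for illustration):

```php
<?php
// Hypothetical connection details; replace with your own.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'password');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

$other_id = 1;                                // example value
$arrayOfdata = [['id' => 101], ['id' => 102]]; // would come from table1

$pdo->beginTransaction();
try {
    foreach ($arrayOfdata as $data) {
        // Casting to int guards the concatenated values.
        $pdo->exec('INSERT INTO `table` (other_id, other2_id, is_transfered)
                    VALUES (' . (int)$other_id . ', ' . (int)$data['id'] . ', 0)');
    }
    $pdo->commit();   // all rows are flushed to disk here, once
} catch (Exception $e) {
    $pdo->rollBack(); // nothing is written if any insert fails
    throw $e;
}
```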

Another consideration is to use prepared statements. The server has to parse every SQL statement before executing it. This parsing is not free, and it is not unusual for parsing to cost more than the actual query execution. Normally this parsing is done every single time, and in your case it would be done 500k times. If you use prepared statements, parsing/preparation is done only once, and the cost of executing the statement is just the disk write (which is reduced further inside an active transaction, because the server can batch that write by delaying it until the transaction commits).
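A sketch combining both ideas: one statement prepared (and parsed) once, then executed repeatedly inside a single transaction. The connection details and variables are assumptions, not from the original post.

```php
<?php
// Hypothetical connection details; replace with your own.
$pdo = new PDO('mysql:host=localhost;dbname=test', 'user', 'password');
$pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
// Use real server-side prepares so the statement is parsed only once.
$pdo->setAttribute(PDO::ATTR_EMULATE_PREPARES, false);

$other_id = 1;                                // example value
$arrayOfdata = [['id' => 101], ['id' => 102]]; // would come from table1

// Prepared once; only the bound values change per execution.
$stmt = $pdo->prepare(
    'INSERT INTO `table` (other_id, other2_id, is_transfered)
     VALUES (:other_id, :other2_id, 0)'
);

$pdo->beginTransaction();
foreach ($arrayOfdata as $data) {
    $stmt->execute([':other_id' => $other_id, ':other2_id' => $data['id']]);
}
$pdo->commit();
```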

The total improvement from these methods can be dramatic: I have seen situations where using transactions reduced total run time from 30 minutes to 20 seconds.

This answer should give you some idea of how to use transactions in PHP with MySQL.

This answer was selected by the asker as the best answer.
