Batch insert into Oracle losing data!!!!!???

        String[] insertCompIdarray = insertCompId.split(",");
        String[] ttidarray = ttId.split(",");

        List<TestInsert> tilist = new ArrayList<TestInsert>();

        try {
            // Build one row per (ticketTypeId, componentId) pair.
            for (int i = 0; i < ttidarray.length; i++) {
                for (int j = 0; j < insertCompIdarray.length; j++) {
                    TestInsert ti = new TestInsert();
                    ti.setTicketTypeId(ttidarray[i]);
                    ti.setTicketCodeType(ticketCode);
                    ti.setComponentId(insertCompIdarray[j]);
                    tilist.add(ti);
                }
            }

            final int batchSize = 65500;
            sqlMapClient.startTransaction();
            for (int i = 0; i < tilist.size(); i++) {
                if (i % batchSize == 0) {
                    sqlMapClient.startBatch();
                }
                sqlMapClient.insert("EFT_TICKET_TYPE_TICKET_CODE.testInsertTicketCodeRelation", tilist.get(i));
                if ((i + 1) % batchSize == 0) {
                    sqlMapClient.executeBatch();
                }
            }

            // Flush the final partial batch, but only if one is still open;
            // unconditionally calling executeBatch() here would flush an
            // empty/stale batch whenever the row count happens to be an
            // exact multiple of batchSize.
            if (tilist.size() % batchSize != 0) {
                sqlMapClient.executeBatch();
            }
            sqlMapClient.commitTransaction();
        }
        catch (SQLException e) {
            throw e;
        }
        finally {
            sqlMapClient.endTransaction();
        }
 <typeAlias alias = "inserttest" type = "com.hkt.engineering.oneoss.ticketing.domain.TestInsert"/> 
    <insert id="testInsertTicketCodeRelation" parameterClass="inserttest" >
        insert into TICKET_TYPE_TICKET_CODE
            (
            TICKET_TYPE_TICKET_CODE_ID,
            TICKET_TYPE_ID,
            TICKET_CODE_TYPE,
            COMPONENT_ID
            )
            values
            (
            TICKET_TYPE_AND_CODE_ID_SEQ.nextVal,
            #ticketTypeId#,
            #ticketCodeType#,
            #componentId#
            )
    </insert>

Oracle batch insert: when batchSize is set to 100000 (100k), data goes missing. Out of 1,018,160 rows, only 300,000-odd end up inserted. With batchSize at 65000 or below there is no loss and all 1,018,160 rows are inserted.
What is the cause of this data loss?
Also, inserting the full data set takes about 147 s. Is there any way to shorten this insert time?
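One likely explanation: some older Oracle JDBC drivers reportedly track the statement count of a single batch in a 16-bit field, so a batch of 100,000 rows behaves like one of 100000 mod 65536 = 34,464 rows. Ten full 100,000-row batches plus the 18,160-row remainder would then insert roughly 10 × 34,464 + 18,160 = 362,800 rows, which matches the "300,000-odd" observed; keeping each executeBatch() below 65,535 rows avoids the overflow. A minimal sketch of the chunk arithmetic (plain Java, database calls omitted, class and method names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkedBatchDemo {

    // Split [0, total) into half-open ranges of at most chunkSize rows.
    // Each range would be flushed with one executeBatch() (and, if memory
    // pressure matters, a commit) before starting the next chunk.
    static List<int[]> chunks(int total, int chunkSize) {
        List<int[]> out = new ArrayList<>();
        for (int from = 0; from < total; from += chunkSize) {
            out.add(new int[] { from, Math.min(from + chunkSize, total) });
        }
        return out;
    }

    public static void main(String[] args) {
        // 1,018,160 rows in chunks of 50,000 -- safely below the
        // 65,535-statement ceiling reported for older Oracle drivers.
        List<int[]> parts = chunks(1018160, 50000);
        long covered = 0;
        for (int[] p : parts) {
            covered += p[1] - p[0];
        }
        System.out.println(parts.size() + " chunks, " + covered + " rows");
        // prints: 21 chunks, 1018160 rows
    }
}
```

Every row falls in exactly one chunk, so no data can be silently dropped by an oversized batch.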

2 answers


insert into FUEL_MATERIAL (MATERIAL_HEAT, MATERIAL_NOT_HEAT, MATERIAL_NUMBER, MATERIAL_ID, MATERIAL_CODE, MATERIAL_NAME, MATERIAL_MANUFACTOR, MATERIAL_VALIDDATE, MATERIAL_SULFUR, MATERIAL_NOT_SULFUR,
MATERIAL_GREY, MATERIAL_NOT_GREY, MATERIAL_VOLATILE, MATERIAL_NOT_VOLATILE, MATERIAL_HYDROGEN, MATERIAL_NOT_HYDROGEN, MATERIAL_CREATORID, MATERIAL_CREATORNAME, MATERIAL_CREATEDATE, MATERIAL_COMP_ID,
MATERIAL_COMP_NAME, MATERIAL_FACTORY_ID, MATERIAL_FACTORY_NAME)
(select
    #{item.materialHeat,jdbcType=VARCHAR},
    #{item.materialNotHeat,jdbcType=VARCHAR},
    #{item.materialNumber,jdbcType=VARCHAR},
    #{item.materialId,jdbcType=VARCHAR},
    #{item.materialCode,jdbcType=VARCHAR},
    #{item.materialName,jdbcType=VARCHAR},
    #{item.materialManufactor,jdbcType=VARCHAR},
    #{item.materialValiddate,jdbcType=TIMESTAMP},
    #{item.materialSulfur,jdbcType=VARCHAR},
    #{item.materialNotSulfur,jdbcType=VARCHAR},
    #{item.materialGrey,jdbcType=VARCHAR},
    #{item.materialNotGrey,jdbcType=VARCHAR},
    #{item.materialVolatile,jdbcType=VARCHAR},
    #{item.materialNotVolatile,jdbcType=VARCHAR},
    #{item.materialHydrogen,jdbcType=VARCHAR},
    #{item.materialNotHydrogen,jdbcType=VARCHAR},
    #{item.materialCreatorid,jdbcType=VARCHAR},
    #{item.materialCreatorname,jdbcType=VARCHAR},
    #{item.materialCreatedate,jdbcType=TIMESTAMP},
    #{item.materialCompId,jdbcType=VARCHAR},
    #{item.materialCompName,jdbcType=VARCHAR},
    #{item.materialFactoryId,jdbcType=VARCHAR},
    #{item.materialFactoryName,jdbcType=VARCHAR}
from dual)

This is my batch-insert SQL: pass in a List and let the mapper config iterate over it itself; the insert is fast.
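The `<foreach>` wrapper that does the iterating appears to have been stripped by the forum (angle-bracket tags are often eaten as HTML). Assuming the standard MyBatis multi-row `union all ... from dual` pattern, the complete mapper statement would look roughly like the sketch below; the statement `id` is hypothetical and the column list is abbreviated to three columns for brevity:

```xml
<!-- Sketch only: standard MyBatis multi-row insert via union all.
     The id is hypothetical; the real column/property list is the full
     23-column set shown in the fragment above. -->
<insert id="batchInsertFuelMaterial" parameterType="java.util.List">
    insert into FUEL_MATERIAL
        (MATERIAL_ID, MATERIAL_CODE, MATERIAL_NAME)
    <foreach collection="list" item="item" separator="union all">
        (select
            #{item.materialId,jdbcType=VARCHAR},
            #{item.materialCode,jdbcType=VARCHAR},
            #{item.materialName,jdbcType=VARCHAR}
         from dual)
    </foreach>
</insert>
```

Note that Oracle also caps how large a single statement can get, so very large lists still need to be split into moderate chunks before calling this statement.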


