1. Loop Insert
public void insert(List<User> userList) {
    userList.forEach(user -> userDao.insert(user));
}
<insert id="insert">
INSERT INTO `demo`.`user` (`username`, `address`, `remark`, `age`, `create_time`)
VALUES (#{user.username,jdbcType=VARCHAR},
#{user.address,jdbcType=VARCHAR},
#{user.remark,jdbcType=VARCHAR},
#{user.age,jdbcType=INTEGER},
now())
</insert>
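Because the statement references #{user.username} and the other #{user.xxx} expressions, the single-row mapper method is assumed to expose its argument under the name user via @Param. The original does not show the DAO interface, so the shape below is only an assumed sketch:

import org.apache.ibatis.annotations.Param;

public interface UserDao {
    // Single-row insert; @Param("user") is what makes #{user.xxx} resolve in the XML above.
    int insert(@Param("user") User user);
}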
2. Batch Insert
Whether to split the list into groups of 100, 200, or some other size has to be tuned through repeated testing to find the best throughput. Note that the foreach below concatenates all rows of a group into a single statement, so an overly large group can also run into MySQL's max_allowed_packet limit.
public void insertBatch(List<User> userList) {
    // Split the list into chunks of 100 and insert each chunk with one multi-row statement
    List<List<User>> partition = ListUtil.partition(userList, 100);
    for (List<User> users : partition) {
        userDao.insertBatch(users);
    }
}
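ListUtil.partition is assumed here to come from Hutool; if that dependency is not wanted, a plain subList-based split does the same job. A minimal sketch (the helper name is chosen for illustration):

import java.util.ArrayList;
import java.util.List;

// Plain-Java replacement for ListUtil.partition: consecutive chunks of at most `size` elements.
public static <T> List<List<T>> partition(List<T> list, int size) {
    List<List<T>> chunks = new ArrayList<>();
    for (int i = 0; i < list.size(); i += size) {
        chunks.add(list.subList(i, Math.min(i + size, list.size())));
    }
    return chunks;
}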
<insert id="insertBatch">
    INSERT INTO `demo`.`user` (`username`, `address`, `remark`, `age`, `create_time`)
    VALUES
    <foreach collection="users" item="user" separator=",">
        (#{user.username,jdbcType=VARCHAR},
         #{user.address,jdbcType=VARCHAR},
         #{user.remark,jdbcType=VARCHAR},
         #{user.age,jdbcType=INTEGER},
         now())
    </foreach>
</insert>
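Likewise, collection="users" implies the batch method's parameter is bound under that name; an assumed signature, continuing the UserDao sketch above:

// Assumed mapper signature for the <foreach> statement above.
int insertBatch(@Param("users") List<User> users);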
3. BatchExecutor Insert
MyBatis provides three SQL executor types: SIMPLE (the default), REUSE, and BATCH:
- SIMPLE (SimpleExecutor): equivalent to JDBC's PreparedStatement.execute(sql); the statement is closed with PreparedStatement.close() as soon as it has executed.
- REUSE (ReuseExecutor): also calls PreparedStatement.execute(sql), but instead of closing the statement it caches it in a Map<String, Statement>, keyed by the SQL template, so it can be reused.
- BATCH (BatchExecutor): equivalent to JDBC's PreparedStatement.addBatch(sql); the SQL is only added to the batch plan rather than executed, so no affected-row count is returned at that point. The statements actually run only when PreparedStatement.executeBatch() is invoked.
@Autowired
private SqlSessionFactory sqlSessionFactory;

@Override
public void insertBatchType(List<User> userList) {
    // Open a session whose executor adds statements to a JDBC batch instead of executing them one by one
    SqlSession sqlSession = sqlSessionFactory.openSession(ExecutorType.BATCH);
    UserDao mapper = sqlSession.getMapper(UserDao.class);
    try {
        for (User user : userList) {
            mapper.insert(user);
        }
        // commit() flushes the batched statements and commits the transaction
        sqlSession.commit();
    } catch (Exception e) {
        System.out.println("Batch import failed, rolling back the transaction");
        sqlSession.rollback();
    } finally {
        sqlSession.close();
    }
}
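For very large lists it can help to flush the batch periodically instead of letting it grow until commit(); SqlSession.flushStatements() sends the pending batch to the database and returns the BatchResult list with the update counts. A sketch of that variation (the method name and the chunk size of 1000 are assumptions, not from the original):

// Variation of insertBatchType that flushes the JDBC batch every 1000 rows.
public void insertBatchTypeChunked(List<User> userList) {
    try (SqlSession sqlSession = sqlSessionFactory.openSession(ExecutorType.BATCH)) {
        UserDao mapper = sqlSession.getMapper(UserDao.class);
        int count = 0;
        for (User user : userList) {
            mapper.insert(user);
            if (++count % 1000 == 0) {
                // Executes the accumulated batch and returns List<BatchResult> with update counts
                sqlSession.flushStatements();
            }
        }
        sqlSession.commit(); // flushes any remaining statements and commits
    }
}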
4. JDBC Insert
Of course, raw JDBC can also be used for batch insertion via statement.addBatch(), and it is very fast as well:
@Resource(name = "dataSource")
private DataSource dataSource;

@Override
public void insertJdbc(List<User> userList) throws SQLException {
    Connection connection = null;
    PreparedStatement statement = null;
    try {
        connection = dataSource.getConnection();
        connection.setAutoCommit(false);
        String sql = "INSERT INTO `user` (`username`, `address`, `remark`, `age`, `create_time`) " +
                "VALUES (?,?,?,?,now()) ";
        statement = connection.prepareStatement(sql);
        for (User user : userList) {
            statement.setString(1, user.getUsername());
            statement.setString(2, user.getAddress());
            statement.setString(3, user.getRemark());
            statement.setInt(4, user.getAge());
            statement.addBatch();
        }
        statement.executeBatch();
        connection.commit();
    } catch (SQLException throwables) {
        throwables.printStackTrace();
        if (connection != null) {
            connection.rollback();
        }
    } finally {
        // Close in reverse order; guard against a failed getConnection()/prepareStatement()
        if (statement != null) {
            statement.close();
        }
        if (connection != null) {
            connection.close();
        }
    }
}
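One caveat when the database is MySQL: with Connector/J, executeBatch() is only rewritten into true multi-row INSERTs when rewriteBatchedStatements=true is set on the JDBC URL; without it the driver still sends the rows one at a time, and the timings below can differ considerably. An example URL (host, port, and schema are placeholders for illustration):

// Assumed MySQL Connector/J URL; adjust host, port, and schema to your environment.
String jdbcUrl = "jdbc:mysql://localhost:3306/demo"
        + "?rewriteBatchedStatements=true"   // rewrite addBatch() rows into multi-row INSERTs
        + "&useUnicode=true&characterEncoding=utf8";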
5. Performance Test
Prepare 600,000 records and measure how long each of the four insert approaches above takes:
@Test
public void test_for_user() throws SQLException {
    List<User> userList = new ArrayList<>();
    for (int i = 0; i < 600000; i++) {
        User user = new User();
        user.setUsername("张三" + i);
        user.setAddress("上海" + i);
        user.setRemark("备注" + i);
        user.setAge(i);
        userList.add(user);
    }
    StopWatch stopWatch = new StopWatch();

    stopWatch.start("Loop insert");
    userService.insert(userList);
    stopWatch.stop();
    System.out.println(stopWatch.getLastTaskName() + ":" + stopWatch.getLastTaskTimeMillis());

    stopWatch.start("Batch insert");
    userService.insertBatch(userList);
    stopWatch.stop();
    System.out.println(stopWatch.getLastTaskName() + ":" + stopWatch.getLastTaskTimeMillis());

    stopWatch.start("BatchType insert");
    userService.insertBatchType(userList);
    stopWatch.stop();
    System.out.println(stopWatch.getLastTaskName() + ":" + stopWatch.getLastTaskTimeMillis());

    stopWatch.start("JDBC-BatchType insert");
    userService.insertJdbc(userList);
    stopWatch.stop();
    System.out.println(stopWatch.getLastTaskName() + ":" + stopWatch.getLastTaskTimeMillis());
}
Loop insert:1272111
Batch insert:27990
BatchType insert:28143
JDBC-BatchType insert:15976
The results speak for themselves. From fastest to slowest:
JDBC-BatchType insert > Batch insert ≈ BatchType insert > Loop insert. The foreach batch and the BatchExecutor approach finish within about 150 ms of each other, while the row-by-row loop is roughly 45 times slower than either.