Speeding up MySQL dumps and imports

Question

Are there any documented techniques for speeding up MySQL dumps and imports?

This would include my.cnf settings, using ramdisks, etc.

Looking only for documented techniques, preferably with benchmarks showing the potential speed-up.

Answer

http://www.maatkit.org/ has mk-parallel-dump and mk-parallel-restore.

If you've been wishing for multi-threaded mysqldump, wish no more. This tool dumps MySQL tables in parallel. It is a much smarter mysqldump that can either act as a wrapper for mysqldump (with sensible default behavior) or as a wrapper around SELECT INTO OUTFILE. It is designed for high-performance applications on very large data sizes, where speed matters a lot. It takes advantage of multiple CPUs and disks to dump your data much faster.
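
A minimal usage sketch, assuming the --basedir and --threads options as described in the Maatkit documentation (the paths and thread count here are made up for illustration):

    # Dump all databases in parallel, four tables at a time,
    # writing one file per table under /backup/dump
    mk-parallel-dump --basedir /backup/dump --threads 4

    # Replay the same directory back into the server, also in parallel
    mk-parallel-restore --threads 4 /backup/dump

(Maatkit was later merged into Percona Toolkit, where these two tools were eventually retired, so treat them as historical.)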

There are also various options in mysqldump, such as not building indexes while the dump is being imported and instead creating them en masse on completion.
