Why does MySQL higher LIMIT offset slow the query down?


Question

Scenario in short: a table with more than 16 million records [2 GB in size]. When using ORDER BY *primary_key*, the higher the LIMIT offset in a SELECT, the slower the query becomes.

So

SELECT * FROM large ORDER BY `id`  LIMIT 0, 30 

takes far less time than

SELECT * FROM large ORDER BY `id` LIMIT 10000, 30 

Both queries order only 30 records, the same either way, so the overhead is not from ORDER BY.
Now, fetching the latest 30 rows takes around 180 seconds. How can I optimize this simple query?
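One thing worth checking first: if the goal is only the *latest* 30 rows, no large offset is needed at all. Assuming id grows with insertion order (as an AUTO_INCREMENT primary key does), sorting descending reads the newest rows straight off the end of the index:

```sql
-- A sketch, assuming `id` increases with insertion order:
-- the newest 30 rows come from the tail of the index,
-- with no OFFSET rows to count through.
SELECT *
FROM   large
ORDER  BY id DESC
LIMIT  30;
```

This returns the same rows as a huge-offset ascending query, just in reverse order, and runs in near-constant time regardless of table size.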

Answer

It's normal that higher offsets slow the query down, since the query needs to count off the first OFFSET + LIMIT records (and return only LIMIT of them). The higher this value is, the longer the query runs.

The query cannot jump straight to OFFSET because, first, the records can be of different lengths and, second, there can be gaps from deleted records. It needs to check and count each record on its way.
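Those gaps are also why a plain range predicate cannot blindly replace OFFSET: after deletions, `id >= 10000` does not land on the 10,001st row. But if the client remembers the last id it already returned, a range scan can replace the offset entirely (so-called keyset pagination). A sketch, assuming pages are read in id order and 10030 is the hypothetical last id of the previous page:

```sql
-- Instead of counting off 10000 rows:
--   SELECT * FROM large ORDER BY id LIMIT 10000, 30;
-- seek directly to where the previous page ended:
SELECT *
FROM   large
WHERE  id > 10030        -- last id seen on the previous page (assumed value)
ORDER  BY id
LIMIT  30;
```

The index seek on `id` is O(log n), so every page costs the same no matter how deep into the table it is; the trade-off is that you can only step page by page, not jump to an arbitrary page number.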

Assuming that id is the primary key of a MyISAM table, or a unique non-primary-key field on an InnoDB table, you can speed it up by using this trick:

SELECT  t.* 
FROM    (
        SELECT  id
        FROM    mytable
        ORDER BY
                id
        LIMIT 10000, 30
        ) q
JOIN    mytable t
ON      t.id = q.id
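Applied to the `large` table from the question, the same deferred-join pattern looks like this (a sketch; the trailing ORDER BY is added because a join is not guaranteed to preserve the subquery's row order):

```sql
SELECT  t.*
FROM    (
        SELECT  id
        FROM    large          -- the 16M-row table from the question
        ORDER BY id
        LIMIT 10000, 30        -- the offset scan touches only the narrow index
        ) q
JOIN    large t
ON      t.id = q.id
ORDER BY t.id;                 -- re-impose the order after the join
```

The speedup comes from the inner query counting off its 10,030 entries inside the index on `id` alone, without touching the full row data; only the final 30 rows are then fetched from the table by primary key.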

See this article:

  • MySQL ORDER BY / LIMIT performance: late row lookups
