Problem description
Scenario in short: a table with more than 16 million records [2GB in size]. The higher the LIMIT offset in the SELECT, the slower the query becomes when using ORDER BY *primary_key*.
So
SELECT * FROM large ORDER BY `id` LIMIT 0, 30
takes far less time than
SELECT * FROM large ORDER BY `id` LIMIT 10000, 30
Both only order 30 records, and it is the same either way, so the cost is not the ORDER BY overhead.
Right now, fetching the latest 30 rows takes around 180 seconds. How can I optimize this simple query?
Recommended answer
It's normal that higher offsets slow the query down, since the query needs to count off the first OFFSET + LIMIT records (and return only LIMIT of them). The higher this value is, the longer the query runs.
The query cannot jump straight to OFFSET because, first, the records can be of different lengths and, second, there can be gaps left by deleted records. It needs to check and count each record on its way.
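One rough way to see this (a minimal sketch, reusing the table large and the two queries from the question) is to compare the EXPLAIN output of both forms:

-- Both plans scan the index on id, but the high-offset form has to step over
-- the first 10000 entries before returning its 30 rows; the rows estimate in
-- the plan typically grows with the offset.
EXPLAIN SELECT * FROM large ORDER BY `id` LIMIT 0, 30;
EXPLAIN SELECT * FROM large ORDER BY `id` LIMIT 10000, 30;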
Assuming that id is the primary key of a MyISAM table, or a unique non-primary-key field on an InnoDB table, you can speed it up by using this trick:
SELECT t.*
FROM (
    SELECT id
    FROM mytable
    ORDER BY id
    LIMIT 10000, 30
) q
JOIN mytable t ON t.id = q.id
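The inner query selects only id, so it can be satisfied from the narrow index alone and can skip past the first 10,000 entries cheaply; only the 30 surviving ids are then turned into full rows by the join. This is the "late row lookup" described in the article below.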
See this article:
- MySQL ORDER BY / LIMIT performance: late row lookups