Table vs Temp Table Performance

Problem Description

Which is faster for millions of records: a permanent table or a temp table?

I only have to use it for 15 million records. After processing is complete, we delete these records.

Recommended Answer

In your situation we use a permanent table called a staging table. This is a common method with large imports. In fact, we generally use two staging tables: one holding the raw data and one holding the cleaned-up data, which makes researching issues with the feed much easier (they are almost always the result of new and varied ways our clients find to send us junk data, but we have to be able to prove that). You also avoid issues such as having to grow tempdb, or causing problems for other users who want to use tempdb but have to wait while it grows for you.
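As a rough illustration of that two-stage pattern, here is a minimal T-SQL sketch. The Orders feed, the table names, and the columns are hypothetical examples, not from the original answer; the final dbo.Orders table is assumed to already exist.

```sql
-- 1. Raw staging table: load the feed as-is, everything as text,
--    so bad rows from the client can still be inspected later.
CREATE TABLE dbo.Staging_Orders_Raw (
    RowId     INT IDENTITY(1,1) PRIMARY KEY,
    OrderRef  VARCHAR(50) NULL,
    OrderDate VARCHAR(50) NULL,   -- kept as text until validated
    Amount    VARCHAR(50) NULL
);

-- 2. Clean staging table: typed columns, populated only with rows
--    that pass validation.
CREATE TABLE dbo.Staging_Orders_Clean (
    OrderRef  VARCHAR(50)   NOT NULL,
    OrderDate DATE          NOT NULL,
    Amount    DECIMAL(18,2) NOT NULL
);

-- Move validated rows from raw to clean; rows that fail the conversions
-- stay behind in the raw table for research.
INSERT INTO dbo.Staging_Orders_Clean (OrderRef, OrderDate, Amount)
SELECT OrderRef,
       TRY_CONVERT(DATE, OrderDate),
       TRY_CONVERT(DECIMAL(18,2), Amount)
FROM   dbo.Staging_Orders_Raw
WHERE  TRY_CONVERT(DATE, OrderDate) IS NOT NULL
  AND  TRY_CONVERT(DECIMAL(18,2), Amount) IS NOT NULL;

-- 3. Load the permanent destination table from the clean stage,
--    then clear the stages once the feed has been verified.
INSERT INTO dbo.Orders (OrderRef, OrderDate, Amount)
SELECT OrderRef, OrderDate, Amount
FROM   dbo.Staging_Orders_Clean;

TRUNCATE TABLE dbo.Staging_Orders_Clean;
-- Keep Staging_Orders_Raw until the load has been signed off, then truncate it too.
```

Keeping the rejected rows behind in the raw staging table is what makes it possible to show the client exactly which records were bad, without reloading the feed.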

You can also use SSIS and skip the staging table(s), but I find the ability to go back and research without having to reload a 50,000,000-row table very helpful.

