Improve INSERT-per-second performance of SQLite


Optimizing SQLite is tricky. Bulk-insert performance of a C application can vary from 85 inserts per second to over 96,000 inserts per second!

Background: We are using SQLite as part of a desktop application. We have large amounts of configuration data stored in XML files that are parsed and loaded into an SQLite database for further processing when the application is initialized. SQLite is ideal for this situation because it's fast, it requires no specialized configuration, and the database is stored on disk as a single file.

Rationale: Initially I was disappointed with the performance I was seeing. It turns-out that the performance of SQLite can vary significantly (both for bulk-inserts and selects) depending on how the database is configured and how you're using the API. It was not a trivial matter to figure out what all of the options and techniques were, so I thought it prudent to create this community wiki entry to share the results with Stack Overflow readers in order to save others the trouble of the same investigations.

The Experiment: Rather than simply talking about performance tips in the general sense (i.e. "Use a transaction!"), I thought it best to write some C code and actually measure the impact of various options. We're going to start with some simple data:

  • A 28 MB TAB-delimited text file (approximately 865,000 records) of the complete transit schedule for the city of Toronto
  • My test machine is a 3.60 GHz P4 running Windows XP.
  • The code is compiled with Visual C++ 2005 as "Release" with "Full Optimization" (/Ox) and Favor Fast Code (/Ot).
  • I'm using the SQLite "Amalgamation", compiled directly into my test application. The SQLite version I happen to have is a bit older (3.6.7), but I suspect these results will be comparable to the latest release (please leave a comment if you think otherwise).

Let's write some code!

The Code: A simple C program that reads the text file line-by-line, splits the string into values and then inserts the data into an SQLite database. In this "baseline" version of the code, the database is created, but we won't actually insert data:

/*************************************************************
    Baseline code to experiment with SQLite performance.

    Input data is a 28 MB TAB-delimited text file of the
    complete Toronto Transit System schedule/route info
    from http://www.toronto.ca/open/datasets/ttc-routes/

**************************************************************/
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <string.h>
#include "sqlite3.h"

#define INPUTDATA "C:\\TTC_schedule_scheduleitem_10-27-2009.txt"
#define DATABASE "c:\\TTC_schedule_scheduleitem_10-27-2009.sqlite"
#define TABLE "CREATE TABLE IF NOT EXISTS TTC (id INTEGER PRIMARY KEY, Route_ID TEXT, Branch_Code TEXT, Version INTEGER, Stop INTEGER, Vehicle_Index INTEGER, Day INTEGER, Time TEXT)"
#define BUFFER_SIZE 256

int main(int argc, char **argv) {

    sqlite3 * db;
    sqlite3_stmt * stmt;
    char * sErrMsg = 0;
    char * tail = 0;
    int nRetCode;
    int n = 0;

    clock_t cStartClock;

    FILE * pFile;
    char sInputBuf [BUFFER_SIZE] = "";

    char * sRT = 0;  /* Route */
    char * sBR = 0;  /* Branch */
    char * sVR = 0;  /* Version */
    char * sST = 0;  /* Stop Number */
    char * sVI = 0;  /* Vehicle */
    char * sDT = 0;  /* Date */
    char * sTM = 0;  /* Time */

    char sSQL [BUFFER_SIZE] = "";

    /*********************************************/
    /* Open the Database and create the Schema */
    sqlite3_open(DATABASE, &db);
    sqlite3_exec(db, TABLE, NULL, NULL, &sErrMsg);

    /*********************************************/
    /* Open input file and import into Database*/
    cStartClock = clock();

    pFile = fopen (INPUTDATA,"r");
    while (!feof(pFile)) {

        fgets (sInputBuf, BUFFER_SIZE, pFile);

        sRT = strtok (sInputBuf, "\t");     /* Get Route */
        sBR = strtok (NULL, "\t");          /* Get Branch */
        sVR = strtok (NULL, "\t");          /* Get Version */
        sST = strtok (NULL, "\t");          /* Get Stop Number */
        sVI = strtok (NULL, "\t");          /* Get Vehicle */
        sDT = strtok (NULL, "\t");          /* Get Date */
        sTM = strtok (NULL, "\t");          /* Get Time */

        /* ACTUAL INSERT WILL GO HERE */

        n++;
    }
    fclose (pFile);

    printf("Imported %d records in %4.2f seconds
", n, (clock() - cStartClock) / (double)CLOCKS_PER_SEC);

    sqlite3_close(db);
    return 0;
}


The "Control"

Running the code as-is doesn't actually perform any database operations, but it will give us an idea of how fast the raw C file I/O and string processing operations are.

Imported 864913 records in 0.94 seconds

Great! We can do 920,000 inserts per second, provided we don't actually do any inserts :-)


The "Worst-Case-Scenario"

We're going to generate the SQL string using the values read from the file and invoke that SQL operation using sqlite3_exec:

sprintf(sSQL, "INSERT INTO TTC VALUES (NULL, '%s', '%s', '%s', '%s', '%s', '%s', '%s')", sRT, sBR, sVR, sST, sVI, sDT, sTM);
sqlite3_exec(db, sSQL, NULL, NULL, &sErrMsg);

This is going to be slow because the SQL will be compiled into VDBE code for every insert and every insert will happen in its own transaction. How slow?

Imported 864913 records in 9933.61 seconds

Yikes! 2 hours and 45 minutes! That's only 85 inserts per second.

Using a Transaction

By default, SQLite will evaluate every INSERT / UPDATE statement within a unique transaction. If performing a large number of inserts, it's advisable to wrap your operation in a transaction:

sqlite3_exec(db, "BEGIN TRANSACTION", NULL, NULL, &sErrMsg);

pFile = fopen (INPUTDATA,"r");
while (!feof(pFile)) {

    ...

}
fclose (pFile);

sqlite3_exec(db, "END TRANSACTION", NULL, NULL, &sErrMsg);

Imported 864913 records in 38.03 seconds

That's better. Simply wrapping all of our inserts in a single transaction improved our performance to 23,000 inserts per second.

Using a Prepared Statement

Using a transaction was a huge improvement, but recompiling the SQL statement for every insert doesn't make sense if we're using the same SQL over and over. Let's use sqlite3_prepare_v2 to compile our SQL statement once and then bind our parameters to that statement using sqlite3_bind_text:

/* Open input file and import into the database */
cStartClock = clock();

sprintf(sSQL, "INSERT INTO TTC VALUES (NULL, @RT, @BR, @VR, @ST, @VI, @DT, @TM)");
sqlite3_prepare_v2(db,  sSQL, BUFFER_SIZE, &stmt, &tail);

sqlite3_exec(db, "BEGIN TRANSACTION", NULL, NULL, &sErrMsg);

pFile = fopen (INPUTDATA,"r");
while (!feof(pFile)) {

    fgets (sInputBuf, BUFFER_SIZE, pFile);

    sRT = strtok (sInputBuf, "\t");   /* Get Route */
    sBR = strtok (NULL, "\t");        /* Get Branch */
    sVR = strtok (NULL, "\t");        /* Get Version */
    sST = strtok (NULL, "\t");        /* Get Stop Number */
    sVI = strtok (NULL, "\t");        /* Get Vehicle */
    sDT = strtok (NULL, "\t");        /* Get Date */
    sTM = strtok (NULL, "\t");        /* Get Time */

    sqlite3_bind_text(stmt, 1, sRT, -1, SQLITE_TRANSIENT);
    sqlite3_bind_text(stmt, 2, sBR, -1, SQLITE_TRANSIENT);
    sqlite3_bind_text(stmt, 3, sVR, -1, SQLITE_TRANSIENT);
    sqlite3_bind_text(stmt, 4, sST, -1, SQLITE_TRANSIENT);
    sqlite3_bind_text(stmt, 5, sVI, -1, SQLITE_TRANSIENT);
    sqlite3_bind_text(stmt, 6, sDT, -1, SQLITE_TRANSIENT);
    sqlite3_bind_text(stmt, 7, sTM, -1, SQLITE_TRANSIENT);

    sqlite3_step(stmt);

    sqlite3_clear_bindings(stmt);
    sqlite3_reset(stmt);

    n++;
}
fclose (pFile);

sqlite3_exec(db, "END TRANSACTION", NULL, NULL, &sErrMsg);

printf("Imported %d records in %4.2f seconds
", n, (clock() - cStartClock) / (double)CLOCKS_PER_SEC);

sqlite3_finalize(stmt);
sqlite3_close(db);

return 0;

Imported 864913 records in 16.27 seconds

Nice! There's a little bit more code (don't forget to call sqlite3_clear_bindings and sqlite3_reset), but we've more than doubled our performance to 53,000 inserts per second.
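One caveat: the code above ignores return codes for brevity. If you adapt it, you would likely want to check them. Below is a minimal hedged sketch; the check_rc helper is hypothetical, not part of the SQLite API:

/* Hypothetical helper: abort with SQLite's error message if a call
   did not return the expected code. Uses stdio.h / stdlib.h, which
   the program already includes. */
static void check_rc(sqlite3 *db, int rc, int expected, const char *ctx) {
    if (rc != expected) {
        fprintf(stderr, "%s failed: %s\n", ctx, sqlite3_errmsg(db));
        exit(1);
    }
}

/* Inside the import loop, for example: */
check_rc(db, sqlite3_bind_text(stmt, 1, sRT, -1, SQLITE_TRANSIENT), SQLITE_OK, "bind Route");
check_rc(db, sqlite3_step(stmt), SQLITE_DONE, "step");   /* a completed INSERT returns SQLITE_DONE */
check_rc(db, sqlite3_reset(stmt), SQLITE_OK, "reset");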

PRAGMA synchronous = OFF

By default, SQLite will pause after issuing an OS-level write command. This guarantees that the data is written to the disk. By setting synchronous = OFF, we are instructing SQLite to simply hand off the data to the OS for writing and then continue. There's a chance that the database file may become corrupted if the computer suffers a catastrophic crash (or power failure) before the data is written to the platter:

/* Open the database and create the schema */
sqlite3_open(DATABASE, &db);
sqlite3_exec(db, TABLE, NULL, NULL, &sErrMsg);
sqlite3_exec(db, "PRAGMA synchronous = OFF", NULL, NULL, &sErrMsg);

Imported 864913 records in 12.41 seconds

The improvements are now smaller, but we're up to 69,600 inserts per second.

PRAGMA journal_mode = MEMORY

Consider storing the rollback journal in memory by evaluating PRAGMA journal_mode = MEMORY. Your transactions will be faster, but if you lose power or your program crashes during a transaction, your database could be left in a corrupt state with a partially-completed transaction:

/* Open the database and create the schema */
sqlite3_open(DATABASE, &db);
sqlite3_exec(db, TABLE, NULL, NULL, &sErrMsg);
sqlite3_exec(db, "PRAGMA journal_mode = MEMORY", NULL, NULL, &sErrMsg);

Imported 864913 records in 13.50 seconds

A little slower than the previous optimization at 64,000 inserts per second.

PRAGMA synchronous = OFF and PRAGMA journal_mode = MEMORY

Let's combine the previous two optimizations. It's a little more risky (in case of a crash), but we're just importing data (not running a bank):

/* Open the database and create the schema */
sqlite3_open(DATABASE, &db);
sqlite3_exec(db, TABLE, NULL, NULL, &sErrMsg);
sqlite3_exec(db, "PRAGMA synchronous = OFF", NULL, NULL, &sErrMsg);
sqlite3_exec(db, "PRAGMA journal_mode = MEMORY", NULL, NULL, &sErrMsg);

Imported 864913 records in 12.00 seconds

Fantastic! We're able to do 72,000 inserts per second.

Using an In-Memory Database

Just for kicks, let's build upon all of the previous optimizations and redefine the database filename so we're working entirely in RAM:

#define DATABASE ":memory:"

Imported 864913 records in 10.94 seconds

It's not super-practical to store our database in RAM, but it's impressive that we can perform 79,000 inserts per second.
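If working in RAM is appealing, one middle ground (not benchmarked here) is to populate an in-memory database and then persist it to a file with SQLite's online backup API. Note that the backup API was introduced in SQLite 3.6.11, slightly newer than the 3.6.7 amalgamation used in these tests, and the target filename below is illustrative; a hedged sketch:

/* Sketch: persist the populated ":memory:" database (handle `db`) to disk.
   Requires SQLite >= 3.6.11 for the online backup API. */
sqlite3 *pDisk;
sqlite3_backup *pBackup;

sqlite3_open("c:\\TTC_snapshot.sqlite", &pDisk);            /* illustrative path */
pBackup = sqlite3_backup_init(pDisk, "main", db, "main");
if (pBackup != NULL) {
    sqlite3_backup_step(pBackup, -1);                       /* -1 copies all pages in one step */
    sqlite3_backup_finish(pBackup);
}
sqlite3_close(pDisk);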

Refactoring C Code

Although not specifically an SQLite improvement, I don't like the extra char* assignment operations in the while loop. Let's quickly refactor that code to pass the output of strtok() directly into sqlite3_bind_text(), and let the compiler try to speed things up for us:

pFile = fopen (INPUTDATA,"r");
while (!feof(pFile)) {

    fgets (sInputBuf, BUFFER_SIZE, pFile);

    sqlite3_bind_text(stmt, 1, strtok (sInputBuf, "\t"), -1, SQLITE_TRANSIENT); /* Get Route */
    sqlite3_bind_text(stmt, 2, strtok (NULL, "\t"), -1, SQLITE_TRANSIENT);      /* Get Branch */
    sqlite3_bind_text(stmt, 3, strtok (NULL, "\t"), -1, SQLITE_TRANSIENT);      /* Get Version */
    sqlite3_bind_text(stmt, 4, strtok (NULL, "\t"), -1, SQLITE_TRANSIENT);      /* Get Stop Number */
    sqlite3_bind_text(stmt, 5, strtok (NULL, "\t"), -1, SQLITE_TRANSIENT);      /* Get Vehicle */
    sqlite3_bind_text(stmt, 6, strtok (NULL, "\t"), -1, SQLITE_TRANSIENT);      /* Get Date */
    sqlite3_bind_text(stmt, 7, strtok (NULL, "\t"), -1, SQLITE_TRANSIENT);      /* Get Time */

    sqlite3_step(stmt);        /* Execute the SQL Statement */
    sqlite3_clear_bindings(stmt);    /* Clear bindings */
    sqlite3_reset(stmt);        /* Reset VDBE */

    n++;
}
fclose (pFile);

Note: We are back to using a real database file. In-memory databases are fast, but not necessarily practical.

Imported 864913 records in 8.94 seconds

A slight refactoring to the string processing code used in our parameter binding has allowed us to perform 96,700 inserts per second. I think it's safe to say that this is plenty fast. As we start to tweak other variables (i.e. page size, index creation, etc.) this will be our benchmark.


Summary (so far)

I hope you're still with me! The reason we started down this road is that bulk-insert performance varies so wildly with SQLite, and it's not always obvious what changes need to be made to speed up our operation. Using the same compiler (and compiler options), the same version of SQLite and the same data, we've optimized our code and our usage of SQLite to go from a worst-case scenario of 85 inserts per second to over 96,000 inserts per second!


CREATE INDEX then INSERT vs. INSERT then CREATE INDEX

Before we start measuring SELECT performance, we know that we'll be creating indices. It's been suggested in one of the answers below that when doing bulk inserts, it is faster to create the index after the data has been inserted (as opposed to creating the index first then inserting the data). Let's try:

Create Index then Insert Data

sqlite3_exec(db, "CREATE  INDEX 'TTC_Stop_Index' ON 'TTC' ('Stop')", NULL, NULL, &sErrMsg);
sqlite3_exec(db, "BEGIN TRANSACTION", NULL, NULL, &sErrMsg);
...

Imported 864913 records in 18.13 seconds

Insert Data then Create Index

...
sqlite3_exec(db, "END TRANSACTION", NULL, NULL, &sErrMsg);
sqlite3_exec(db, "CREATE  INDEX 'TTC_Stop_Index' ON 'TTC' ('Stop')", NULL, NULL, &sErrMsg);

Imported 864913 records in 13.66 seconds

As expected, bulk-inserts are slower if one column is indexed, but it does make a difference if the index is created after the data is inserted. Our no-index baseline is 96,000 inserts per second. Creating the index first then inserting data gives us 47,700 inserts per second, whereas inserting the data first then creating the index gives us 63,300 inserts per second.


I'd gladly take suggestions for other scenarios to try... And will be compiling similar data for SELECT queries soon.

Solution

Several tips:

  1. Put inserts/updates in a transaction.
  2. For older versions of SQLite, consider a less paranoid journal mode (PRAGMA journal_mode). There is NORMAL, and then there is OFF, which can significantly increase insert speed if you're not too worried about the database possibly getting corrupted if the OS crashes. If your application crashes, the data should be fine. Note that in newer versions, the OFF/MEMORY settings are not safe for application-level crashes.
  3. Playing with page sizes makes a difference as well (PRAGMA page_size). Having larger page sizes can make reads and writes go a bit faster, as larger pages are held in memory. Note that more memory will be used for your database (see the sketch after this list).
  4. If you have indices, consider calling CREATE INDEX after doing all your inserts. This is significantly faster than creating the index and then doing your inserts.
  5. You have to be quite careful if you have concurrent access to SQLite, as the whole database is locked when writes are done, and although multiple readers are possible, writes will be locked out. This has been improved somewhat with the addition of a WAL in newer SQLite versions.
  6. Take advantage of saving space... smaller databases go faster. For instance, if you have key-value pairs, try making the key an INTEGER PRIMARY KEY if possible, which will replace the implied unique row-number column in the table (also shown in the sketch after this list).
  7. If you are using multiple threads, you can try using the shared page cache, which will allow loaded pages to be shared between threads, which can avoid expensive I/O calls.
  8. Don't use !feof(file)! Check the return value of fgets() instead (see the loop sketch after this list).
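For tips 3 and 6, here is a short sketch of how they might look in the test program. The 4096-byte page size and the KV table are illustrative assumptions, not values measured in this post:

sqlite3_open(DATABASE, &db);

/* PRAGMA page_size only takes effect before the first table is created
   (or after a VACUUM), so issue it first. 4096 is an illustrative value. */
sqlite3_exec(db, "PRAGMA page_size = 4096", NULL, NULL, &sErrMsg);

/* An INTEGER PRIMARY KEY aliases SQLite's implicit rowid, so no separate
   key column is stored for this hypothetical key-value table. */
sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS KV (Key INTEGER PRIMARY KEY, Value TEXT)",
             NULL, NULL, &sErrMsg);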
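On tip 8: feof() only reports end-of-file after a read has already failed, so the while (!feof(pFile)) pattern used throughout the benchmark code processes the final fgets() result twice (or operates on a stale buffer). A hedged rewrite of the read loop, keyed off the return value of fgets():

pFile = fopen(INPUTDATA, "r");
if (pFile != NULL) {
    /* fgets() returns NULL at end-of-file or on error, so the loop
       body only ever sees a successfully-read line. */
    while (fgets(sInputBuf, BUFFER_SIZE, pFile) != NULL) {
        /* tokenize, bind and step as in the earlier versions */
        n++;
    }
    fclose(pFile);
}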

I've also asked similar questions here and here.
