Chunking, processing & merging a dataset in Pandas/Python

This article looks at chunking, processing, and merging a dataset in Pandas/Python, and may be a useful reference if you run into the same problem.

Problem description

There is a large dataset containing strings. I just want to open it via read_fwf using widths, like this:

import pandas as pd

widths = [3, 7, ..., 9, 7]
tp = pd.read_fwf(file, widths=widths, header=None)

It would help me to mark the data, but the system crashes (it works with nrows=20000). Then I decided to do it in chunks (e.g. 20000 rows), like this:

cs = 20000
for chunk in pd.read_fwf(file, widths=widths, header=None, chunksize=cs):
    # <some code using chunk>
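
For illustration only, here is a hypothetical sketch of what the per-chunk processing hinted at by the placeholder above might look like; the file name, widths, column positions, and conditions below are made-up assumptions, not taken from the original question:

import pandas as pd

file = "data.fwf"        # hypothetical path to the fixed-width file
widths = [3, 7, 9, 7]    # hypothetical widths; the real list is elided above
cs = 20000               # chunk size

for chunk in pd.read_fwf(file, widths=widths, header=None, chunksize=cs):
    # with header=None the columns are labelled 0, 1, 2, ...
    chunk = chunk.drop(columns=[2])     # drop an unwanted column by position
    chunk[3] = chunk[3] * 100           # modify a column (assumes it is numeric)
    chunk["flag"] = chunk[1] > 50       # mark rows that satisfy some condition
    # ... then write out or collect the processed chunk (see the answer below)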

My question is: what should I do in the loop to merge (concatenate?) the chunks back into a .csv file after some processing of each chunk (marking rows, dropping or modifying columns)? Or is there another way?

Recommended answer

I'm going to assume that, since reading the entire file

tp = pd.read_fwf(file, widths=widths, header=None)

fails but reading in chunks works, the file is too big to be read at once and you encountered a MemoryError.

In that case, if you can process the data in chunks, then to concatenate the results into a CSV you could use chunk.to_csv to write the CSV in chunks:

filename = ...
for chunk in pd.read_fwf(file, widths=widths, header=None, chunksize=cs):
    # process the chunk
    chunk.to_csv(filename, mode='a')

Note that mode='a' opens the file in append mode, so the output of each chunk.to_csv call is appended to the same file.
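
Putting the pieces together, here is a minimal end-to-end sketch under the same assumptions (hypothetical file names, hypothetical widths, and a placeholder processing step); it also writes the CSV header only for the first chunk and skips the index, so the appended pieces form one clean CSV:

import pandas as pd

infile = "data.fwf"          # hypothetical input path
outfile = "processed.csv"    # hypothetical output path
widths = [3, 7, 9, 7]        # hypothetical widths
cs = 20000                   # chunk size

first = True
for chunk in pd.read_fwf(infile, widths=widths, header=None, chunksize=cs):
    # ... process the chunk here (mark rows, drop or modify columns) ...

    # append each processed chunk to the same CSV, writing the header only once
    chunk.to_csv(outfile, mode='a', header=first, index=False)
    first = False

If the output file may already exist from a previous run, it is worth deleting or truncating it before the loop, since mode='a' would otherwise append the new rows after the old ones.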

That concludes this article on chunking, processing, and merging a dataset in Pandas/Python; hopefully the recommended answer is helpful.
