Spark reading data from MySQL in parallel


Problem Description

I'm trying to read data from MySQL and write it back to Parquet files in S3 with specific partitions, as follows:

from pyspark.sql.functions import to_date

df = sqlContext.read.format('jdbc') \
    .options(driver='com.mysql.jdbc.Driver',
             url='jdbc:mysql://<host>:3306/<db>?user=<usr>&password=<pass>',
             dbtable='tbl',
             numPartitions=4) \
    .load()

df2 = df.withColumn('updated_date', to_date(df.updated_at))
df2.write.parquet(path='s3n://parquet_location', mode='append', partitionBy=['updated_date'])

My problem is that Spark opens only one connection to MySQL (instead of 4), and it doesn't write any Parquet until it has fetched all the data from MySQL. Because my table in MySQL is huge (100M rows), the process fails with an OutOfMemory error.

Is there a way to configure Spark to open more than one connection to MySQL and to write partial data to Parquet?

Recommended Answer

You should set these properties:

partitionColumn
lowerBound
upperBound
numPartitions

As documented here: http://spark.apache.org/docs/latest/sql-programming-guide.html#jdbc-to-other-databases
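
A minimal sketch of how those options fit together, assuming the table has an indexed integer column named id to split on (the column name and the bounds below are illustrative; in practice, fetch the bounds first with SELECT MIN(id), MAX(id) FROM tbl):

from pyspark.sql.functions import to_date

# Assumption: 'id' is an indexed integer column; the bounds are illustrative.
df = (sqlContext.read.format('jdbc')
      .options(driver='com.mysql.jdbc.Driver',
               url='jdbc:mysql://<host>:3306/<db>?user=<usr>&password=<pass>',
               dbtable='tbl',
               partitionColumn='id',    # column Spark splits the reads on
               lowerBound='1',          # assumed minimum id
               upperBound='100000000',  # assumed maximum id (~100M rows)
               numPartitions=4)         # 4 parallel JDBC connections
      .load())

df2 = df.withColumn('updated_date', to_date(df.updated_at))
df2.write.parquet(path='s3n://parquet_location', mode='append',
                  partitionBy=['updated_date'])

With all four options set, Spark turns the read into numPartitions range queries (WHERE id >= ... AND id < ... per task), so each task fetches and writes only its own slice instead of a single task pulling all 100M rows into memory.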
