Cluster setup:
I am processing 3 txt files, about 1.3 GB in total, and want to count how many times each keyword appears; the job runs out of memory as soon as the shuffle is triggered.
It is executed from spark-shell, with 6 workers and 2 GB of memory allocated to each.
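If it helps, this is how I would double-check the resources the shell actually sees; it is only a minimal sketch, and whether spark.executor.memory was set explicitly when the shell was launched is my assumption.
[mw_shl_code=scala,true]
// Minimal check of the resources this spark-shell session sees.
// getOption avoids an exception if the property was never set explicitly.
sc.getConf.getOption("spark.executor.memory")  // expect Some(2g) on this cluster
sc.defaultParallelism                          // total cores available for tasks
[/mw_shl_code]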
The execution is shown below. What I want to ask is: is this OOM normal? Can Spark really not handle 1.3 GB of data here? And as a side question, roughly how much data can Spark process with a given amount of memory?
[mw_shl_code=scala,true]
scala> val source=sc.textFile("hdfs://node1:9100/user/wzy/sogoudata")
source: org.apache.spark.rdd.RDD[String] = hdfs://node1:9100/user/wzy/sogoudata MapPartitionsRDD[10] at textFile at <console>:24
scala> val key_1=source.map(x=>(x.split("\t")(2),1))
key_1: org.apache.spark.rdd.RDD[(String, Int)] = MapPartitionsRDD[13] at map at <console>:26
scala> key_1.take(3)
res8: Array[(String, Int)] = Array((奇艺高清,1), (凡人修仙传,1), (本本联盟,1))
scala> val key_count=key_1.reduceByKey(_+_)
key_count: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[14] at reduceByKey at <console>:28
scala> key_count.take(3)
[Stage 11:======================================> (8 + 4) / 12]16/08/05 14:59:51 WARN TaskSetManager: Lost task 3.0 in stage 11.0 (TID 48, 10.130.152.17): java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:3236)
at org.apache.hadoop.io.Text.setCapacity(Text.java:266)
at org.apache.hadoop.io.Text.append(Text.java:236)
at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:243)
at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174)
.......
[/mw_shl_code]
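One thing I am considering is giving the read and the reduceByKey more partitions, so that each shuffle task buffers a smaller slice of the data. This is only a sketch; the count of 48 partitions is an arbitrary guess on my part (4x the 12 tasks shown above), not something I have verified. Would this be the right direction?
[mw_shl_code=scala,true]
// Sketch: more input splits and more shuffle partitions, so each
// reduce task holds less data in memory at once.
// 48 is an arbitrary guess, not a tuned value.
val source = sc.textFile("hdfs://node1:9100/user/wzy/sogoudata", 48)
val key_1 = source.map(x => (x.split("\t")(2), 1))
val key_count = key_1.reduceByKey(_ + _, 48)  // numPartitions overload of reduceByKey
key_count.take(3)
[/mw_shl_code]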