Asking for help: after starting spark-shell --master spark://hadoop-master:7077 on the cluster's master node, simply running val a = sc.parallelize(1 to 10000).count
fails with: 15/06/16 09:41:23 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 2.0, whose tasks have all completed, from pool
org.apache.spark.SparkException: Job aborted due to stage failure: Task 2 in stage 2.0 failed 4 times, most recent failure: Lost task 2.3 in stage 2.0 (TID 68, hadoop-server): java.io.IOException: org.apache.spark.SparkException: Failed to get broadcast_3_piece0 of broadcast_3
at org.apache.spark.util.Utils$.tryOrIOException(Utils.scala:1156)