I submit a program with spark-submit, and when the data is somewhat large I frequently get a java.io.EOFException; it appears with small data too. The job can still finish successfully as long as a task doesn't fail four times in a row, so retrying the submission a few times sometimes works. What's strange is that the error is not deterministic: the same small-data job sometimes hits it and sometimes doesn't. Log below:
18/08/23 16:14:15 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, slave2, executor 1): org.apache.spark.SparkException: Python worker exited unexpectedly (crashed)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:333)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:322)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:443)
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:421)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:252)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28)
at org.apache.spark.api.python.PythonRDD$$anonfun$1.apply(PythonRDD.scala:141)
at org.apache.spark.api.python.PythonRDD$$anonfun$1.apply(PythonRDD.scala:141)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2067)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2067)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:428)
... 24 more
18/08/23 16:14:15 INFO TaskSetManager: Starting task 0.1 in stage 0.0 (TID 1, master, executor 2, partition 0, NODE_LOCAL, 7878 bytes)
18/08/23 16:14:16 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on master:50015 (size: 6.3 KB, free: 366.3 MB)
18/08/23 16:14:16 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on master:50015 (size: 21.5 KB, free: 366.3 MB)
18/08/23 16:14:17 INFO TaskSetManager: Lost task 0.1 in stage 0.0 (TID 1) on master, executor 2: org.apache.spark.SparkException (Python worker exited unexpectedly (crashed)) [duplicate 1]
18/08/23 16:14:17 INFO TaskSetManager: Starting task 0.2 in stage 0.0 (TID 2, slave2, executor 1, partition 0, NODE_LOCAL, 7878 bytes)
18/08/23 16:14:18 INFO TaskSetManager: Lost task 0.2 in stage 0.0 (TID 2) on slave2, executor 1: org.apache.spark.SparkException (Python worker exited unexpectedly (crashed)) [duplicate 2]
18/08/23 16:14:18 INFO TaskSetManager: Starting task 0.3 in stage 0.0 (TID 3, slave2, executor 1, partition 0, NODE_LOCAL, 7878 bytes)
18/08/23 16:14:18 INFO TaskSetManager: Lost task 0.3 in stage 0.0 (TID 3) on slave2, executor 1: org.apache.spark.SparkException (Python worker exited unexpectedly (crashed)) [duplicate 3]
18/08/23 16:14:18 ERROR TaskSetManager: Task 0 in stage 0.0 failed 4 times; aborting job
18/08/23 16:14:18 INFO YarnClusterScheduler: Removed TaskSet 0.0, whose tasks have all completed, from pool
18/08/23 16:14:18 INFO YarnClusterScheduler: Cancelling stage 0
18/08/23 16:14:18 INFO DAGScheduler: ResultStage 0 (runJob at PythonRDD.scala:141) failed in 5.369 s due to Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 3, slave2, executor 1): org.apache.spark.SparkException: Python worker exited unexpectedly (crashed)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:333)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator$$anonfun$1.applyOrElse(PythonRunner.scala:322)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:36)
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:443)
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:421)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:252)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:59)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:104)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:48)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:310)
at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:302)
at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:289)
at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28)
at org.apache.spark.api.python.PythonRDD$$anonfun$1.apply(PythonRDD.scala:141)
at org.apache.spark.api.python.PythonRDD$$anonfun$1.apply(PythonRDD.scala:141)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2067)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2067)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:392)
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:428)
... 24 more
Driver stacktrace:
18/08/23 16:14:18 INFO DAGScheduler: Job 0 failed: runJob at PythonRDD.scala:141, took 5.423308 s
18/08/23 16:14:18 ERROR ApplicationMaster: User application exited with status 1
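For context, the "failed 4 times; aborting job" line in the log comes from Spark's task retry limit, `spark.task.maxFailures` (default 4). As a hedged sketch of a workaround, not a root-cause fix, the retry limit can be raised and the executors given extra off-heap headroom via spark-submit. An abrupt "Python worker exited unexpectedly" is often the OS or YARN killing the python process for exceeding its memory limit, which would also explain why it appears intermittently. The script name and memory values below are placeholders, and `spark.yarn.executor.memoryOverhead` is the Spark 2.x property name:

```shell
# Allow more task retries before the stage aborts (default is 4).
# Note: raising maxFailures only masks the crash; if the Python worker
# is being killed for memory, increasing memoryOverhead is what helps.
spark-submit \
  --master yarn --deploy-mode cluster \
  --conf spark.task.maxFailures=8 \
  --conf spark.yarn.executor.memoryOverhead=1024 \
  your_job.py
```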