bin/spark-submit --class com.ghost.scala.AnalysisData --master spark://master:7077 --executor-memory 1G --total-executor-cores 50 /usr/local/modules/test/AnalysisData.jar hdfs://master:8020/usr/data/miniData.txt 1000
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/05/27 01:23:26 INFO SparkContext: Running Spark version 1.5.0
16/05/27 01:23:29 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/05/27 01:23:29 INFO SecurityManager: Changing view acls to: root
16/05/27 01:23:29 INFO SecurityManager: Changing modify acls to: root
16/05/27 01:23:29 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
16/05/27 01:23:32 INFO Slf4jLogger: Slf4jLogger started
16/05/27 01:23:32 INFO Remoting: Starting remoting
16/05/27 01:23:35 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@192.168.221.130:52011]
16/05/27 01:23:35 INFO Utils: Successfully started service 'sparkDriver' on port 52011.
16/05/27 01:23:35 INFO SparkEnv: Registering MapOutputTracker
16/05/27 01:23:35 INFO SparkEnv: Registering BlockManagerMaster
16/05/27 01:23:36 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-f2a03d79-aacc-4f08-af45-9980091c3dd5
16/05/27 01:23:36 INFO MemoryStore: MemoryStore started with capacity 534.5 MB
16/05/27 01:23:36 INFO HttpFileServer: HTTP File server directory is /tmp/spark-2055fbdb-7523-4d19-9eed-fbe4db06691b/httpd-90dc3883-c57d-4163-9d48-1e8bbc2acaa8
16/05/27 01:23:36 INFO HttpServer: Starting HTTP Server
16/05/27 01:23:37 INFO Utils: Successfully started service 'HTTP file server' on port 34074.
16/05/27 01:23:37 INFO SparkEnv: Registering OutputCommitCoordinator
16/05/27 01:23:37 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/05/27 01:23:37 INFO SparkUI: Started SparkUI at http://192.168.221.130:4040
16/05/27 01:23:40 INFO SparkContext: Added JAR file:/usr/local/modules/test/AnalysisData.jar at http://192.168.221.130:34074/jars/AnalysisData.jar with timestamp 1464283420456
16/05/27 01:23:41 WARN MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
16/05/27 01:23:41 INFO AppClient$ClientEndpoint: Connecting to master spark://master:7077...
16/05/27 01:23:45 INFO SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20160527012345-0002
16/05/27 01:23:45 INFO AppClient$ClientEndpoint: Executor added: app-20160527012345-0002/0 on worker-20160526172621-192.168.221.129-44907 (192.168.221.129:44907) with 1 cores
16/05/27 01:23:45 INFO SparkDeploySchedulerBackend: Granted executor ID app-20160527012345-0002/0 on hostPort 192.168.221.129:44907 with 1 cores, 1024.0 MB RAM
16/05/27 01:23:45 INFO AppClient$ClientEndpoint: Executor updated: app-20160527012345-0002/0 is now LOADING
16/05/27 01:23:45 INFO AppClient$ClientEndpoint: Executor updated: app-20160527012345-0002/0 is now RUNNING
16/05/27 01:23:47 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 54641.
16/05/27 01:23:47 INFO NettyBlockTransferService: Server created on 54641
16/05/27 01:23:47 INFO BlockManagerMaster: Trying to register BlockManager
16/05/27 01:23:47 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.221.130:54641 with 534.5 MB RAM, BlockManagerId(driver, 192.168.221.130, 54641)
16/05/27 01:23:47 INFO BlockManagerMaster: Registered BlockManager
16/05/27 01:23:49 INFO SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
16/05/27 01:23:53 WARN SizeEstimator: Failed to check whether UseCompressedOops is set; assuming yes
16/05/27 01:23:53 INFO MemoryStore: ensureFreeSpace(150224) called with curMem=0, maxMem=560497950
16/05/27 01:23:53 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 146.7 KB, free 534.4 MB)
16/05/27 01:23:54 INFO MemoryStore: ensureFreeSpace(14276) called with curMem=150224, maxMem=560497950
16/05/27 01:23:54 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 13.9 KB, free 534.4 MB)
16/05/27 01:23:54 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.221.130:54641 (size: 13.9 KB, free: 534.5 MB)
16/05/27 01:23:54 INFO SparkContext: Created broadcast 0 from textFile at AnalysisData.scala:19
16/05/27 01:23:59 INFO FileInputFormat: Total input paths to process : 1
16/05/27 01:23:59 INFO SparkContext: Starting job: collect at AnalysisData.scala:41
16/05/27 01:23:59 INFO DAGScheduler: Got job 0 (collect at AnalysisData.scala:41) with 2 output partitions
16/05/27 01:23:59 INFO DAGScheduler: Final stage: ResultStage 0(collect at AnalysisData.scala:41)
16/05/27 01:23:59 INFO DAGScheduler: Parents of final stage: List()
16/05/27 01:23:59 INFO DAGScheduler: Missing parents: List()
16/05/27 01:23:59 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[2] at map at AnalysisData.scala:36), which has no missing parents
16/05/27 01:24:00 INFO MemoryStore: ensureFreeSpace(3272) called with curMem=164500, maxMem=560497950
16/05/27 01:24:00 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 3.2 KB, free 534.4 MB)
16/05/27 01:24:00 INFO MemoryStore: ensureFreeSpace(1905) called with curMem=167772, maxMem=560497950
16/05/27 01:24:00 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 1905.0 B, free 534.4 MB)
16/05/27 01:24:00 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 192.168.221.130:54641 (size: 1905.0 B, free: 534.5 MB)
16/05/27 01:24:00 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:861
16/05/27 01:24:00 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 0 (MapPartitionsRDD[2] at map at AnalysisData.scala:36)
16/05/27 01:24:00 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
16/05/27 01:24:03 INFO SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@192.168.221.129:57783/user/Executor#1310152079]) with ID 0
16/05/27 01:24:03 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, 192.168.221.129, ANY, 2209 bytes)
16/05/27 01:24:04 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.221.129:53403 with 534.5 MB RAM, BlockManagerId(0, 192.168.221.129, 53403)
16/05/27 01:24:26 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 192.168.221.129:53403 (size: 1905.0 B, free: 534.5 MB)
16/05/27 01:24:27 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.221.129:53403 (size: 13.9 KB, free: 534.5 MB)
16/05/27 01:24:32 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, 192.168.221.129, ANY, 2209 bytes)
16/05/27 01:24:33 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, 192.168.221.129): java.lang.NoSuchMethodError: scala.runtime.ObjectRef.create(Ljava/lang/Object;)Lscala/runtime/ObjectRef;
at com.ghost.scala.AnalysisData$.AnalysisString(AnalysisData.scala:52)
at com.ghost.scala.AnalysisData$$anonfun$1.apply(AnalysisData.scala:36)
at com.ghost.scala.AnalysisData$$anonfun$1.apply(AnalysisData.scala:36)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
at scala.collection.AbstractIterator.to(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:905)
at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:905)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1839)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1839)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
16/05/27 01:24:34 ERROR TaskSchedulerImpl: Lost executor 0 on 192.168.221.129: remote Rpc client disassociated
16/05/27 01:24:34 WARN ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkExecutor@192.168.221.129:57783] has failed, address is now gated for [5000] ms. Reason: [Disassociated]
16/05/27 01:24:34 INFO TaskSetManager: Re-queueing tasks for 0 from TaskSet 0.0
16/05/27 01:24:34 INFO AppClient$ClientEndpoint: Executor updated: app-20160527012345-0002/0 is now EXITED (Command exited with code 50)
16/05/27 01:24:35 INFO SparkDeploySchedulerBackend: Executor app-20160527012345-0002/0 removed: Command exited with code 50
16/05/27 01:24:35 WARN TaskSetManager: Lost task 1.0 in stage 0.0 (TID 1, 192.168.221.129): ExecutorLostFailure (executor 0 lost)
16/05/27 01:24:35 INFO SparkDeploySchedulerBackend: Asked to remove non-existent executor 0
16/05/27 01:24:35 INFO DAGScheduler: Executor lost: 0 (epoch 0)
16/05/27 01:24:35 INFO AppClient$ClientEndpoint: Executor added: app-20160527012345-0002/1 on worker-20160526172621-192.168.221.129-44907 (192.168.221.129:44907) with 1 cores
16/05/27 01:24:35 INFO SparkDeploySchedulerBackend: Granted executor ID app-20160527012345-0002/1 on hostPort 192.168.221.129:44907 with 1 cores, 1024.0 MB RAM
16/05/27 01:24:35 INFO AppClient$ClientEndpoint: Executor updated: app-20160527012345-0002/1 is now LOADING
16/05/27 01:24:35 INFO BlockManagerMasterEndpoint: Trying to remove executor 0 from BlockManagerMaster.
16/05/27 01:24:35 INFO BlockManagerMasterEndpoint: Removing block manager BlockManagerId(0, 192.168.221.129, 53403)
16/05/27 01:24:35 INFO BlockManagerMaster: Removed 0 successfully in removeExecutor
16/05/27 01:24:35 INFO AppClient$ClientEndpoint: Executor updated: app-20160527012345-0002/1 is now RUNNING
16/05/27 01:24:47 INFO SparkDeploySchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@192.168.221.129:33613/user/Executor#237737849]) with ID 1
16/05/27 01:24:47 INFO TaskSetManager: Starting task 1.1 in stage 0.0 (TID 2, 192.168.221.129, ANY, 2209 bytes)
16/05/27 01:24:48 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.221.129:33947 with 534.5 MB RAM, BlockManagerId(1, 192.168.221.129, 33947)
16/05/27 01:25:11 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 192.168.221.129:33947 (size: 1905.0 B, free: 534.5 MB)
16/05/27 01:25:12 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.221.129:33947 (size: 13.9 KB, free: 534.5 MB)
16/05/27 01:25:17 INFO TaskSetManager: Starting task 0.1 in stage 0.0 (TID 3, 192.168.221.129, ANY, 2209 bytes)
16/05/27 01:25:17 INFO TaskSetManager: Finished task 1.1 in stage 0.0 (TID 2) in 30067 ms on 192.168.221.129 (1/2)
16/05/27 01:25:17 WARN TaskSetManager: Lost task 0.1 in stage 0.0 (TID 3, 192.168.221.129): java.lang.NoSuchMethodError: scala.runtime.ObjectRef.create(Ljava/lang/Object;)Lscala/runtime/ObjectRef;
at com.ghost.scala.AnalysisData$.AnalysisString(AnalysisData.scala:52)
at com.ghost.scala.AnalysisData$$anonfun$1.apply(AnalysisData.scala:36)
at com.ghost.scala.AnalysisData$$anonfun$1.apply(AnalysisData.scala:36)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
at scala.collection.AbstractIterator.to(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:905)
at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:905)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1839)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1839)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:88)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
16/05/27 01:25:17 INFO TaskSetManager: Starting task 0.2 in stage 0.0 (TID 4, 192.168.221.129, ANY, 2209 bytes)
16/05/27 01:25:18 INFO TaskSetManager: Lost task 0.2 in stage 0.0 (TID 4) on executor 192.168.221.129: java.lang.NoSuchMethodError (scala.runtime.ObjectRef.create(Ljava/lang/Object;)Lscala/runtime/ObjectRef;) [duplicate 1]
16/05/27 01:25:18 INFO TaskSetManager: Starting task 0.3 in stage 0.0 (TID 5, 192.168.221.129, ANY, 2209 bytes)
16/05/27 01:25:18 ERROR TaskSchedulerImpl: Lost executor 1 on 192.168.221.129: remote Rpc client disassociated
16/05/27 01:25:18 INFO TaskSetManager: Re-queueing tasks for 1 from TaskSet 0.0
16/05/27 01:25:18 WARN TaskSetManager: Lost task 0.3 in stage 0.0 (TID 5, 192.168.221.129): ExecutorLostFailure (executor 1 lost)
16/05/27 01:25:18 WARN ReliableDeliverySupervisor: Association with remote system [akka.tcp://sparkExecutor@192.168.221.129:33613] has failed, address is now gated for [5000] ms. Reason: [Disassociated]
16/05/27 01:25:18 ERROR TaskSetManager: Task 0 in stage 0.0 failed 4 times; aborting job
16/05/27 01:25:18 INFO AppClient$ClientEndpoint: Executor updated: app-20160527012345-0002/1 is now EXITED (Command exited with code 50)
16/05/27 01:25:18 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
16/05/27 01:25:18 INFO SparkDeploySchedulerBackend: Executor app-20160527012345-0002/1 removed: Command exited with code 50
16/05/27 01:25:18 INFO SparkDeploySchedulerBackend: Asked to remove non-existent executor 1
16/05/27 01:25:18 INFO AppClient$ClientEndpoint: Executor added: app-20160527012345-0002/2 on worker-20160526172621-192.168.221.129-44907 (192.168.221.129:44907) with 1 cores
16/05/27 01:25:18 INFO SparkDeploySchedulerBackend: Granted executor ID app-20160527012345-0002/2 on hostPort 192.168.221.129:44907 with 1 cores, 1024.0 MB RAM
16/05/27 01:25:19 INFO AppClient$ClientEndpoint: Executor updated: app-20160527012345-0002/2 is now LOADING
16/05/27 01:25:19 INFO TaskSchedulerImpl: Cancelling stage 0
16/05/27 01:25:19 INFO AppClient$ClientEndpoint: Executor updated: app-20160527012345-0002/2 is now RUNNING
16/05/27 01:25:19 INFO DAGScheduler: ResultStage 0 (collect at AnalysisData.scala:41) failed in 78.565 s
16/05/27 01:25:19 INFO DAGScheduler: Job 0 failed: collect at AnalysisData.scala:41, took 79.459545 s
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 5, 192.168.221.129): ExecutorLostFailure (executor 1 lost)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1280)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1268)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1267)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1267)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:697)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:697)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1493)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1455)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1444)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:567)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1813)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1826)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1839)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1910)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:905)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:306)
at org.apache.spark.rdd.RDD.collect(RDD.scala:904)
at com.ghost.scala.AnalysisData$.main(AnalysisData.scala:41)
at com.ghost.scala.AnalysisData.main(AnalysisData.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:672)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
16/05/27 01:25:19 INFO DAGScheduler: Executor lost: 1 (epoch 1)
16/05/27 01:25:19 INFO BlockManagerMasterEndpoint: Trying to remove executor 1 from BlockManagerMaster.
16/05/27 01:25:19 INFO BlockManagerMasterEndpoint: Removing block manager BlockManagerId(1, 192.168.221.129, 33947)
16/05/27 01:25:19 INFO BlockManagerMaster: Removed 1 successfully in removeExecutor
16/05/27 01:25:19 INFO SparkContext: Invoking stop() from shutdown hook
16/05/27 01:25:19 INFO SparkUI: Stopped Spark web UI at http://192.168.221.130:4040
16/05/27 01:25:19 INFO DAGScheduler: Stopping DAGScheduler
16/05/27 01:25:20 INFO SparkDeploySchedulerBackend: Shutting down all executors
16/05/27 01:25:20 INFO SparkDeploySchedulerBackend: Asking each executor to shut down
16/05/27 01:25:20 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/05/27 01:25:20 INFO MemoryStore: MemoryStore cleared
16/05/27 01:25:20 INFO BlockManager: BlockManager stopped
16/05/27 01:25:20 INFO BlockManagerMaster: BlockManagerMaster stopped
16/05/27 01:25:20 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/05/27 01:25:20 INFO SparkContext: Successfully stopped SparkContext
16/05/27 01:25:20 INFO ShutdownHookManager: Shutdown hook called
16/05/27 01:25:20 INFO ShutdownHookManager: Deleting directory /tmp/spark-2055fbdb-7523-4d19-9eed-fbe4db06691b
16/05/27 01:25:20 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
16/05/27 01:25:20 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
This looks like it might be caused by running out of memory, but it's a tiny job that shouldn't use much memory at all — it just transforms two lines of data from an HDFS file. So I'm puzzled.
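One detail worth noting: the tasks don't die with an OutOfMemoryError — the stack trace shows `java.lang.NoSuchMethodError: scala.runtime.ObjectRef.create`. That method only exists in Scala 2.11+, while the prebuilt Spark 1.5.0 distribution is compiled against Scala 2.10, so a plausible suspect is a Scala version mismatch between how `AnalysisData.jar` was built and the cluster's runtime. A hypothetical sbt sketch (assuming the jar is built with sbt; adjust names to your project) that pins the build to the cluster's Scala version:

```scala
// build.sbt -- hedged sketch: compile against the same Scala version
// as the Spark 1.5.0 cluster (prebuilt binaries use Scala 2.10).
// Compiling with Scala 2.11 instead can trigger NoSuchMethodError on
// scala.runtime.ObjectRef.create at runtime, since that factory method
// was only introduced in the Scala 2.11 runtime library.
scalaVersion := "2.10.4"

// %% appends the Scala binary version (_2.10) to the artifact name,
// so spark-core resolves to spark-core_2.10. "provided" keeps Spark's
// own jars out of the assembly, since the cluster supplies them.
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.5.0" % "provided"
```

This is only a hypothesis based on the trace, not a confirmed diagnosis — checking which Scala version the jar was actually compiled with would verify it.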