This post was last edited by SuperDove on 2017-2-27 19:48
[mw_shl_code=applescript,true]17/02/27 19:15:29 INFO ui.SparkUI: Stopped Spark web UI at http://192.168.52.130:4040
17/02/27 19:15:29 INFO cluster.YarnClientSchedulerBackend: Shutting down all executors
17/02/27 19:15:29 INFO cluster.YarnClientSchedulerBackend: Asking each executor to shut down
17/02/27 19:15:29 INFO cluster.YarnClientSchedulerBackend: Stopped
17/02/27 19:15:29 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
17/02/27 19:15:29 INFO storage.MemoryStore: MemoryStore cleared
17/02/27 19:15:29 INFO storage.BlockManager: BlockManager stopped
17/02/27 19:15:29 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
17/02/27 19:15:29 WARN metrics.MetricsSystem: Stopping a MetricsSystem that is not running
17/02/27 19:15:29 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
17/02/27 19:15:29 INFO spark.SparkContext: Successfully stopped SparkContext
Exception in thread "main" org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:124)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:64)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:530)
at com.zcy.data.Peoplecount$.main(Peoplecount.scala:25)
at com.zcy.data.Peoplecount.main(Peoplecount.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
17/02/27 19:15:29 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
[/mw_shl_code]
[mw_shl_code=applescript,true]
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Failing this attempt. Failing the application.
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: default
start time: 1488194093530
final status: FAILED
tracking URL: http://master:8088/cluster/app/application_1488180355125_0011
user: zcy
17/02/27 19:15:29 INFO yarn.Client: Deleting staging directory .sparkStaging/application_1488180355125_0011
17/02/27 19:15:29 ERROR spark.SparkContext: Error initializing SparkContext.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:124)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:64)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:530)
at com.zcy.data.Peoplecount$.main(Peoplecount.scala:25)
at com.zcy.data.Peoplecount.main(Peoplecount.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
[/mw_shl_code]
The program code is simple, as follows: [mw_shl_code=applescript,true]val conf = new SparkConf().setAppName("201702271619")
val sc = new SparkContext(conf) // this is line 25 of Peoplecount.scala (Peoplecount.scala:25 in the stack trace)
val txtrdd = sc.textFile("hdfs://master:8020/tmp/download/peoplecount.txt",59)
sc.parallelize(Array(txtrdd.count())).saveAsTextFile("hdfs://master:8020/tmp/download/allcount")[/mw_shl_code]
It runs successfully in standalone mode:
[mw_shl_code=applescript,true]spark-submit --master spark://master:7077 --class com.zcy.data.Peoplecount /home/zcy/data/mvn5.jar[/mw_shl_code]
But running with Spark on YARN fails:
[mw_shl_code=applescript,true]spark-submit --master yarn --deploy-mode client --executor-memory 1024M --class com.zcy.data.Peoplecount /home/zcy/data/mvn5.jar[/mw_shl_code]
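Exit code 143 usually means YARN killed the container, and the container logs should say why. A minimal sketch (assuming the tracking-URL format shown in the failure report above) that extracts the application id and prints the `yarn logs` command to run on the cluster:

```shell
# Derive the application id from the tracking URL in the YARN failure report,
# then print the command that fetches the container logs for that attempt.
url="http://master:8088/cluster/app/application_1488180355125_0011"
app_id="${url##*/}"   # strip everything up to the last '/'
echo "yarn logs -applicationId ${app_id}"
```

Running the printed command on the cluster should show the ApplicationMaster's stderr, which normally explains why the container exited with 143.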
I searched online for this exception, and all I found was:
[mw_shl_code=applescript,true]20. Exception in thread "main" org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
Solution: this exception occurs in yarn-client mode; no solution for now.[/mw_shl_code]
My spark-env.sh configuration is as follows:
[mw_shl_code=applescript,true]export SPARK_HISTORY_OPTS="-Dspark.history.ui.port=7777 -Dspark.history.retainedApplications=30 -Dspark.history.fs.logDirectory=hdfs://master:8020/tmp/sparklog1.6"
export SPARK_JAR=/usr/spark-1.6.3-bin-hadoop2.6.4/lib/spark-assembly-1.6.3-hadoop2.6.4.jar
export SPARK_MASTER_IP=master
export SPARK_MASTER_PORT=7077
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
[/mw_shl_code]My spark-defaults.conf is configured as follows:
[mw_shl_code=applescript,true]spark.eventLog.dir hdfs://master:8020/tmp/sparklog1.6
spark.eventLog.enabled true
spark.ui.port 4040[/mw_shl_code]
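A guess, not a confirmed fix: exit code 143 is often reported when YARN kills a container for exceeding its virtual-memory limit, which can prevent the ApplicationMaster from launching in yarn-client mode. One commonly suggested thing to check is the virtual-memory enforcement in yarn-site.xml (property names as in Hadoop 2.6; the values below are illustrative, not verified for this cluster):

```xml
<!-- yarn-site.xml: relax YARN's virtual-memory enforcement (illustrative values) -->
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>4</value>
</property>
```

After changing these, the NodeManagers need to be restarted for the new limits to take effect.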
Could the experts here please help me figure this out?