How do you manage dependency packages properly? Why does this problem come up in the first place?
For his question, see my reply; it should be the answer he is looking for.
Marking this for later. 那年夏天110 posted on 2016-1-14 17:44
For his question, see my reply; it should be the answer he is looking for.
OK, thanks.
This problem has been bothering me too (I added the jar, but it still reports an error):
spark-1.5.0-bin-hadoop2.6]$ ./bin/spark-submit --class org.apache.spark.examples.SparkPi \
> --jars/opt/BigData/inst/hadoop-2.6.0/share/hadoop/common/hadoop-common-2.6.0.jar \
> --master yarn \
> --deploy-mode client \
> --driver-memory 1g \
> --executor-memory 1g \
> --executor-cores 1 \
> --queue thequeue \
> lib/spark-examples*.jar \
> 2
Error message:
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.conf.Configuration.addDeprecations([Lorg/apache/hadoop/conf/Configuration$DeprecationDelta;)V
at org.apache.hadoop.yarn.conf.YarnConfiguration.addDeprecatedKeys(YarnConfiguration.java:79)
at org.apache.hadoop.yarn.conf.YarnConfiguration.<clinit>(YarnConfiguration.java:73)
at org.apache.spark.deploy.yarn.YarnSparkHadoopUtil.newConfiguration(YarnSparkHadoopUtil.scala:61)
at org.apache.spark.deploy.SparkHadoopUtil.<init>(SparkHadoopUtil.scala:52)
at org.apache.spark.deploy.yarn.YarnSparkHadoopUtil.<init>(YarnSparkHadoopUtil.scala:46)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at java.lang.Class.newInstance(Class.java:374)
at org.apache.spark.deploy.SparkHadoopUtil$.liftedTree1$1(SparkHadoopUtil.scala:389)
at org.apache.spark.deploy.SparkHadoopUtil$.<init>(SparkHadoopUtil.scala:387)
at org.apache.spark.deploy.SparkHadoopUtil$.<clinit>(SparkHadoopUtil.scala)
at org.apache.spark.util.Utils$.getSparkOrYarnConfig(Utils.scala:2042)
at org.apache.spark.storage.BlockManager.<init>(BlockManager.scala:97)
at org.apache.spark.storage.BlockManager.<init>(BlockManager.scala:173)
at org.apache.spark.SparkEnv$.create(SparkEnv.scala:345)
at org.apache.spark.SparkEnv$.createDriverEnv(SparkEnv.scala:193)
at org.apache.spark.SparkContext.createSparkEnv(SparkContext.scala:276)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:441)
at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:29)
at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:672)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Could you help take a look? Thanks! mobilefactory8 posted on 2016-2-24 16:52
This problem has been bothering me too (I added the jar, but it still reports an error):
spark-1.5.0-bin-hadoop2.6]$ ./bin/spark-submit...
It does not fail for no reason.
First check whether the path is correct,
then check whether the jar itself is the right one.
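A minimal check along those lines, assuming the paths quoted in the post above (adjust them to your installation):

# 1. Confirm the jar passed to --jars actually exists at that path
#    (note: --jars needs a space before its argument).
ls /opt/BigData/inst/hadoop-2.6.0/share/hadoop/common/hadoop-common-2.6.0.jar
# 2. Confirm the method from the NoSuchMethodError is really present in that jar;
#    if grep prints nothing, an older hadoop-common is being picked up on the classpath.
javap -cp /opt/BigData/inst/hadoop-2.6.0/share/hadoop/common/hadoop-common-2.6.0.jar \
      org.apache.hadoop.conf.Configuration | grep addDeprecations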
b). The allowed number of failed nodes (spark.yarn.max.worker.failures) is twice the number of executors, with a minimum of 3.
OP, how is this number calculated? The allowed number of failed nodes is usually said to be half the total number of nodes; how should this be understood?
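A minimal sketch of that rule, assuming the "twice the executor count, minimum 3" default quoted above; the shell variables and executor count below are only illustrative:

# allowed failures = max(2 * numExecutors, 3): 6 executors gives 12, 1 executor gives 3
NUM_EXECUTORS=6
MAX_FAILURES=$(( 2 * NUM_EXECUTORS > 3 ? 2 * NUM_EXECUTORS : 3 ))
echo "$MAX_FAILURES"   # prints 12
./bin/spark-submit --class org.apache.spark.examples.SparkPi \
  --master yarn \
  --deploy-mode client \
  --num-executors "$NUM_EXECUTORS" \
  --conf spark.yarn.max.worker.failures="$MAX_FAILURES" \
  lib/spark-examples*.jar \
  10

The max() in the quoted rule simply keeps a floor of 3, so that jobs with very few executors still tolerate a few failures.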