Hadoop YARN solution:
1. Set the classpath in the job submission command
[mw_shl_code=bash,true]$ export HADOOP_CLASSPATH="/home/cluster/apps/hbase/lib/hbase-protocol-0.98.1-cdh5.1.0.jar"
$ ./hadoop-2.2.0/bin/hadoop --config /home/stack/conf_hadoop/ jar ./hbase/hbase-assembly/target/hbase-0.99.0-SNAPSHOT-job.jar org.apache.hadoop.hbase.mapreduce.RowCounter usertable[/mw_shl_code]
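Before submitting, it can help to confirm that the jar really contains the protobuf-related classes the job otherwise fails to find (this is the missing-class problem tracked in HBASE-10304, linked below). A minimal check, assuming the same jar path as in the command above:
[mw_shl_code=bash,true]# List the protobuf-related classes packaged in hbase-protocol (same path as above)
$ jar tf /home/cluster/apps/hbase/lib/hbase-protocol-0.98.1-cdh5.1.0.jar | grep protobuf
# Confirm the export is visible to the shell that runs the hadoop command
$ echo $HADOOP_CLASSPATH[/mw_shl_code]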
2. Add HADOOP_CLASSPATH to the Linux environment variables
Add the following line to ~/.bashrc, ~/.bash_profile, or /etc/profile, so that it is set in the Linux environment for every session:
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/home/cluster/apps/hbase/lib/hbase-protocol-0.98.1-cdh5.1.0.jar
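For example, the line can be appended to ~/.bashrc and reloaded without logging out; a minimal sketch using the same jar path as above:
[mw_shl_code=bash,true]# Append the export to ~/.bashrc (single quotes keep $HADOOP_CLASSPATH unexpanded in the file)
$ echo 'export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:/home/cluster/apps/hbase/lib/hbase-protocol-0.98.1-cdh5.1.0.jar' >> ~/.bashrc
$ source ~/.bashrc
$ echo $HADOOP_CLASSPATH[/mw_shl_code]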
Spark solution:
1. Set the classpath in the spark-submit command
Add spark.driver.extraClassPath and spark.executor.extraClassPath via --conf:
[mw_shl_code=bash,true]spark-submit --class com.umeng.dp.yuliang.play.HBaseToES \
  --master yarn-cluster \
  --conf "spark.driver.extraClassPath=/home/cluster/apps/hbase/lib/hbase-protocol-0.98.1-cdh5.1.0.jar" \
  --conf "spark.executor.extraClassPath=/home/cluster/apps/hbase/lib/hbase-protocol-0.98.1-cdh5.1.0.jar" \
  --jars /home/cluster/apps/hbase/lib/hbase-protocol-0.98.1-cdh5.1.0.jar \
  ScalaMR-0.0.1-jar-with-dependencies.jar[/mw_shl_code]
2. Add the following settings to the $SPARK_HOME/conf/spark-defaults.conf file
[mw_shl_code=bash,true]spark.driver.extraClassPath /home/cluster/apps/hbase/lib/hbase-protocol-0.98.1-cdh5.1.0.jar
spark.executor.extraClassPath /home/cluster/apps/hbase/lib/hbase-protocol-0.98.1-cdh5.1.0.jar[/mw_shl_code]
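With these two entries in spark-defaults.conf, every submitted job picks up the jar automatically, so the per-job --conf flags from option 1 can be dropped. A sketch based on the submit command above (--jars is kept as in the original command so the jar is still shipped with the application):
[mw_shl_code=bash,true]# The extraClassPath settings now come from spark-defaults.conf
spark-submit --class com.umeng.dp.yuliang.play.HBaseToES \
  --master yarn-cluster \
  --jars /home/cluster/apps/hbase/lib/hbase-protocol-0.98.1-cdh5.1.0.jar \
  ScalaMR-0.0.1-jar-with-dependencies.jar[/mw_shl_code]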
Reference: stark_summer
English-language solution:
https://issues.apache.org/jira/browse/HBASE-10304