Last edited by Kevin517 on 2016-11-5 21:11
Could someone please explain how to submit a job to a Hadoop cluster from Eclipse?
All I know so far is to put the Hadoop xml config files into the Eclipse project, then export a JAR,
and then run it as a Java Application.
It hasn't worked for me, and I haven't found a post online that I could actually follow. Forgive my ignorance; any pointers would be appreciated.
I have a few questions:
1. Before exporting the JAR, how should the configuration be written? Specifically this block:
Configuration conf = new Configuration();
// conf.set("fs.defaultFS", "hdfs://192.168.100.16:9000");  // tried this, currently commented out
conf.set("mapred.jar", "D:\\pagerank.jar");  // path to the exported JAR on my machine
Job job = Job.getInstance(conf);
job.setJarByClass(ToHeavy.class);
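Judging from the stack trace below, the input path resolves to file:/ToHeavy/input, so I suspect the job is running against the local filesystem because fs.defaultFS is commented out. For reference, here is my guess at what a remote-submission driver might look like (the addresses, paths, and class name come from my own setup and are placeholders, not a verified recipe):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class ToHeavyDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Point at the cluster's NameNode instead of the local filesystem (placeholder address)
        conf.set("fs.defaultFS", "hdfs://192.168.100.16:9000");
        // Submit to YARN rather than running in the LocalJobRunner
        conf.set("mapreduce.framework.name", "yarn");
        conf.set("yarn.resourcemanager.hostname", "192.168.100.16");
        // Reportedly needed when submitting from Windows to a Linux cluster
        conf.set("mapreduce.app-submission.cross-platform", "true");
        // Newer name for mapred.jar: the pre-built JAR that gets shipped to the cluster
        conf.set("mapreduce.job.jar", "D:\\pagerank.jar");

        Job job = Job.getInstance(conf);
        job.setJarByClass(ToHeavy.class);
        // With fs.defaultFS set, these paths are resolved on HDFS, not the local disk
        FileInputFormat.addInputPath(job, new Path("/ToHeavy/input"));
        FileOutputFormat.setOutputPath(job, new Path("/ToHeavy/output"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

(This needs the Hadoop client libraries on the classpath and a reachable cluster, so I can't confirm it end to end.)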
2. Do the Hadoop .xml config files need to be placed in the project directory?
core-site.xml
hdfs-site.xml
mapred-site.xml
yarn-site.xml
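My understanding is that if these files are on the project classpath, Configuration loads them automatically, so the values don't have to be set in code. As an example of what I mean, a minimal core-site.xml would be something like (NameNode address is a placeholder from my setup):

```xml
<!-- core-site.xml: picked up automatically when it is on the classpath -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.100.16:9000</value>
  </property>
</configuration>
```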
3. In JAR Export, under "Select the resources to export:", which items should be checked?
4. I saw online that you need to specify the Main class entry point. What does that mean exactly?
5. Can the exported JAR be uploaded to the server and run from the command line? If not, how does a JAR built for command-line execution differ from this one?
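For question 5, I assume running it on the server would look something like this (the class and path names are from my project; I haven't verified the syntax):

```shell
# Copy the JAR to the cluster, then launch it with the hadoop command.
# If the JAR's manifest already names a Main-Class, the class argument can be omitted.
hadoop jar pagerank.jar com.sxt.my.heavy.ToHeavy /ToHeavy/input /ToHeavy/output
```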
Thanks.
=================================================================
My error is pasted below:
2016-11-05 20:45:14,809 INFO Configuration.deprecation (Configuration.java:warnOnceIfDeprecated(1173)) - session.id is deprecated. Instead, use dfs.metrics.session-id
2016-11-05 20:45:14,814 INFO jvm.JvmMetrics (JvmMetrics.java:init(76)) - Initializing JVM Metrics with processName=JobTracker, sessionId=
2016-11-05 20:45:15,356 WARN mapreduce.JobResourceUploader (JobResourceUploader.java:uploadFiles(64)) - Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
2016-11-05 20:45:15,443 INFO mapreduce.JobSubmitter (JobSubmitter.java:submitJobInternal(252)) - Cleaning up the staging area file:/F:/Kevin/Bigdata/Hadoop_Study/build/test/mapred/staging/Kevin1247070378/.staging/job_local1247070378_0001
Exception in thread "main" org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: file:/ToHeavy/input
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:323)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:265)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:387)
at org.apache.hadoop.mapreduce.JobSubmitter.writeNewSplits(JobSubmitter.java:304)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:321)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:199)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Unknown Source)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
at com.sxt.my.heavy.ToHeavy.main(ToHeavy.java:79)