Help wanted: "Unknown protocol to job tracker" error
Experts and gurus: I'm a complete beginner just starting to learn Hadoop, and I need help. I set up Hadoop 1.0.4 on an Ubuntu virtual machine; running the bundled wordcount example from the command line on the VM works fine.
Debugging wordcount from Eclipse on my local Win7 machine fails (I have verified that the ports are configured correctly):
Exception in thread "main" org.apache.hadoop.ipc.RemoteException: java.io.IOException: Unknown protocol to job tracker: org.apache.hadoop.hdfs.protocol.ClientProtocol
at org.apache.hadoop.mapred.JobTracker.getProtocolVersion(JobTracker.java:344)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
at org.apache.hadoop.ipc.Client.call(Client.java:1070)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
at com.sun.proxy.$Proxy1.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1386)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1404)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:254)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:187)
at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.addInputPath(FileInputFormat.java:372)
at org.apache.hadoop.examples.WordCount.main(WordCount.java:75)
Please advise, I'm close to despair and about to give up...
Bumping my own thread:
From Eclipse I can see the HDFS file system and can create and delete HDFS files directly in Eclipse.
On Ubuntu, all five Hadoop daemons are confirmed to be running normally.
Posts online say this is a port misconfiguration, but I have checked carefully and the ports are definitely correct, so I don't know how to fix it.
yangzong913 posted on 2014-12-5 10:15:
Bumping my own thread:
From Eclipse I can see the HDFS file system and can create and delete HDFS files directly in Eclipse.
On Ubuntu, all the Hadoop...
Still looking for an answer!!!! Bumping...

You must set the following before Job job = new Job(conf, "InvertedIndex");:
FileSystem.setDefaultUri(conf, new URI("hdfs://gucas-s2:9000")); // set the DFS URI dynamically
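A minimal sketch of that advice (the host gucas-s2 and the job name "InvertedIndex" come from the reply above; substitute your own NameNode address). Setting the default filesystem URI on the Configuration before constructing the Job keeps the HDFS client from connecting to the wrong address; an HDFS client that ends up talking to the JobTracker port gets exactly the "Unknown protocol to job tracker: ...ClientProtocol" error quoted at the top of this thread.

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.mapreduce.Job;

public class SubmitFromEclipse {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Point the client at the NameNode BEFORE constructing the Job,
        // so HDFS paths resolve against the cluster and not a stale default.
        FileSystem.setDefaultUri(conf, new URI("hdfs://gucas-s2:9000"));
        // JobTracker address (Hadoop 1.x); note it is a different port
        // from the NameNode.
        conf.set("mapred.job.tracker", "gucas-s2:9001");
        Job job = new Job(conf, "InvertedIndex");
        // ... set jar, mapper, reducer and input/output paths as usual, then:
        // System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}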
I suggest you switch to 2.0. Post your configuration so we can take a look.
Many thanks to the two posters above for your attention and replies!!!
My configured parameters work if I replace the machine name with the IP address:
Originally: hdfs://master:9000/user/yangha/input
Changed to: hdfs://192.168.106.141:9000/user/yangha/input
But then a path-not-found error appeared: HDFS kept resolving to the local path d:/workspace/wordcount. So before Job job = new Job(conf, "word count");
I added: FileSystem.setDefaultUri(conf, new URI("hdfs://192.168.106.141:9000"));
and got a new error:
java.lang.RuntimeException: java.lang.ClassNotFoundException: org.apache.hadoop.examples.WordCount$TokenizerMapper
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:867)
at org.apache.hadoop.mapreduce.JobContext.getMapperClass(JobContext.java:199)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:719)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
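For reference, and not a confirmed diagnosis for this thread: the stack trace shows the exception is thrown in the task JVM on the cluster (org.apache.hadoop.mapred.Child), and a task JVM can only load job classes that were shipped to it inside a jar; submitting straight from Eclipse does not package one by itself. A minimal sketch of attaching a pre-built jar explicitly (the jar path is hypothetical; export the Eclipse project to a jar first):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class JarAttachSketch {
    public static Job createJob() throws Exception {
        Configuration conf = new Configuration();
        // Ship the project's classes to the task nodes. The path below is
        // a placeholder for a jar exported from the Eclipse project.
        conf.set("mapred.jar", "D:/workspace/wordcount/wordcount.jar");
        Job job = new Job(conf, "word count");
        // job.setJarByClass(WordCount.class) also works, but only when the
        // class is already being loaded from a jar rather than a classes dir.
        return job;
    }
}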
I've searched all over for this error too; the explanation given is that the Eclipse version doesn't match the Hadoop version.
My Eclipse is 4.4.1 and my Hadoop is 1.0.4.
I'm truly getting desperate; why is it so hard to run the examples from Eclipse? I'm attaching my configuration below, hoping an expert can help. Many thanks:
core-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://192.168.106.141:9000</value>
<final>true</final>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/yangha/hadoop-1.0.4/tmp</value>
<description>A base for other temporary directories</description>
</property>
</configuration>
mapred-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>192.168.106.141:9001</value>
</property>
</configuration>
hdfs-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.name.dir</name>
<value>/home/yangha/hadoop-1.0.4/name</value>
<final>true</final>
</property>
<property>
<name>dfs.data.dir</name>
<value>/home/yangha/hadoop-1.0.4/data</value>
<final>true</final>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
<final>true</final>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
<description>
If "true", enable permission checking in HDFS.
If "false", permission checking is turned off,
but all other behavior is unchanged.
Switching from one parameter value to the other does not change the mode,
owner or group of files or directories.
</description>
</property>
</configuration>
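For reference, a client-side Configuration matching the files above would look like this sketch (host and ports copied from the core-site.xml and mapred-site.xml just posted):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class ClientConfSketch {
    public static Configuration create() throws Exception {
        Configuration conf = new Configuration();
        // fs.default.name from core-site.xml
        FileSystem.setDefaultUri(conf, new URI("hdfs://192.168.106.141:9000"));
        // mapred.job.tracker from mapred-site.xml
        conf.set("mapred.job.tracker", "192.168.106.141:9001");
        return conf;
    }
}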
I installed Eclipse 3.2.1 on the Ubuntu Hadoop server itself and ran wordcount from Eclipse following the usual procedure; the error is the same as when
running wordcount from Win7 over a remote connection to Hadoop:
(Same stack trace as above: java.lang.RuntimeException: java.lang.ClassNotFoundException: org.apache.hadoop.examples.WordCount$TokenizerMapper)
So the version-mismatch explanation doesn't seem to hold. Please help, please help, please help...

yangzong913 posted on 2014-12-5 15:16:
I installed Eclipse 3.2.1 on the Ubuntu Hadoop server itself and ran wordcount from Eclipse following the usual procedure; the error is the same as when
running from Win7 over a remo...
Make sure your local Hadoop version matches the server's; that is, the Hadoop version in your development environment must be identical to the cluster's.
Your development environment may use the Eclipse plugin or not, but either way it has to reference the Hadoop jars, and those referenced jars must be the same version as the cluster's.
For details, see:
hadoop开发方式总结及操作指导 (a summary of Hadoop development approaches, with operation guide)
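One quick way to act on this advice: print the Hadoop version that the development classpath actually resolves and compare it with the output of "hadoop version" on the cluster. A minimal sketch:

import org.apache.hadoop.util.VersionInfo;

public class PrintHadoopVersion {
    public static void main(String[] args) {
        // Should print 1.0.4 if the Eclipse build path references the same
        // hadoop-core jar as the 1.0.4 cluster.
        System.out.println("Client-side Hadoop version: " + VersionInfo.getVersion());
    }
}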