Seeking help: error when running /bin/hadoop fs -ls (urgent!)
We are running a fully distributed setup. The master node and the other nodes communicate with each other fine, and the whole framework can be started from the master, so why does running the following command on the master produce an error? Thanks!

# bin/hadoop fs -ls
11/08/21 08:15:51 INFO ipc.Client: Retrying connect to server: /202.206.249.200:9000. Already tried 0 time(s).
11/08/21 08:15:52 INFO ipc.Client: Retrying connect to server: /202.206.249.200:9000. Already tried 1 time(s).
11/08/21 08:15:53 INFO ipc.Client: Retrying connect to server: /202.206.249.200:9000. Already tried 2 time(s).
11/08/21 08:15:54 INFO ipc.Client: Retrying connect to server: /202.206.249.200:9000. Already tried 3 time(s).
11/08/21 08:15:55 INFO ipc.Client: Retrying connect to server: /202.206.249.200:9000. Already tried 4 time(s).
11/08/21 08:15:56 INFO ipc.Client: Retrying connect to server: /202.206.249.200:9000. Already tried 5 time(s).
11/08/21 08:15:57 INFO ipc.Client: Retrying connect to server: /202.206.249.200:9000. Already tried 6 time(s).

Reply: Restart the cluster and try again.

Reply: I have run into this too; the DataNodes could not connect to the NameNode. Turn off the firewall: service iptables stop

Reply: With this kind of error, first confirm that 202.206.249.200:9000 is actually up by running netstat -lpnt on 202.206.249.200. If it is up, then look for a routing or firewall problem: on the machine where you ran hadoop fs -ls, try telnet 202.206.249.200 9000, as sketched below.
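A minimal sketch of those two checks, in the same root-prompt style as the rest of the thread (the grep filter is only a convenience for narrowing the output, not something the reply mandates):

On 202.206.249.200, check whether any process is listening on the NameNode port:
# netstat -lpnt | grep 9000
On the machine where hadoop fs -ls failed, check whether that port is reachable over the network:
# telnet 202.206.249.200 9000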
If the port (that is, the process) is not up, check the logs on 202.206.249.200; if it is up, fix the network instead, for example by disabling the firewall or adding a route.

Original poster: Solved. The cause was /etc/hosts: it contained several IPs all mapped to localhost, which triggered the error.
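For anyone who hits the same /etc/hosts problem, here is a minimal sketch of the fix; the hostname "master" is only an assumed example, not taken from the thread. In the broken file the cluster IP also resolves to localhost, so name resolution can send NameNode traffic to the loopback interface:

Broken:
127.0.0.1       localhost
202.206.249.200 localhost

Fixed:
127.0.0.1       localhost
202.206.249.200 master

After editing /etc/hosts, restart the cluster so the NameNode binds to the correct address.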