Environment: CentOS 7 + Hadoop 2.6
SSH works fine in every direction: master to slaves and slaves back to master.
Firewall: SELinux and iptables are both disabled.
The hosts files are correct; all 127.0.0.1 hostname entries were removed, and all three VMs keep only:
192.168.1.200 master
192.168.1.201 slave1
192.168.1.202 slave2
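A quick, self-contained sanity check for the hosts setup above; it uses a throwaway copy of the entries so it can run anywhere. On a real node you would point HOSTS at /etc/hosts instead.

```shell
# Throwaway copy of the /etc/hosts entries from the post.
HOSTS=$(mktemp)
cat > "$HOSTS" <<'EOF'
192.168.1.200 master
192.168.1.201 slave1
192.168.1.202 slave2
EOF
# Each hostname must resolve to its LAN IP, never to 127.0.0.1: a node that
# resolves "master" to loopback will try to register with itself instead of
# the real NameNode.
ip_master=$(awk '$2 == "master" {print $1}' "$HOSTS")
ip_slave1=$(awk '$2 == "slave1" {print $1}' "$HOSTS")
echo "master -> $ip_master, slave1 -> $ip_slave1"
rm -f "$HOSTS"
```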
-------------------------------------------------------
start-dfs.sh
All processes start successfully:
master:
65456 Jps
64881 NameNode
65057 DataNode
65276 SecondaryNameNode
slave:
3607 DataNode
3675 Jps
-----------------------------------------------------
In the Hadoop config directory, the slaves file contains:
master
slave1
slave2
----------------------------------------------
netstat -anp|grep 9000
tcp 0 0 192.168.1.200:9000 0.0.0.0:* LISTEN 64881/java
tcp 0 0 192.168.1.200:9000 192.168.1.200:42846 ESTABLISHED 64881/java
tcp 0 0 192.168.1.200:42853 192.168.1.200:9000 TIME_WAIT -
tcp 0 0 192.168.1.200:42846 192.168.1.200:9000 ESTABLISHED 65057/java
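The listing shows the NameNode (pid 64881) bound to 192.168.1.200:9000 rather than 127.0.0.1, which is what remote DataNodes need. A small illustrative check of the bind address, using the line above as sample input:

```shell
# Sample line copied from the netstat output above; on a live node you would
# pipe `netstat -anp | grep 9000` through the same awk instead.
line='tcp 0 0 192.168.1.200:9000 0.0.0.0:* LISTEN 64881/java'
# Field 4 is the local address; strip the port to get the bind address.
bind_addr=$(echo "$line" | awk '{print $4}' | cut -d: -f1)
echo "NameNode listens on: $bind_addr"
# If this printed 127.0.0.1, the slaves could never reach port 9000 remotely.
```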
-----------------------------------------------
Problem description:
On the monitoring page at http://master:50070/, Live Nodes is 1.
Only the master's DataNode shows up; the two slaves have running DataNode processes but never connect to the master.
Also, on the slaves, no current directory is ever created under dfs/data.
The slave DataNode log reads:
2015-08-22 21:44:18,358 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering> (Datanode Uuid unassigned) service to master/192.168.1.200:9000 starting to offer service
2015-08-22 21:44:18,369 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2015-08-22 21:44:18,369 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
------------- the problem starts from here ---------------------------------------------------
2015-08-22 21:44:19,478 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/192.168.1.200:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2015-08-22 21:44:20,479 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: master/192.168.1.200:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
I compared this with the master's DataNode log: everything above the divider is identical, and below it things go wrong.
I have run hadoop namenode -format several times,
wiping dfs/name, dfs/data, logs/, and tmp/ each time, but it still fails.
I have read the older threads describing the same symptom, yet still cannot solve it.
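For reference, the wipe has to happen on every node before the reformat, otherwise a stale VERSION file with the old clusterID can keep DataNodes from registering. A hedged sketch of that sequence (directory names assumed from the hadoop.tmp.dir layout in core-site.xml; demonstrated on a throwaway directory so it is safe to run as-is):

```shell
# Throwaway stand-in for /home/hs/hadoop/tmp so this demo touches nothing real.
BASE=$(mktemp -d)
mkdir -p "$BASE/dfs/name/current" "$BASE/dfs/data/current" "$BASE/logs"
# On EVERY node: stop HDFS, then remove the old name/data dirs and logs so no
# stale clusterID survives the reformat.
rm -rf "$BASE/dfs" "$BASE/logs"
leftover=$(ls "$BASE" | wc -l)
echo "leftover entries: $leftover"
# Then, ONCE and only on the master:  hdfs namenode -format
rm -rf "$BASE"
```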
The config files are as follows. core-site.xml:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
<description>The name of the default file system</description>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hs/hadoop/tmp</value>
<description>A base of other temporary directories</description>
</property>
<property>
<name>io.file.buffer.size</name>
<value>8192</value>
</property>
</configuration>
hdfs-site.xml:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
<description>The name of the default file system</description>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hs/hadoop/tmp</value>
<description>A base of other temporary directories</description>
</property>
<property>
<name>io.file.buffer.size</name>
<value>8192</value>
</property>
</configuration>
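One thing worth noting: the hdfs-site.xml above repeats the core-site.xml properties. fs.defaultFS and hadoop.tmp.dir belong in core-site.xml only; hdfs-site.xml normally carries dfs.* settings. For comparison, a minimal hdfs-site.xml for Hadoop 2.6 usually looks something like this (paths assumed from the hadoop.tmp.dir above, replication of 2 assumed for a 3-node cluster):

```xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///home/hs/hadoop/tmp/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///home/hs/hadoop/tmp/dfs/data</value>
  </property>
</configuration>
```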
Could anyone take a look? Much appreciated.