<!-- HA Configuration Basic -->
<property>
<name>dfs.nameservices</name>
<value>test-hadoop-cluster</value>
</property>
<property>
<name>dfs.ha.namenodes.test-hadoop-cluster</name>
<value>nn1,nn2</value>
</property>
<property>
<name>dfs.namenode.rpc-address.test-hadoop-cluster.nn1</name>
<value>sh-b0-sv0375.sh.idc.yunyun.com:8020</value>
</property>
<property>
<name>dfs.namenode.rpc-address.test-hadoop-cluster.nn2</name>
<value>sh-b0-sv0376.sh.idc.yunyun.com:8020</value>
</property>
<property>
<name>dfs.namenode.http-address.test-hadoop-cluster.nn1</name>
<value>sh-b0-sv0375.sh.idc.yunyun.com:50070</value>
</property>
<property>
<name>dfs.namenode.http-address.test-hadoop-cluster.nn2</name>
<value>sh-b0-sv0376.sh.idc.yunyun.com:50070</value>
</property>
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://sh-b0-sv0376.sh.idc.yunyun.com:8485;sh-b0-sv0372.sh.idc.yunyun.com:8485;sh-d0-sv0481.sh.idc.yunyun.com:8485/test-hadoop-cluster</value>
</property>
<property>
<name>dfs.client.failover.proxy.provider.test-hadoop-cluster</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/home/test-hadoop/.ssh/id_rsa</value>
</property>
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/mnt/disk0/test-hadoop/data/journal/test-hadoop-cluster</value>
</property>
<!-- HA Configuration Automatic Failover Zookeeper -->
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>sh-b0-sv0376.sh.idc.yunyun.com:2182,sh-b0-sv0372.sh.idc.yunyun.com:2182,sh-d0-sv0481.sh.idc.yunyun.com:2182</value>
</property>
<!-- HDFS Configuration Basic -->
<property>
<name>dfs.namenode.name.dir</name>
<value>/mnt/disk0/test-hadoop/data/namenode/</value>
</property>
<property>
<name>dfs.namenode.handler.count</name>
<value>100</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/mnt/disk0/test-hadoop/data/datanode/</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.hosts.exclude</name>
<value>${hadoop.distribution.config.path}/excludes</value>
</property>
That is my configuration.
sh-b0-sv0375.sh.idc.yunyun.com (nn1) is the original NameNode, and dynamically decommissioning a node only works on it. To decommission, I first add the node to the ${hadoop.distribution.config.path}/excludes file, then run
hdfs dfsadmin -refreshNodes
on sh-b0-sv0375.sh.idc.yunyun.com. That works, but only this NameNode shows the node as decommissioned; on the standby NameNode the removed node still appears.
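I believe this happens because `hdfs dfsadmin -refreshNodes` only contacts the NameNode it connects to (by default the active one resolved from `fs.defaultFS`), so the standby never rereads the exclude file. A sketch of a workaround, assuming the excludes file exists at the same `dfs.hosts.exclude` path on both NameNode hosts, is to target each NameNode's RPC address explicitly with the generic `-fs` option:

```shell
# Refresh the exclude list on BOTH NameNodes, not just the active one.
# -fs is the generic Hadoop option that selects which filesystem
# (i.e. which NameNode) the dfsadmin command talks to.
hdfs dfsadmin -fs hdfs://sh-b0-sv0375.sh.idc.yunyun.com:8020 -refreshNodes
hdfs dfsadmin -fs hdfs://sh-b0-sv0376.sh.idc.yunyun.com:8020 -refreshNodes
```

Both NameNodes read the exclude file from their own local disk, so the file must be copied (or otherwise synchronized) to both hosts before refreshing.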
I switch HA like this: hdfs haadmin -failover nn2 nn1. (Also, at startup nn2 becomes active by default; can I make nn1 the active one instead?)
When sh-b0-sv0376.sh.idc.yunyun.com (nn2) is the active NameNode, the dynamic decommissioning procedure above has no effect.
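On the "which NameNode is active at startup" question: since `dfs.ha.automatic-failover.enabled` is true in the config above, the active NameNode is chosen by whichever ZKFC acquires the ZooKeeper lock first, so starting nn1's NameNode and ZKFC before nn2's usually makes nn1 active. A manual override is also possible, but with automatic failover enabled it must bypass the ZKFC; a hedged sketch:

```shell
# With automatic failover enabled, a manual transition needs --forcemanual,
# which bypasses the ZKFC's coordination (use with care).
hdfs haadmin -transitionToActive --forcemanual nn1

# Verify which NameNode is active afterwards.
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
```

These commands assume the `nn1`/`nn2` IDs from `dfs.ha.namenodes.test-hadoop-cluster`; they are operational commands against the live cluster, not something that can run standalone.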