While deploying hbase-1.1.4, it hit an error on startup.
The error message from the log:
2016-04-21 04:27:05,504 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: Failed construction of Master: class org.apache.hadoop.hbase.master.HMaster
at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2341)
at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:233)
at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2355)
Caused by: java.net.UnknownHostException: zcq
at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:373)
at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:258)
at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:153)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:602)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:547)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:139)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:89)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2625)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2607)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:368)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
at org.apache.hadoop.hbase.util.FSUtils.getRootDir(FSUtils.java:1002)
at org.apache.hadoop.hbase.regionserver.HRegionServer.<init>(HRegionServer.java:564)
at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:365)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:2336)
... 5 more
[hadoop@h102 logs]$
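From the stack trace, the HDFS client inside HBase seems to be treating zcq as a plain hostname rather than a logical nameservice (the failure goes through createNonHAProxy). As a first sanity check, the nameservice can be probed with the plain Hadoop client on the same node; a rough sketch, assuming the normal Hadoop environment is loaded:

# should print "zcq" and then list the HDFS root without an UnknownHostException
hdfs getconf -confKey dfs.nameservices
hdfs dfs -ls hdfs://zcq/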
This is my hbase-site.xml configuration:
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://zcq/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>h101:2181,h102:2181,h103:2181</value>
</property>
</configuration>
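Since hbase.rootdir points at the logical name zcq rather than a host:port, my understanding is that HBase needs the HA client settings (dfs.nameservices, dfs.ha.namenodes.zcq, the per-NameNode RPC addresses, and a failover proxy provider for zcq) visible on its classpath. A quick way to see which conf directory the hbase launcher actually picks up (sketch, run from the HBase install directory):

# the conf directory holding the copied core-site.xml/hdfs-site.xml should appear near the front
bin/hbase classpath | tr ':' '\n' | grep -i conf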
This is my Hadoop core-site.xml configuration:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://zcq/</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/app/hadoop-2.7.1/tmp</value>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>h101:2181,h102:2181,h103:2181</value>
</property>
</configuration>
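Since automatic failover is enabled on this cluster, it is probably worth ruling out an HA problem on the Hadoop side first; checking the state of the two NameNodes is a quick way to do that (sketch, using the service ids nn1/nn2 from the hdfs-site.xml below):

# one NameNode should report "active" and the other "standby"
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2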
This is my Hadoop hdfs-site.xml configuration:
<configuration>
<!-- Set the HDFS nameservice to zcq; it must match the value in core-site.xml -->
<property>
<name>dfs.nameservices</name>
<value>zcq</value>
</property>
<!-- The zcq nameservice has two NameNodes, nn1 and nn2 -->
<property>
<name>dfs.ha.namenodes.zcq</name>
<value>nn1,nn2</value>
</property>
<!-- RPC address of nn1 -->
<property>
<name>dfs.namenode.rpc-address.zcq.nn1</name>
<value>h101:9000</value>
</property>
<!-- HTTP address of nn1 -->
<property>
<name>dfs.namenode.http-address.zcq.nn1</name>
<value>h101:50070</value>
</property>
<!-- RPC address of nn2 -->
<property>
<name>dfs.namenode.rpc-address.zcq.nn2</name>
<value>h102:9000</value>
</property>
<!-- HTTP address of nn2 -->
<property>
<name>dfs.namenode.http-address.zcq.nn2</name>
<value>h102:50070</value>
</property>
<!-- Where the NameNode metadata (edit log) is stored on the JournalNodes -->
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://h101:8485;h102:8485;h103:8485/zcq</value>
</property>
<!-- Local disk directory where the JournalNodes store their data -->
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/home/hadoop/app/hadoop-2.7.1/journaldata</value>
</property>
<!-- Enable automatic NameNode failover -->
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
<!-- Failover proxy provider implementation used by clients for automatic failover -->
<property>
<name>dfs.client.failover.proxy.provider.ns1</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<!-- Fencing methods; multiple methods are separated by newlines, one per line -->
<property>
<name>dfs.ha.fencing.methods</name>
<value>
sshfence
shell(/bin/true)
</value>
</property>
<!-- sshfence requires passwordless SSH -->
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/home/hadoop/.ssh/id_rsa</value>
</property>
<!-- Timeout for the sshfence method -->
<property>
<name>dfs.ha.fencing.ssh.connect-timeout</name>
<value>30000</value>
</property>
<!-- Number of HDFS replicas -->
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
</configuration>
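My (possibly incomplete) understanding of how the client resolves hdfs://zcq is: it only walks the HA path (dfs.ha.namenodes.zcq, then the per-NameNode RPC addresses) when a failover proxy provider is configured for that exact nameservice; otherwise it falls back to treating zcq as a hostname and fails with UnknownHostException. I'm not sure whether the suffix of dfs.client.failover.proxy.provider has to be the nameservice name (zcq) rather than ns1. A sketch of checking what the client actually sees for these keys:

# each should print a value; if the last key is missing, the client falls back to hostname resolution
hdfs getconf -confKey dfs.ha.namenodes.zcq
hdfs getconf -confKey dfs.namenode.rpc-address.zcq.nn1
hdfs getconf -confKey dfs.client.failover.proxy.provider.zcq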
I have already copied Hadoop's core-site.xml and hdfs-site.xml into HBase's conf directory (a check on the copies is sketched after the listing below):
[hadoop@h102 conf]$ ll
total 48
-rw-rw-r--. 1 hadoop hadoop 1204 Apr 20 07:28 core-site.xml
-rw-r--r--. 1 hadoop hadoop 1811 Mar 16 20:55 hadoop-metrics2-hbase.properties
-rw-r--r--. 1 hadoop hadoop 4537 Mar 16 20:55 hbase-env.cmd
-rw-r--r--. 1 hadoop hadoop 7474 Apr 20 07:41 hbase-env.sh
-rw-r--r--. 1 hadoop hadoop 2257 Mar 16 20:55 hbase-policy.xml
-rw-r--r--. 1 hadoop hadoop 1330 Apr 20 07:58 hbase-site.xml
-rw-rw-r--. 1 hadoop hadoop 3007 Apr 20 07:28 hdfs-site.xml
-rw-r--r--. 1 hadoop hadoop 4339 Mar 16 20:55 log4j.properties
-rw-r--r--. 1 hadoop hadoop 25 Apr 20 07:29 regionservers
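As a sanity check that these copies are in sync with the Hadoop originals, something like the following could be run from the HBase conf directory (the Hadoop config path is assumed to be the default etc/hadoop under my install; adjust as needed):

# no output means the copies are identical to the originals
diff /home/hadoop/app/hadoop-2.7.1/etc/hadoop/core-site.xml core-site.xml
diff /home/hadoop/app/hadoop-2.7.1/etc/hadoop/hdfs-site.xml hdfs-site.xml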
Why am I still getting this error? Can anyone help?