
How to Set Up an HBase Development Environment with Eclipse

hyj posted on 2014-3-17 14:30:54 in 实操演练 (Hands-on Practice)
Key points:
1. Which directory's JAR files need to be added?
2. How should hbase-site.xml be handled?
3. Why must the HBase cluster be started before running?

25 comments so far
hyj posted on 2014-3-17 14:35:23
Last edited by hyj on 2014-3-17 14:57

Eclipse version: Kepler Service Release 1
Operating system: Ubuntu 13.04
Hadoop: 2.2.0
HBase: 0.96.0
For source compilation, see:
Compiling HBase with Hadoop 2.x from source on CentOS 6.4

1: Copy the HBase deployment files from the HBase cluster to a directory on the development machine (for example /app/hadoop/hbase096).
2: In Eclipse, create a new Java project named HBase. Open the project properties, go to Libraries -> Add External JARs..., and select the relevant JARs under /app/hadoop/hbase096/lib. If this is only for testing, keep it simple and select all of them.
[Screenshots: 1.png, 2.png]





3: Add a folder named conf to the HBase project and copy the cluster's hbase-site.xml into it. Then, in the project properties, go to Libraries -> Add Class Folder and select the conf directory you just created. (A quick classpath check is sketched below the screenshots.)


[Screenshots: 3.png, 4.png, 5.png]
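A quick way to check that the conf class folder is really on the classpath is to print a couple of the loaded settings. This is only a minimal sketch (the class name ConfCheck is made up; the property keys are standard HBase keys, and the values you should see come from your own hbase-site.xml):

package chapter12;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// Minimal sketch: HBaseConfiguration.create() loads hbase-default.xml plus any
// hbase-site.xml found on the classpath (here, the conf folder added above).
public class ConfCheck {
    public static void main(String[] args) {
        Configuration cfg = HBaseConfiguration.create();
        // If these print your cluster's values, the conf folder is wired up correctly;
        // if they print built-in defaults (e.g. localhost), the class folder was not added.
        System.out.println("hbase.zookeeper.quorum = " + cfg.get("hbase.zookeeper.quorum"));
        System.out.println("hbase.rootdir = " + cfg.get("hbase.rootdir"));
    }
}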




4: In the HBase project, add a package named chapter12 and a class named HBaseTestCase, then copy in the code from Chapter 12 of Lu Jiaheng's Hadoop in Action (2nd edition) and adapt it as needed. The code is as follows:
package chapter12;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseTestCase {
    // Static configuration; HBaseConfiguration.create() picks up hbase-site.xml from the classpath
    static Configuration cfg = HBaseConfiguration.create();

    // Create a table via HBaseAdmin and HTableDescriptor
    public static void create(String tablename, String columnFamily) throws Exception {
        HBaseAdmin admin = new HBaseAdmin(cfg);
        if (admin.tableExists(tablename)) {
            System.out.println("table exists!");
            System.exit(0);
        } else {
            HTableDescriptor tableDesc = new HTableDescriptor(tablename);
            tableDesc.addFamily(new HColumnDescriptor(columnFamily));
            admin.createTable(tableDesc);
            System.out.println("create table success!");
        }
    }

    // Insert one row into an existing table via HTable and Put
    public static void put(String tablename, String row, String columnFamily, String column, String data) throws Exception {
        HTable table = new HTable(cfg, tablename);
        Put p1 = new Put(Bytes.toBytes(row));
        p1.add(Bytes.toBytes(columnFamily), Bytes.toBytes(column), Bytes.toBytes(data));
        table.put(p1);
        System.out.println("put '" + row + "','" + columnFamily + ":" + column + "','" + data + "'");
    }

    // Read a single row via Get
    public static void get(String tablename, String row) throws IOException {
        HTable table = new HTable(cfg, tablename);
        Get g = new Get(Bytes.toBytes(row));
        Result result = table.get(g);
        System.out.println("Get: " + result);
    }

    // Print all rows of a table via Scan
    public static void scan(String tablename) throws Exception {
        HTable table = new HTable(cfg, tablename);
        Scan s = new Scan();
        ResultScanner rs = table.getScanner(s);
        for (Result r : rs) {
            System.out.println("Scan: " + r);
        }
    }

    // Disable and delete a table if it exists
    public static boolean delete(String tablename) throws IOException {
        HBaseAdmin admin = new HBaseAdmin(cfg);
        if (admin.tableExists(tablename)) {
            try {
                admin.disableTable(tablename);
                admin.deleteTable(tablename);
            } catch (Exception ex) {
                ex.printStackTrace();
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        String tablename = "hbase_tb";
        String columnFamily = "cf";
        try {
            HBaseTestCase.create(tablename, columnFamily);
            HBaseTestCase.put(tablename, "row1", columnFamily, "cl1", "data");
            HBaseTestCase.get(tablename, "row1");
            HBaseTestCase.scan(tablename);
/*          if (HBaseTestCase.delete(tablename))
                System.out.println("Delete table:" + tablename + " success!");
*/
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
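One thing the code above skips is closing the HTable (and HBaseAdmin) handles when it is done with them. Below is a small, hedged sketch of the same put/get pattern with explicit cleanup; CleanupExample is just an illustrative name, and it assumes hbase_tb already exists:

package chapter12;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch only: same put/get as HBaseTestCase, but the table handle is always
// closed, even if an operation throws.
public class CleanupExample {
    public static void main(String[] args) throws Exception {
        Configuration cfg = HBaseConfiguration.create();
        HTable table = new HTable(cfg, "hbase_tb");   // assumes the table already exists
        try {
            Put p = new Put(Bytes.toBytes("row1"));
            p.add(Bytes.toBytes("cf"), Bytes.toBytes("cl1"), Bytes.toBytes("data"));
            table.put(p);
            System.out.println("Get: " + table.get(new Get(Bytes.toBytes("row1"))));
        } finally {
            table.close();   // releases client-side resources for this table
        }
    }
}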
5: Set up a run configuration and run the program. Start the HBase cluster before running: the client only reads the cluster's addresses from the configuration, so if HBase and ZooKeeper are not actually running, connections are refused and every table operation fails.


[Screenshot: 6.png]




6: Verify: open hbase shell and check; the table hbase_tb has been created.


[Screenshot: 7.png]
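If you prefer to verify from Java instead of hbase shell, a hedged sketch along these lines lists the tables the client can see (ListTables is just an illustrative class name):

package chapter12;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.client.HBaseAdmin;

// Sketch: print all table names visible to the client; hbase_tb should appear
// after HBaseTestCase has run successfully.
public class ListTables {
    public static void main(String[] args) throws Exception {
        HBaseAdmin admin = new HBaseAdmin(HBaseConfiguration.create());
        try {
            for (HTableDescriptor desc : admin.listTables()) {
                System.out.println(desc.getNameAsString());
            }
        } finally {
            admin.close();
        }
    }
}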















nettman posted on 2014-3-17 15:12:34


Let me add some related material here:

Versions: hadoop-1.2.1, hbase-0.94.12, zookeeper-3.4.5

Create a Java project (any name). The JARs it needs are shown in the image below; also add a folder to the project and set it as a class folder.

[Screenshot: 20131103102740703.jpg]

The protobuf JAR is Google's Protocol Buffers (message serialization) library; don't forget it.


The following is related reference material:


When we use the Java API to work with HBase, we may run into ClassNotFoundException and similar errors, but we usually don't want to import every JAR under HBase's lib directory into the project, because many of them are never used.

Here is a summary of the relevant HBase JAR dependencies.


1. If you only use the Java API for basic create/insert/delete/update operations on HBase tables, the following JARs are required:


commons-configuration-1.6.jar
commons-lang-2.5.jar
commons-logging-1.1.1.jar
hadoop-core-1.2.1.jar
hbase-0.94.10.jar
log4j-1.2.16.jar
protobuf-java-2.4.0a.jar
slf4j-api-1.4.3.jar
slf4j-log4j12-1.4.3.jar
zookeeper-3.4.5.jar






2. To run an HBase MapReduce program, the following JARs are needed (a rough sketch of such a job follows the list):


commons-configuration-1.6.jar
commons-lang-2.5.jar
commons-logging-1.1.1.jar
guava-11.0.2.jar
hadoop-core-1.2.1.jar
hbase-0.94.10.jar
jackson-core-asl-1.8.8.jar
jackson-mapper-asl-1.8.8.jar
log4j-1.2.16.jar
protobuf-java-2.4.0a.jar
slf4j-api-1.4.3.jar
slf4j-log4j12-1.4.3.jar
zookeeper-3.4.5.jar
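For orientation, here is a rough, hedged sketch of what a minimal HBase MapReduce job looks like with the 0.94-era API (a map-only row counter). The class names and the table name test_table are placeholders, not taken from the thread above:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

// Sketch: map-only job that counts the rows of an HBase table via a counter.
public class RowCountSketch {

    static class CountMapper extends TableMapper<ImmutableBytesWritable, Result> {
        @Override
        protected void map(ImmutableBytesWritable row, Result value, Context context) {
            context.getCounter("sketch", "rows").increment(1);   // one increment per row
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = new Job(conf, "row-count-sketch");
        job.setJarByClass(RowCountSketch.class);
        // Wires the table as the job input and ships the HBase/ZooKeeper jars with the job.
        TableMapReduceUtil.initTableMapperJob("test_table", new Scan(),
                CountMapper.class, ImmutableBytesWritable.class, Result.class, job);
        job.setNumReduceTasks(0);
        job.setOutputFormatClass(NullOutputFormat.class);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}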


3. About ClassNotFoundException

Missing class -> JAR that provides it:
java.lang.ClassNotFoundException: com.google.common.collect.ImmutableSet -> guava-11.0.2.jar
java.lang.ClassNotFoundException: org.codehaus.jackson.map.JsonMappingException -> jackson-mapper-asl-1.8.8.jar
java.lang.ClassNotFoundException: org.codehaus.jackson.JsonProcessingException -> jackson-core-asl-1.8.8.jar
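When one of these exceptions shows up for a class you believe you have already added, it can help to ask the JVM which JAR a class is actually loaded from. A small hedged sketch (WhichJar is a made-up name; the class literal is only an example):

// Sketch: print the location (JAR) a class was loaded from, useful when chasing
// ClassNotFoundException or mixed-version classpath problems.
public class WhichJar {
    public static void main(String[] args) throws Exception {
        Class<?> c = Class.forName("com.google.common.collect.ImmutableSet");
        System.out.println(c.getProtectionDomain().getCodeSource().getLocation());
    }
}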



hadoop迷 posted on 2014-7-24 17:02:43
OP, did this really work for you without configuring any remote connection info? Mine fails with java.net.ConnectException: Connection refused: no further information.

ascentzhen posted on 2014-8-4 18:26:50
Same problem here; it looks like the client cannot connect:
2014-08-04 18:24:53,482 INFO  [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:zookeeper.version=3.4.5--1, built on 10/05/2013 16:05 GMT
2014-08-04 18:24:53,487 INFO  [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:host.name=Ascent-PC.mshome.net
2014-08-04 18:24:53,487 INFO  [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.version=1.6.0_21
2014-08-04 18:24:53,487 INFO  [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.vendor=Sun Microsystems Inc.
2014-08-04 18:24:53,487 INFO  [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.home=D:\Java\jdk1.6.0_21\jre
2014-08-04 18:24:53,489 INFO  [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.class.path=D:\workspace\hadoopwp\hbase\bin;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\activation-1.1.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\aopalliance-1.0.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\asm-3.1.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\avro-1.7.4.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\commons-beanutils-1.7.0.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\commons-beanutils-core-1.8.0.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\commons-cli-1.2.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\commons-codec-1.7.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\commons-collections-3.2.1.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\commons-compress-1.4.1.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\commons-configuration-1.6.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\commons-daemon-1.0.13.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\commons-digester-1.8.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\commons-el-1.0.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\commons-httpclient-3.1.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\commons-io-2.4.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\commons-lang-2.6.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\commons-logging-1.1.1.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\commons-math-2.2.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\commons-net-3.1.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\core-3.1.1.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\findbugs-annotations-1.3.9-1.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\gmbal-api-only-3.0.0-b023.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\grizzly-framework-2.1.2.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\grizzly-http-2.1.2.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\grizzly-http-server-2.1.2.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\grizzly-http-servlet-2.1.2.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\grizzly-rcm-2.1.2.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\guava-12.0.1.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\guice-3.0.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\guice-servlet-3.0.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\hadoop-annotations-2.2.0.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\hadoop-auth-2.2.0.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\hadoop-client-2.2.0.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\hadoop-common-2.2.0.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\hadoop-hdfs-2.2.0.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\hadoop-hdfs-2.2.0-tests.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\hadoop-mapreduce-client-app-2.2.0.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\hadoop-mapreduce-client-common-2.2.0.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\hadoop-mapreduce-client-core-2.2.0.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\hadoop-mapreduce-client-jobclient-2.2.0.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\hadoop-mapreduce-client-jobclient-2.2.0-tests.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\hadoop-mapreduce-client-shuffle-2.2.0.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\hadoop-yarn-api-2.2.0.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\hadoop-yarn-client-2.2.0.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\hadoop-yarn-common-2.2.0.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\hadoop-yarn-server-common-2.2.0.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\hadoop-yarn-server-nodemanager-2.2.0.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\ha
mcrest-core-1.3.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\hbase-client-0.98.0-hadoop2.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\hbase-common-0.98.0-hadoop2.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\hbase-common-0.98.0-hadoop2-tests.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\hbase-examples-0.98.0-hadoop2.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\hbase-hadoop2-compat-0.98.0-hadoop2.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\hbase-hadoop-compat-0.98.0-hadoop2.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\hbase-it-0.98.0-hadoop2.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\hbase-it-0.98.0-hadoop2-tests.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\hbase-prefix-tree-0.98.0-hadoop2.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\hbase-protocol-0.98.0-hadoop2.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\hbase-server-0.98.0-hadoop2.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\hbase-server-0.98.0-hadoop2-tests.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\hbase-shell-0.98.0-hadoop2.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\hbase-testing-util-0.98.0-hadoop2.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\hbase-thrift-0.98.0-hadoop2.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\high-scale-lib-1.1.1.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\htrace-core-2.04.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\httpclient-4.1.3.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\httpcore-4.1.3.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\jackson-core-asl-1.8.8.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\jackson-jaxrs-1.8.8.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\jackson-mapper-asl-1.8.8.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\jackson-xc-1.8.8.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\jamon-runtime-2.3.1.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\jasper-compiler-5.5.23.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\jasper-runtime-5.5.23.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\javax.inject-1.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\javax.servlet-3.1.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\javax.servlet-api-3.0.1.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\jaxb-api-2.2.2.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\jaxb-impl-2.2.3-1.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\jersey-client-1.9.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\jersey-core-1.8.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\jersey-grizzly2-1.9.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\jersey-guice-1.9.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\jersey-json-1.8.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\jersey-server-1.8.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\jersey-test-framework-core-1.9.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\jersey-test-framework-grizzly2-1.9.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\jets3t-0.6.1.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\jettison-1.3.1.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\jetty-6.1.26.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\jetty-sslengine-6.1.26.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\jetty-util-6.1.26.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\jruby-complete-1.6.8.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\jsch-0.1.42.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\jsp-2.1-6.1.14.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\jsp-api-2.1.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\jsp-api-2.1-6.1.14.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\jsr305-1.3.9.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\junit-4.11.jar;F:\No2\IT\Big
Data\hbase-0.98.0-hadoop2\lib\libthrift-0.9.0.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\log4j-1.2.17.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\management-api-3.0.0-b012.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\metrics-core-2.1.2.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\netty-3.6.6.Final.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\paranamer-2.3.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\protobuf-java-2.5.0.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\servlet-api-2.5.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\servlet-api-2.5-6.1.14.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\slf4j-api-1.6.4.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\slf4j-log4j12-1.6.4.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\snappy-java-1.0.4.1.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\stax-api-1.0.1.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\xmlenc-0.52.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\xz-1.0.jar;F:\No2\IT\BigData\hbase-0.98.0-hadoop2\lib\zookeeper-3.4.5.jar;D:\workspace\hadoopwp\hbase\conf
2014-08-04 18:24:53,490 INFO  [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.library.path=D:\Java\jdk1.6.0_21\bin;.;C:\Windows\Sun\Java\bin;C:\Windows\system32;C:\Windows;D:/Java/jdk1.6.0_21/bin/../jre/bin/server;D:/Java/jdk1.6.0_21/bin/../jre/bin;D:/Java/jdk1.6.0_21/bin/../jre/lib/amd64;C:\Program Files\AuthenTec TrueSuite\;C:\Program Files (x86)\Common Files\NetSarang;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;C:\Program Files\Intel\WiFi\bin\;C:\Program Files\Common Files\Intel\WirelessCommon\;D:\Java\jdk1.6.0_21\bin;D:\Oracle10G_Client\instantclient_10_2;D:\apache-ant-1.9.3\bin;D:\hadoop-2.3.0\bin;C:\Program Files\TortoiseSVN\bin;C:\Program Files (x86)\IDM Computer Solutions\UltraEdit\;C:\Program Files (x86)\IDM Computer Solutions\UltraCompare\;D:\hadoop-eclipse;
2014-08-04 18:24:53,491 INFO  [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.io.tmpdir=C:\Users\Ascent\AppData\Local\Temp\
2014-08-04 18:24:53,491 INFO  [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:java.compiler=<NA>
2014-08-04 18:24:53,491 INFO  [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:os.name=Windows 7
2014-08-04 18:24:53,492 INFO  [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:os.arch=amd64
2014-08-04 18:24:53,492 INFO  [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:os.version=6.1
2014-08-04 18:24:53,492 INFO  [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:user.name=Ascent
2014-08-04 18:24:53,492 INFO  [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:user.home=C:\Users\Ascent
2014-08-04 18:24:53,492 INFO  [main] zookeeper.ZooKeeper (Environment.java:logEnv(100)) - Client environment:user.dir=D:\workspace\hadoopwp\hbase
2014-08-04 18:24:53,494 INFO  [main] zookeeper.ZooKeeper (ZooKeeper.java:<init>(438)) - Initiating client connection, connectString=master:2181 sessionTimeout=90000 watcher=hconnection-0x667cbde6, quorum=master:2181, baseZNode=/hbase
2014-08-04 18:24:53,583 INFO  [main] zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:<init>(120)) - Process identifier=hconnection-0x667cbde6 connecting to ZooKeeper ensemble=master:2181
2014-08-04 18:24:53,594 INFO  [main-SendThread(master:2181)] zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(966)) - Opening socket connection to server master/192.168.137.11:2181. Will not attempt to authenticate using SASL (无法定位登录配置)
2014-08-04 18:24:54,618 WARN  [main-SendThread(master:2181)] zookeeper.ClientCnxn (ClientCnxn.java:run(1089)) - Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused: no further information
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
        at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
2014-08-04 18:24:54,743 WARN  [main] zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:retryOrThrow(253)) - Possibly transient ZooKeeper, quorum=master:2181, exception=org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
2014-08-04 18:24:54,744 INFO  [main] util.RetryCounter (RetryCounter.java:sleepUntilNextRetry(155)) - Sleeping 1000ms before retry #0...
2014-08-04 18:24:55,729 INFO  [main-SendThread(master:2181)] zookeeper.ClientCnxn (ClientCnxn.java:logStartConnect(966)) - Opening socket connection to server master/192.168.137.11:2181. Will not attempt to authenticate using SASL (无法定位登录配置)
2014-08-04 18:24:56,731 WARN  [main-SendThread(master:2181)] zookeeper.ClientCnxn (ClientCnxn.java:run(1089)) - Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused: no further information
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
        at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350)
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1068)
2014-08-04 18:24:56,831 WARN  [main] zookeeper.RecoverableZooKeeper (RecoverableZooKeeper.java:retryOrThrow(253)) - Possibly transient ZooKeeper, quorum=master:2181, exception=org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
2014-08-04 18:24:56,832 INFO  [main] util.RetryCounter (RetryCounter.java:sleepUntilNextRetry(155)) - Sleeping 2000ms before retry #1...

pig2 posted on 2014-8-21 00:50:30
Quoting hadoop迷 (2014-7-24 17:02):
OP, did this really work for you without configuring any remote connection info? Mine fails with java.net.ConnectException: Connection refused: ...
No extra configuration is needed; the default configuration is loaded. On Windows you can set the values again in code to override them (see the sketch below).
Related thread: Setting up an HBase development environment and running a small HBase example (new HBase 0.98.3 API)
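For the Windows case pig2 mentions, the override can be done in code. A hedged sketch follows (RemoteConnectSketch is an illustrative name; the host name "master" and port 2181 are placeholders that must match your cluster, and the Windows client must be able to resolve that host name, e.g. via its hosts file):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HBaseAdmin;

// Sketch: override the ZooKeeper quorum in code instead of (or on top of)
// hbase-site.xml, e.g. when running the client from a Windows workstation.
public class RemoteConnectSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "master");               // ZooKeeper host(s), placeholder
        conf.set("hbase.zookeeper.property.clientPort", "2181");    // ZooKeeper client port
        HBaseAdmin admin = new HBaseAdmin(conf);
        try {
            System.out.println("Connected; tables visible: " + admin.listTables().length);
        } finally {
            admin.close();
        }
    }
}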

buildhappy posted on 2014-9-24 12:38:49
Learned a lot, thanks.

buildhappy posted on 2014-10-8 21:17:56
Quick question: does using an HBase cluster require ZooKeeper?

bioger_hit posted on 2014-10-8 22:04:50
Quoting buildhappy (2014-10-8 21:17):
Quick question: does using an HBase cluster require ZooKeeper?

HBase and ZooKeeper are essentially inseparable; even if you split them apart, HBase would no longer work.

break-spark posted on 2014-10-13 17:24:15
Nice, got it working, though it took some effort.
