Hadoop Cluster in Practice (0): Overall Architecture Design [Hadoop (HDFS), HBase, Zookeeper, Flume, Hive]
After joining a new team I was handed a very important job: operating the production Hadoop cluster. Until then I had only heard of Hadoop and had never actually deployed or used it.
I therefore worked through a great deal of material, going from not understanding the overall architecture or the various concepts to finally building the whole cluster locally and successfully submitting parallel computation jobs to it.
I learned a great deal along the way, and in the spirit of open sharing I have organized the whole learning process into a series of articles. Now, on to the main content.
There is a saying I find very sensible: "Before working on anything, first figure out What it is, then Why, and only then How. If you habitually start with How, then What, and only finally Why, you will only make yourself restless, and you will often end up misusing a technology in scenarios it does not fit."
So the first thing to clarify is: what is Hadoop? Hadoop is an open-source distributed computing framework from the Apache Software Foundation, and it is already in use at large sites such as Amazon, Facebook and Yahoo. The two core pieces of its design are MapReduce and HDFS.
The idea of MapReduce was popularized by a Google paper; explained in one sentence, it is "decompose the task, then aggregate the results".
Distributed computing is like ants eating an elephant: a fleet of cheap machines can rival any high-performance computer, and scaling up never beats scaling out.
HDFS is short for the Hadoop Distributed File System, which provides the underlying storage for distributed computation.
In real-world deployments Hadoop rarely appears alone; it has many sub-projects, the most widely used being HBase, Zookeeper and Hive. In addition, Flume often plays an important role in a Hadoop cluster. Briefly, these systems are:
HBase - a distributed key/value database
Zookeeper - a coordination service for distributed applications
Hive - a SQL parsing and execution engine
Flume - a distributed log collection system
The following diagram better illustrates how they relate to each other:
In the diagram above, the function and role of each server is described by following the logic of the entire data flow.
In short, the process is: collect the raw log files, analyze and compute over them, and finally store and present the results.
A bit of background: because the test environment is built on virtual machines (limited resources), many systems that would normally be deployed on separate machines in production are deployed together here. The environment is as follows:
OS: Ubuntu 10.10 Server 64-bit // Ubuntu was chosen because Cloudera's Hadoop packages support it very well
Servers:
hadoop-master:10.6.1.150
- namenode,jobtracker;hbase-master,hbase-thrift;
- secondarynamenode;
- hive-master,hive-metastore;
- zookeeper-server;
- flume-master
- flume-node
- datanode,tasktracker
hadoop-node-1:10.6.1.151
- datanode,taskTracker;hbase-regionServer;
- zookeeper-server;
- flume-node
hadoop-node-2:10.6.1.152
- dataNode,taskTracker;hbase-regionServer;
- zookeeper-server;
- flume-node
The above lists the roles of each of the three machines; the roles grouped on a single '-' line are ones that would normally be deployed together on a dedicated server.
Now, let's begin the hands-on walkthrough.
Hadoop Cluster in Practice (1): Setting up Hadoop (HDFS)
References:
http://www.infoq.com/cn/articles/hadoop-intro
http://www.infoq.com/cn/articles/hadoop-config-tip
http://www.infoq.com/cn/articles/hadoop-process-develop
http://www.docin.com/p-238056369.html
Installing and configuring Hadoop (HDFS)
Environment preparation
OS: Ubuntu 10.10 Server 64-bit
Servers:
hadoop-master: 10.6.1.150, 1024 MB RAM
- namenode,jobtracker;
- secondarynamenode;
- datanode,tasktracker
hadoop-node-1: 10.6.1.151, 640 MB RAM
- datanode,tasktracker
hadoop-node-2: 10.6.1.152, 640 MB RAM
- datanode,tasktracker
A brief introduction to the roles above:
namenode - manages the namespace of the entire HDFS
secondarynamenode - periodically checkpoints the namenode's metadata (a helper service, not a hot standby)
jobtracker - manages parallel computation jobs
datanode - the per-node storage service of HDFS
tasktracker - executes parallel computation tasks
A convention used in this article, to avoid confusion when configuring multiple servers:
Any command that starts with a bare $ and no hostname must be run on all servers, unless a trailing // comment says otherwise.
1. Choosing the best packages
To deploy the Hadoop cluster more conveniently and consistently, we use Cloudera's integrated packages (CDH).
Cloudera has done a lot of integration and optimization work on the Hadoop-related components, which avoids many bugs caused by version mismatches between them.
This is also what many experienced Hadoop administrators recommend. https://ccp.cloudera.com/display/DOC/Documentation//
2. Installing the Java environment
Since Hadoop is developed mainly in Java, it needs a JVM.
Add an APT source that provides a matching Java version
$ sudo apt-get install python-software-properties
$ sudo vim /etc/apt/sources.list.d/sun-java-community-team-sun-java6-maverick.list
- deb http://ppa.launchpad.net/sun-java-community-team/sun-java6/ubuntu maverick main
-
- deb-src http://ppa.launchpad.net/sun-java-community-team/sun-java6/ubuntu maverick main
Install sun-java6-jdk
$ sudo add-apt-repository ppa:sun-java-community-team/sun-java6
$ sudo apt-get update
$ sudo apt-get install sun-java6-jdk
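A quick sanity check that the JDK landed where hadoop-env.sh will expect it later (the exact JVM path can vary slightly between releases, so treat this as a hedged check):
$ java -version
$ ls -d /usr/lib/jvm/java-6-sun*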
3. Configuring Cloudera's Hadoop APT source
$ sudo vim /etc/apt/sources.list.d/cloudera.list
- deb http://archive.cloudera.com/debian maverick-cdh3u3 contrib
-
- deb-src http://archive.cloudera.com/debian maverick-cdh3u3 contrib
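After adding the repository file, refresh the package index so that step 4 can find the Cloudera packages (apt may warn about an unverified repository until Cloudera's archive signing key is imported, but the install can still proceed):
$ sudo apt-get update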
4. Installing the Hadoop packages
$ sudo apt-get install hadoop-0.20-namenode //install only on hadoop-master
$ sudo apt-get install hadoop-0.20-datanode
$ sudo apt-get install hadoop-0.20-secondarynamenode //install only on hadoop-master
$ sudo apt-get install hadoop-0.20-jobtracker //install only on hadoop-master
$ sudo apt-get install hadoop-0.20-tasktracker
5. Creating the Hadoop configuration files
$ sudo cp -r /etc/hadoop-0.20/conf.empty /etc/hadoop-0.20/conf.my_cluster
6. Activating the new configuration
$ sudo update-alternatives --install /etc/hadoop-0.20/conf hadoop-0.20-conf /etc/hadoop-0.20/conf.my_cluster 50
Here 50 is the priority. Query the currently selected configuration:
$ sudo update-alternatives --display hadoop-0.20-conf
- hadoop-0.20-conf - auto mode
- link currently points to /etc/hadoop-0.20/conf.my_cluster
- /etc/hadoop-0.20/conf.empty - priority 10
- /etc/hadoop-0.20/conf.my_cluster - priority 50
- /etc/hadoop-0.20/conf.pseudo - priority 30
- Current 'best' version is '/etc/hadoop-0.20/conf.my_cluster'.
The output shows that my_cluster has the highest priority, i.e. it is the current 'best' alternative.
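If you ever want to point the link at a specific configuration directory explicitly instead of relying on priorities, update-alternatives can also be switched manually; a hedged example:
$ sudo update-alternatives --set hadoop-0.20-conf /etc/hadoop-0.20/conf.my_cluster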
7. Adding hosts entries and setting the hostname, using hadoop-master as an example
$ sudo vim /etc/hosts
- 10.6.1.150 hadoop-master
- 10.6.1.150 hadoop-secondary
- 10.6.1.151 hadoop-node-1
- 10.6.1.152 hadoop-node-2
$ sudo vim /etc/hostname
- hadoop-master
$ sudo hostname hadoop-master
8. Setting up SSH key authentication between the servers
dongguo@hadoop-master:~$ ssh-keygen -t rsa
- Generating public/private rsa key pair.
- Enter file in which to save the key (/home/dongguo/.ssh/id_rsa):
- Created directory '/home/dongguo/.ssh'.
- Enter passphrase (empty for no passphrase):
- Enter same passphrase again:
dongguo@hadoop-master:~$ cd /home/dongguo/.ssh
dongguo@hadoop-master:~$ cp id_rsa.pub authorized_keys
dongguo@hadoop-master:~$ chmod 600 authorized_keys
Establish trust with hadoop-node-1 and hadoop-node-2 so that SSH logins need no password
dongguo@hadoop-master:~$ ssh-copy-id -i /home/dongguo/.ssh/id_rsa.pub dongguo@10.6.1.151
dongguo@hadoop-master:~$ ssh-copy-id -i /home/dongguo/.ssh/id_rsa.pub dongguo@10.6.1.152
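Before continuing, it is worth confirming that passwordless login really works from hadoop-master; each command below should print the remote hostname without asking for a password:
dongguo@hadoop-master:~$ ssh dongguo@hadoop-node-1 hostname
dongguo@hadoop-master:~$ ssh dongguo@hadoop-node-2 hostname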
9. Configuring the files under hadoop/conf
$ sudo vim /etc/hadoop/conf/hadoop-env.sh
- export JAVA_HOME="/usr/lib/jvm/java-6-sun"
$ sudo vim /etc/hadoop/conf/masters
- hadoop-master
$ sudo vim /etc/hadoop/conf/slaves
- hadoop-node-1
- hadoop-node-2
10. Creating the local directories for HDFS data
$ sudo mkdir -p /data/storage
$ sudo mkdir -p /data/hdfs
$ sudo chmod 700 /data/hdfs
$ sudo chown -R hdfs:hadoop /data/hdfs
$ sudo chmod 777 /data/storage
$ sudo chmod o+t /data/storage
11. Configuring core-site.xml - Hadoop
$ sudo vim /etc/hadoop/conf/core-site.xml
- <?xml version="1.0"?>
- <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
-
- <configuration>
- <!--- global properties -->
- <property>
- <name>hadoop.tmp.dir</name>
- <value>/data/storage</value>
- <description>A directory for other temporary directories.</description>
- </property>
- <!-- file system properties -->
- <property>
- <name>fs.default.name</name>
- <value>hdfs://hadoop-master:8020</value>
-
- </property>
- </configuration>
hadoop.tmp.dir is the base directory under which Hadoop stores its working data (dfs.name.dir and fs.checkpoint.dir below both default beneath it), so make sure the filesystem it lives on is large enough.
fs.default.name specifies the NameNode's address and port, i.e. the default filesystem URI.
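Since fs.default.name supplies the default filesystem URI, the following two commands are equivalent once the cluster is up (shown purely as an illustration):
$ sudo -u hdfs hadoop fs -ls /
$ sudo -u hdfs hadoop fs -ls hdfs://hadoop-master:8020/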
12. Configuring hdfs-site.xml - HDFS
$ sudo vim /etc/hadoop/conf/hdfs-site.xml
- <?xml version="1.0"?>
- <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
-
- <configuration>
- <property>
- <name>dfs.name.dir</name>
- <value>${hadoop.tmp.dir}/dfs/name</value>
- </property>
- <property>
- <name>dfs.data.dir</name>
- <value>/data/hdfs</value>
- </property>
- <property>
- <name>dfs.replication</name>
- <value>2</value>
- </property>
- <property>
- <name>dfs.datanode.max.xcievers</name>
- <value>4096</value>
- </property>
-
- <property>
- <name>fs.checkpoint.period</name>
- <value>300</value>
- </property>
- <property>
- <name>fs.checkpoint.dir</name>
- <value>${hadoop.tmp.dir}/dfs/namesecondary</value>
- </property>
- <property>
- <name>dfs.namenode.secondary.http-address</name>
- <value>hadoop-secondary:50090</value>
- </property>
- </configuration>
dfs.data.dir specifies where each DataNode stores its data blocks.
dfs.replication is the number of replicas kept for each block, providing redundancy; it should not exceed the number of DataNodes, otherwise blocks will stay under-replicated.
dfs.datanode.max.xcievers sets the upper limit on the number of files an HDFS DataNode can serve concurrently.
13. Configuring mapred-site.xml - MapReduce
$ sudo vim /etc/hadoop/conf/mapred-site.xml
- <?xml version="1.0"?>
- <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
-
- <configuration>
- <property>
- <name>mapred.job.tracker</name>
- <value>hdfs://hadoop-master:8021</value>
-
- </property>
- <property>
- <name>mapred.system.dir</name>
- <value>/mapred/system</value>
- </property>
- <property>
- <name>mapreduce.jobtracker.staging.root.dir</name>
- <value>/user</value>
- </property>
- </configuration>
mapred.job.tracker specifies the JobTracker's address and port.
mapred.system.dir is the directory in HDFS where the MapReduce framework keeps its system files.
Note: the configuration above has to be done on all servers. Since the content is identical, you can finish it on one server and then copy the whole /etc/hadoop-0.20 directory over the same path on the other servers.
14. Formatting the HDFS distributed filesystem
dongguo@hadoop-master:~$ sudo -u hdfs hadoop namenode -format
- 12/10/07 21:52:48 INFO namenode.NameNode: STARTUP_MSG:
- /************************************************************
- STARTUP_MSG: Starting NameNode
- STARTUP_MSG: host = hadoop-master/10.6.1.150
- STARTUP_MSG: args = [-format]
- STARTUP_MSG: version = 0.20.2-cdh3u3
-
15. Starting the Hadoop daemons
On hadoop-master, start the datanode, namenode, secondarynamenode and jobtracker
dongguo@hadoop-master:~$ sudo /etc/init.d/hadoop-0.20-datanode start
dongguo@hadoop-master:~$ sudo /etc/init.d/hadoop-0.20-namenode start
dongguo@hadoop-master:~$ sudo /etc/init.d/hadoop-0.20-jobtracker start
dongguo@hadoop-master:~$ sudo /etc/init.d/hadoop-0.20-secondarynamenode start
On the hadoop-node machines, start the datanode and tasktracker; hadoop-node-1 is shown as an example
dongguo@hadoop-node-1:~$ sudo /etc/init.d/hadoop-0.20-datanode start
dongguo@hadoop-node-1:~$ sudo /etc/init.d/hadoop-0.20-tasktracker start
Check that the processes are up, using hadoop-master and hadoop-node-1 as examples
dongguo@hadoop-master:~$ sudo jps
- 12683 SecondaryNameNode
- 12716 Jps
- 9250 JobTracker
- 11736 DataNode
- 5816 NameNode
dongguo@hadoop-node-1:~$ sudo jps
- 4591 DataNode
- 7093 Jps
- 6853 TaskTracker
16. Creating the HDFS directory for mapred.system.dir
dongguo@hadoop-master:~$ sudo -u hdfs hadoop fs -mkdir /mapred/system
dongguo@hadoop-master:~$ sudo -u hdfs hadoop fs -chown mapred:hadoop /mapred/system
17. Testing basic operations on HDFS
dongguo@hadoop-master:~$ echo "Hello Hadoop." > hello.txt
dongguo@hadoop-master:~$ sudo -u hdfs hadoop fs -mkdir /dongguo
dongguo@hadoop-master:~$ sudo -u hdfs hadoop fs -copyFromLocal hello.txt /dongguo
dongguo@hadoop-master:~$ sudo -u hdfs hadoop fs -ls /dongguo
- Found 1 items
- -rw-r--r-- 2 hdfs supergroup 33 2012-10-07 22:02 /dongguo/hello.txt
- hdfs@hadoop-master:~$ hadoop fs -cat /dongguo/hello.txt
- Hello Hadoop.
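To see how dfs.replication works out for a real file, hadoop fsck can report the block and replica placement; a hedged example against the file created above (with the settings from step 12, each block should report 2 replicas):
dongguo@hadoop-master:~$ sudo -u hdfs hadoop fsck /dongguo/hello.txt -files -blocks -locations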
18. Checking the status of the whole cluster
Via the web UIs: http://10.6.1.150:50070 (HDFS) and http://10.6.1.150:50030 (MapReduce)
Via the command line:
dongguo@hadoop-master:~$ sudo -u hdfs hadoop dfsadmin -report
- 12/10/07 23:22:01 INFO security.UserGroupInformation: JAAS Configuration already setup for Hadoop, not re-installing.
- Configured Capacity: 59858374656 (55.75 GB)
- Present Capacity: 53180121088 (49.53 GB)
- DFS Remaining: 53180010496 (49.53 GB)
- DFS Used: 110592 (108 KB)
- DFS Used%: 0%
- Under replicated blocks: 0
- Blocks with corrupt replicas: 0
- Missing blocks: 0
-
- -------------------------------------------------
- Datanodes available: 3 (3 total, 0 dead)
-
- Name: 10.6.1.152:50010
- Decommission Status : Normal
- Configured Capacity: 19952791552 (18.58 GB)
- DFS Used: 28672 (28 KB)
- Non DFS Used: 2221387776 (2.07 GB)
- DFS Remaining: 17731375104(16.51 GB)
- DFS Used%: 0%
- DFS Remaining%: 88.87%
- Last contact: Sun Oct 07 23:22:01 CST 2012
-
-
- Name: 10.6.1.151:50010
- Decommission Status : Normal
- Configured Capacity: 19952791552 (18.58 GB)
- DFS Used: 40960 (40 KB)
- Non DFS Used: 2221387776 (2.07 GB)
- DFS Remaining: 17731362816(16.51 GB)
- DFS Used%: 0%
- DFS Remaining%: 88.87%
- Last contact: Sun Oct 07 23:22:01 CST 2012
-
-
- Name: 10.6.1.150:50010
- Decommission Status : Normal
- Configured Capacity: 19952791552 (18.58 GB)
- DFS Used: 40960 (40 KB)
- Non DFS Used: 2235478016 (2.08 GB)
- DFS Remaining: 17717272576(16.5 GB)
- DFS Used%: 0%
- DFS Remaining%: 88.8%
- Last contact: Sun Oct 07 23:22:01 CST 2012
19. At this point, the Hadoop (HDFS) setup is complete.
20. Next, we can move on to the following part:
Hadoop Cluster in Practice (2): Setting up HBase & Zookeeper
References:
http://blog.csdn.net/hguisu/article/details/7244413
http://www.yankay.com/wp-content/hbase/book.html
http://blog.nosqlfan.com/html/3694.html
Installing and configuring HBase & Zookeeper
Environment preparation
OS: Ubuntu 10.10 Server 64-bit
Servers:
hadoop-master: 10.6.1.150, 1024 MB RAM
- namenode,jobtracker;hbase-master,hbase-thrift;
- secondarynamenode;
- zookeeper-server;
- datanode,tasktracker
hadoop-node-1: 10.6.1.151, 640 MB RAM
- datanode,taskTracker;hbase-regionServer;
- zookeeper-server;
hadoop-node-2: 10.6.1.152, 640 MB RAM
- datanode,taskTracker;hbase-regionServer;
- zookeeper-server;
A brief introduction to the roles above:
namenode - manages the namespace of the entire HDFS
secondarynamenode - periodically checkpoints the namenode's metadata (a helper service, not a hot standby)
jobtracker - manages parallel computation jobs
datanode - the per-node storage service of HDFS
tasktracker - executes parallel computation tasks
hbase-master - the HBase management service
hbase-regionserver - serves client inserts, deletes, queries and so on
zookeeper-server - the Zookeeper coordination and configuration management service
The reason HBase and Zookeeper are covered in the same article is:
a distributed HBase deployment depends on a Zookeeper ensemble, and all nodes and clients must be able to reach Zookeeper.
A convention used in this article, to avoid confusion when configuring multiple servers:
Any command that starts with a bare $ and no hostname must be run on all servers, unless a trailing // comment says otherwise.
1. Prerequisites
You have already completed Hadoop Cluster in Practice (1): Setting up Hadoop (HDFS).
Configure NTP time synchronization
dongguo@hadoop-master:~$ sudo /etc/init.d/ntp start
dongguo@hadoop-node-1:~$ sudo ntpdate hadoop-master
dongguo@hadoop-node-2:~$ sudo ntpdate hadoop-master
Configure the ulimit and nproc parameters
$ sudo vim /etc/security/limits.conf
- hdfs - nofile 32768
- hbase - nofile 32768
-
- hdfs soft nproc 32000
- hdfs hard nproc 32000
-
- hbase soft nproc 32000
- hbase hard nproc 32000
$ sudo vim /etc/pam.d/common-session
- session required pam_limits.so
Log out and log back in over SSH for the settings to take effect
2. Installing hbase-master on hadoop-master
dongguo@hadoop-master:~$ sudo apt-get install hadoop-hbase-master
dongguo@hadoop-master:~$ sudo apt-get install hadoop-hbase-thrift
3. Installing hbase-regionserver on the hadoop-node machines
dongguo@hadoop-node-1:~$ sudo apt-get install hadoop-hbase-regionserver
dongguo@hadoop-node-2:~$ sudo apt-get install hadoop-hbase-regionserver
4. Creating the HBase directory in HDFS
dongguo@hadoop-master:~$ sudo -u hdfs hadoop fs -mkdir /hbase
dongguo@hadoop-master:~$ sudo -u hdfs hadoop fs -chown hbase /hbase
5. Configuring hbase-env.sh
$ sudo vim /etc/hbase/conf/hbase-env.sh
- export JAVA_HOME="/usr/lib/jvm/java-6-sun"
- export HBASE_MANAGES_ZK=true
6. Configuring hbase-site.xml
$ sudo vim /etc/hbase/conf/hbase-site.xml
- <?xml version="1.0"?>
- <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
-
- <configuration>
- <property>
- <name>hbase.rootdir</name>
- <value>hdfs://hadoop-master:8020/hbase</value>
-
- </property>
- <property>
- <name>hbase.cluster.distributed</name>
- <value>true</value>
- </property>
-
- <property>
- <name>hbase.zookeeper.quorum</name>
- <value>hadoop-master,hadoop-node-1,hadoop-node-2</value>
- </property>
- </configuration>
hbase.cluster.distributed sets HBase's run mode: false means standalone, true means fully distributed.
hbase.rootdir is the directory shared by the RegionServers, where HBase persists its data. hbase.zookeeper.quorum is the comma-separated list of hosts in the Zookeeper ensemble. Running a single Zookeeper works, but in production you should deploy 3, 5 or 7 nodes.
The more you deploy, the higher the reliability; use an odd number, since an even number adds no extra fault tolerance.
Give each Zookeeper roughly 1 GB of memory and, if possible, a dedicated disk to keep it performing well.
If your cluster is heavily loaded, do not run Zookeeper on the same machines as the RegionServers, DataNodes and TaskTrackers.
7. Configuring regionservers
$ sudo vim /etc/hbase/conf/regionservers
- hadoop-node-1
- hadoop-node-2
8. Installing Zookeeper
$ sudo apt-get install hadoop-zookeeper-server
$ sudo vim /etc/zookeeper/zoo.cfg
- tickTime=2000
- initLimit=10
- syncLimit=5
- dataDir=/data/zookeeper
- clientPort=2181
- maxClientCnxns=0
- server.1=hadoop-master:2888:3888
- server.2=hadoop-node-1:2888:3888
- server.3=hadoop-node-2:2888:3888
$ sudo mkdir /data/zookeeper
$ sudo chown zookeeper:zookeeper /data/zookeeper
9. Creating the myid files
dongguo@hadoop-master:~$ sudo -u zookeeper vim /data/zookeeper/myid
dongguo@hadoop-node-1:~$ sudo -u zookeeper vim /data/zookeeper/myid
dongguo@hadoop-node-2:~$ sudo -u zookeeper vim /data/zookeeper/myid
The value in each myid file must match that host's server.N index in zoo.cfg: 1 on hadoop-master, 2 on hadoop-node-1, 3 on hadoop-node-2.
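Rather than editing each myid by hand, the same values can be written non-interactively, for example:
dongguo@hadoop-master:~$ echo 1 | sudo -u zookeeper tee /data/zookeeper/myid
dongguo@hadoop-node-1:~$ echo 2 | sudo -u zookeeper tee /data/zookeeper/myid
dongguo@hadoop-node-2:~$ echo 3 | sudo -u zookeeper tee /data/zookeeper/myid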
10. Starting the HBase and Zookeeper services
On hadoop-master
dongguo@hadoop-master:~$ sudo /etc/init.d/hadoop-zookeeper-server start
dongguo@hadoop-master:~$ sudo /etc/init.d/hadoop-hbase-master start
dongguo@hadoop-master:~$ sudo /etc/init.d/hadoop-hbase-thrift start
On the hadoop-node machines
dongguo@hadoop-node-1:~$ sudo /etc/init.d/hadoop-zookeeper-server start
dongguo@hadoop-node-1:~$ sudo /etc/init.d/hadoop-hbase-regionserver start
dongguo@hadoop-node-2:~$ sudo /etc/init.d/hadoop-zookeeper-server start
dongguo@hadoop-node-2:~$ sudo /etc/init.d/hadoop-hbase-regionserver start
11. Checking the status of the services
List the Java processes on each machine, as in the HDFS part:
dongguo@hadoop-master:~$ sudo jps
dongguo@hadoop-node-1:~$ sudo jps
dongguo@hadoop-node-2:~$ sudo jps
You should see HMaster on hadoop-master, HRegionServer on each node, and a QuorumPeerMain process for Zookeeper on all three machines.
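Besides jps, the Zookeeper ensemble can be probed with its built-in four-letter commands; a quick hedged check (requires netcat):
$ echo ruok | nc localhost 2181
$ echo stat | nc localhost 2181
A healthy server answers imok to ruok, and stat shows whether it is currently the leader or a follower.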
12. Practicing HBase operations in the HBase shell
dongguo@hadoop-master:~$ hbase shell
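A few basic commands to try inside the shell; the table name test_table is just an example and can be dropped afterwards:
- hbase(main):001:0> create 'test_table', 'cf'
- hbase(main):002:0> put 'test_table', 'row1', 'cf:greeting', 'hello'
- hbase(main):003:0> scan 'test_table'
- hbase(main):004:0> disable 'test_table'
- hbase(main):005:0> drop 'test_table'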
13. Working with Zookeeper via zookeeper-client
dongguo@hadoop-master:~$ zookeeper-client
-
- [zk: localhost:2181(CONNECTED) 0] ls /
- [hbase, zookeeper]
- [zk: localhost:2181(CONNECTED) 1] ls /hbase
- [splitlog, unassigned, rs, root-region-server, table, master, shutdown]
- [zk: localhost:2181(CONNECTED) 2] get /hbase/rs
-
- cZxid = 0x100000004
- ctime = Mon Oct 08 19:31:27 CST 2012
- mZxid = 0x100000004
- mtime = Mon Oct 08 19:31:27 CST 2012
- pZxid = 0x10000001c
- cversion = 2
- dataVersion = 0
- aclVersion = 0
- ephemeralOwner = 0x0
- dataLength = 0
- numChildren = 2
- [zk: localhost:2181(CONNECTED) 4] get /hbase/master
- hadoop-master:60000
- cZxid = 0x100000007
- ctime = Mon Oct 08 19:31:27 CST 2012
- mZxid = 0x100000007
- mtime = Mon Oct 08 19:31:27 CST 2012
- pZxid = 0x100000007
- cversion = 0
- dataVersion = 0
- aclVersion = 0
- ephemeralOwner = 0x23a4021bcb60000
- dataLength = 19
- numChildren = 0
At this point, the HBase & Zookeeper setup is complete.
Next, we can move on to the following part:
Hadoop Cluster in Practice (3): Setting up Flume
References:
http://www.cnblogs.com/oubo/archive/2012/05/25/2517751.html
http://archive.cloudera.com/cdh/3/flume/UserGuide/
http://log.medcl.net/item/2012/03/flume-build-process/
http://blog.csdn.net/rzhzhz/article/details/7449767
https://blogs.apache.org/flume/entry/flume_ng_architecture
Installing and configuring Flume
Environment preparation
OS: Ubuntu 10.10 Server 64-bit
Servers:
hadoop-master: 10.6.1.150, 1024 MB RAM
- namenode,jobtracker;hbase-master,hbase-thrift;
- secondarynamenode;
- zookeeper-server;
- datanode,taskTracker
- flume-master
- flume-node
hadoop-node-1: 10.6.1.151, 640 MB RAM
- datanode,taskTracker;hbase-regionServer;
- zookeeper-server;
- flume-node
hadoop-node-2: 10.6.1.152, 640 MB RAM
- datanode,taskTracker;hbase-regionServer;
- zookeeper-server;
- flume-node
A brief introduction to the roles above:
namenode - manages the namespace of the entire HDFS
secondarynamenode - periodically checkpoints the namenode's metadata (a helper service, not a hot standby)
jobtracker - manages parallel computation jobs
datanode - the per-node storage service of HDFS
tasktracker - executes parallel computation tasks
hbase-master - the HBase management service
hbase-regionserver - serves client inserts, deletes, queries and so on
zookeeper-server - the Zookeeper coordination and configuration management service
flume-master - the Flume management service
flume-node - the Flume agent and collector service; the concrete role is decided by the flume-master
A convention used in this article, to avoid confusion when configuring multiple servers:
Any command that starts with a bare $ and no hostname must be run on all servers, unless a trailing // comment says otherwise.
1. Prerequisites
You have already completed Hadoop Cluster in Practice (2): Setting up HBase & Zookeeper.
Flume has three roles:
Master: responsible for configuration and communication management; it is the controller of the Flume cluster.
Collector: aggregates data, usually producing one larger combined stream, and loads it into storage.
Agent: collects data; agents are where data streams originate in Flume, and each agent forwards its stream to a collector.
Both collector and agent configurations must specify a Source (think of it as the data inlet) and a Sink (the data outlet); a small combined example follows the lists below.
Common sources:
text("filename"): uses the file filename as the data source and sends it line by line
tail("filename"): watches filename for newly appended data and sends it out line by line
fsyslogTcp(5140): listens on TCP port 5140 and forwards the data it receives
Common sinks:
console[("format")]: prints the data directly to the console
text("txtfile"): writes the data to the file txtfile
dfs("dfsfile"): writes the data to the file dfsfile on HDFS
syslogTcp("host",port): sends the data to the host node over TCP
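Once the Flume master is up (steps 2-4 below), a single throwaway node configuration can be pushed from the Flume shell to see how a source and a sink pair up; a minimal hedged sketch, where test-node and the syslog path are only placeholders (the real agent/collector layout used in this article is configured in section 9):
dongguo@hadoop-master:~$ flume shell
- [flume (disconnected)] connect hadoop-master
- [flume hadoop-master:35873:45678] exec config test-node 'tail("/var/log/syslog")' 'console'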
2. Installing the Flume agent and Flume collector
$ sudo apt-get install flume-node
3. Installing the Flume master
dongguo@hadoop-master:~$ sudo apt-get install flume-master
4. Starting the Flume services
$ sudo /etc/init.d/flume-node start
dongguo@hadoop-master:~$ sudo /etc/init.d/flume-master start
5. Configuring flume-conf.xml
$ sudo vim /etc/flume/conf/flume-conf.xml
Modify the path parameters in the following sections
- <property>
- <name>flume.collector.output.format</name>
- <value>raw</value>
- <description>The output format for the data written by a Flume
- collector node. There are several formats available:
- syslog - outputs events in a syslog-like format
- log4j - outputs events in a pattern similar to Hadoop's log4j pattern
- raw - Event body only. This is most similar to copying a file but
- does not preserve any uniqifying metadata like host/timestamp/nanos.
- avro - Avro Native file format. Default currently is uncompressed.
- avrojson - this outputs data as json encoded by avro
- avrodata - this outputs data as a avro binary encoded data
- debug - used only for debugging
- </description>
- </property>
-
- <property>
- <name>flume.master.servers</name>
- <value>hadoop-master</value>
- <description>A comma-separated list of hostnames, one for each
- machine in the Flume Master.
- </description>
- </property>
-
- <property>
- <name>flume.collector.dfs.dir</name>
- <value>file:///data/flume/tmp/flume-${user.name}/collected</value>
- <description>This is a dfs directory that is the the final resting
- place for logs to be stored in. This defaults to a local dir in
- /tmp but can be hadoop URI path that such as hdfs://namenode/path/
- </description>
- </property>
-
- <property>
- <name>flume.master.zk.logdir</name>
- <value>/data/flume/tmp/flume-${user.name}-zk</value>
- <description>The base directory in which the ZBCS stores data.</description>
- </property>
-
- <property>
- <name>flume.agent.logdir</name>
- <value>/data/flume/tmp/flume-${user.name}/agent</value>
- <description> This is the directory that write-ahead logging data
- or disk-failover data is collected from applications gets
- written to. The agent watches this directory.
- </description>
- </property>
Create the local and HDFS directories Flume needs
$ sudo mkdir -p /data/flume/tmp
$ sudo chown -R flume:flume /data/flume/
dongguo@hadoop-master:~$ sudo -u hdfs hadoop fs -mkdir -p /data/flume/tmp
dongguo@hadoop-master:~$ sudo -u hdfs hadoop fs -chown -R flume:flume /data/flume/
The configuration above mainly relocates Flume's tmp directories and stores the collected log content in raw format.
6. Configuring flume-site.xml
$ sudo vim /etc/flume/conf/flume-site.xml
- <?xml version="1.0"?>
- <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
-
- <configuration>
- <property>
- <name>flume.master.servers</name>
- <value>hadoop-master</value>
- </property>
- <property>
- <name>flume.master.store</name>
- <value>zookeeper</value>
- </property>
- <property>
- <name>flume.master.zk.use.external</name>
- <value>true</value>
- </property>
- <property>
- <name>flume.master.zk.servers</name>
- <value>hadoop-master:2181,hadoop-node-1:2181,hadoop-node-2:2181</value>
- </property>
- </configuration>
The configuration above makes Flume keep its configuration in Zookeeper and specifies the Flume master.
7. Using the Flume shell
dongguo@hadoop-master:~$ flume shell
- [flume (disconnected)] connect localhost
- Using default admin port: 35873
- Using default report port: 45678
- Connecting to Flume master localhost:35873:45678...
- 2012-10-08 21:52:50,876 [main] INFO util.AdminRPCThrift: Connected to master at localhost:35873
- [flume localhost:35873:45678] getnodestatus
- Master knows about 3 nodes
- hadoop-node-1 --> IDLE
- hadoop-node-2 --> IDLE
- hadoop-master --> IDLE
8. Checking the configuration in Zookeeper
dongguo@hadoop-master:~$ zookeeper-client
- [zk: localhost:2181(CONNECTED) 0] ls /
- [flume-cfgs, counters-config_version, flume-chokemap, hbase, zookeeper, flume-nodes]
Check via the web UIs: http://10.6.1.150:35871 (master) and http://10.6.1.150:35862, http://10.6.1.151:35862, http://10.6.1.152:35862 (nodes)
9. Configuring the sources, sinks and collectors
The plan: configure hadoop-node-1 and hadoop-node-2 as agents responsible for collecting logs,
with hadoop-master acting as the collector that receives the logs and writes them into HDFS.
Below we simulate collecting 2 logs on hadoop-node-1 and 1 log on hadoop-node-2, with hadoop-master listening on 3 ports to receive the logs and write them into HDFS.
Create the HDFS directory for storing the logs
dongguo@hadoop-master:~$ sudo -u hdfs hadoop fs -mkdir -p /flume/rawdata
dongguo@hadoop-master:~$ sudo -u hdfs hadoop fs -chown -R hdfs /flume
dongguo@hadoop-master:~$ sudo -u hdfs hadoop fs -chmod 777 /flume/rawdata
The log content below is test data; in practice, write whatever fits your own business. The field separator can be a comma "," or a TAB; TAB is used here.
Create the log files under /var/log/rawdata/ on hadoop-node-1 and hadoop-node-2
$ sudo mkdir /var/log/rawdata/
On hadoop-node-1
dongguo@hadoop-node-1:~$ sudo vim /var/log/rawdata/rawdata-1.log
- hadoop-node-1-rawdata-1-value1
- hadoop-node-1-rawdata-1-value2
- hadoop-node-1-rawdata-1-value3
- hadoop-node-1-rawdata-1-value4
- hadoop-node-1-rawdata-1-value5
- hadoop-node-1-rawdata-1-value6
dongguo@hadoop-node-1:~$ sudo vim /var/log/rawdata/rawdata-2.log
- hadoop-node-1-rawdata-2-value1
- hadoop-node-1-rawdata-2-value2
On hadoop-node-2
dongguo@hadoop-node-2:~$ sudo vim /var/log/rawdata/rawdata-1.log
- hadoop-node-2-rawdata-1-value1
- hadoop-node-2-rawdata-1-value2
Configure the Flume agents and collectors
dongguo@hadoop-master:~$ flume shell
- [flume (disconnected)] connect hadoop-master
- [flume hadoop-master:35873:45678] exec map hadoop-node-1 agent-1-a
- [flume hadoop-master:35873:45678] exec map hadoop-node-1 agent-1-b
- [flume hadoop-master:35873:45678] exec map hadoop-node-2 agent-2-a
- [flume hadoop-master:35873:45678] exec config agent-1-a 'tail("/var/log/rawdata/rawdata-1.log",true)' 'agentE2EChain("hadoop-master:35852")'
- [flume hadoop-master:35873:45678] exec config agent-1-b 'tail("/var/log/rawdata/rawdata-2.log",true)' 'agentE2EChain("hadoop-master:35853")'
- [flume hadoop-master:35873:45678] exec config agent-2-a 'tail("/var/log/rawdata/rawdata-1.log",true)' 'agentE2EChain("hadoop-master:35854")'
- [flume hadoop-master:35873:45678] exec map hadoop-master clct-1-a
- [flume hadoop-master:35873:45678] exec map hadoop-master clct-1-b
- [flume hadoop-master:35873:45678] exec map hadoop-master clct-2-a
- [flume hadoop-master:35873:45678] exec config clct-1-a 'collectorSource(35852)' 'collectorSink("hdfs://hadoop-master/flume/rawdata/%Y/%m/%d/%H%M/", "rawdata-1-a")'
- [flume hadoop-master:35873:45678] exec config clct-1-b 'collectorSource(35853)' 'collectorSink("hdfs://hadoop-master/flume/rawdata/%Y/%m/%d/%H%M/", "rawdata-1-b")'
- [flume hadoop-master:35873:45678] exec config clct-2-a 'collectorSource(35854)' 'collectorSink("hdfs://hadoop-master/flume/rawdata/%Y/%m/%d/%H%M/", "rawdata-2-a")'
- [flume hadoop-master:35873:45678]
In the web UI you can see that the configuration above has taken effect: http://10.6.1.150:35871
Test appending a new record to one of the logs to trigger incremental collection.
On hadoop-node-1
dongguo@hadoop-node-1:~$ sudo vim /var/log/rawdata/rawdata-1.log
- hadoop-node-1-rawdata-1-value7
Check whether the log has been collected into HDFS
dongguo@hadoop-master:~$ sudo -u hdfs hadoop fs -lsr /flume/rawdata
- drwxrwxrwx - flume supergroup 0 2012-10-10 02:52 /flume/rawdata/2012
- drwxrwxrwx - flume supergroup 0 2012-10-10 02:52 /flume/rawdata/2012/10
- drwxrwxrwx - flume supergroup 0 2012-10-10 02:52 /flume/rawdata/2012/10/10
- drwxrwxrwx - flume supergroup 0 2012-10-10 02:52 /flume/rawdata/2012/10/10/0252
- -rw-r--r-- 2 flume supergroup 245 2012-10-10 02:52 /flume/rawdata/2012/10/10/0252/rawdata-1-a20121010-025216795+0800.26693113439423.00000029
View the content of the collected file
dongguo@hadoop-master:~$ sudo -u hdfs hadoop fs -cat /flume/rawdata/2012/10/10/0252/rawdata-1-a20121010-025216795+0800.26693113439423.00000029
- hadoop-node-1-rawdata-1-value1
- hadoop-node-1-rawdata-1-value2
- hadoop-node-1-rawdata-1-value3
- hadoop-node-1-rawdata-1-value4
- hadoop-node-1-rawdata-1-value5
- hadoop-node-1-rawdata-1-value6
- hadoop-node-1-rawdata-1-value7
At this point we can see that the appended log content was collected successfully.
10. At this point, the Flume setup is complete.
11. Next, we can move on to the following part:
Hadoop Cluster in Practice (4): Setting up Hive
Installing and configuring Hive
OS: Ubuntu 10.10 Server 64-bit
Servers:
hadoop-master:10.6.1.150
- namenode,jobtracker;hbase-master,hbase-thrift;
- secondarynamenode;
- hive-master,hive-metastore;
- zookeeper-server;
- flume-master
- flume-node
- datanode,tasktracker
hadoop-node-1:10.6.1.151
- datanode,taskTracker;hbase-regionServer;
- zookeeper-server;
- flume-node
hadoop-node-2:10.6.1.152
- dataNode,taskTracker;hbase-regionServer;
- zookeeper-server;
- flume-node
A brief introduction to the roles above:
namenode - manages the namespace of the entire HDFS
secondarynamenode - periodically checkpoints the namenode's metadata (a helper service, not a hot standby)
jobtracker - manages parallel computation jobs
datanode - the per-node storage service of HDFS
tasktracker - executes parallel computation tasks
hbase-master - the HBase management service
hbase-regionserver - serves client inserts, deletes, queries and so on
zookeeper-server - the Zookeeper coordination and configuration management service
flume-master - the Flume management service
flume-node - the Flume agent and collector service; the concrete role is decided by the flume-master
hive-master - the Hive management service
hive-metastore - Hive's metastore service, which holds the table metadata used for type checking and query analysis
A convention used in this article, to avoid confusion when configuring multiple servers:
Any command that starts with a bare $ and no hostname must be run on all servers, unless a trailing // comment says otherwise.
1. Prerequisites
You have already completed Hadoop Cluster in Practice (3): Setting up Flume.
Hive is essentially a SQL engine: it translates SQL statements into MapReduce jobs and runs them on Hadoop, making development much faster.
Hive was contributed to the Hadoop open-source community by Facebook.
2. Installing Hive
dongguo@hadoop-master:~$ sudo apt-get install hadoop-hive hadoop-hive-metastore hadoop-hive-server
dongguo@hadoop-master:~$ sudo mkdir -p /var/lock/subsys
dongguo@hadoop-master:~$ sudo chown hive:hive /var/lock/subsys
3. Creating the local directories Hive needs
dongguo@hadoop-master:~$ sudo mkdir -p /var/run/hive/
dongguo@hadoop-master:~$ sudo chown hive:hive /var/run/hive/
4. Starting the Hive services
dongguo@hadoop-master:~$ sudo /etc/init.d/hadoop-hive-metastore start
dongguo@hadoop-master:~$ sudo /etc/init.d/hadoop-hive-server start
5. Installing the MySQL JDBC Connector
Hive uses a MySQL database to store its metadata, and since Hive itself is written in Java, the MySQL JDBC driver has to be installed.
dongguo@hadoop-master:~$ wget 'http://cdn.mysql.com/Downloads/Connector-J/mysql-connector-java-5.1.22.tar.gz'
dongguo@hadoop-master:~$ tar xzf mysql-connector-java-5.1.22.tar.gz
dongguo@hadoop-master:~$ sudo cp mysql-connector-java-5.1.22/mysql-connector-java-5.1.22-bin.jar /usr/lib/hive/lib/
6. Installing MySQL
dongguo@hadoop-master:~$ sudo apt-get install mysql-server
In the prompt that appears, set the MySQL root password to hadoophive.
7. Creating the database and granting privileges
dongguo@hadoop-master:~$ mysql -uroot -phadoophive
- mysql> CREATE DATABASE metastore;
- mysql> USE metastore;
- mysql> SOURCE /usr/lib/hive/scripts/metastore/upgrade/mysql/hive-schema-0.7.0.mysql.sql;
-
- mysql> CREATE USER 'hiveuser'@'%' IDENTIFIED BY 'password';
- mysql> GRANT SELECT,INSERT,UPDATE,DELETE ON metastore.* TO 'hiveuser'@'%';
- mysql> REVOKE ALTER,CREATE ON metastore.* FROM 'hiveuser'@'%';
- mysql> CREATE USER 'hiveuser'@'localhost' IDENTIFIED BY 'password';
- mysql> GRANT SELECT,INSERT,UPDATE,DELETE ON metastore.* TO 'hiveuser'@'localhost';
- mysql> REVOKE ALTER,CREATE ON metastore.* FROM 'hiveuser'@'localhost';
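A quick hedged check that the schema loaded and the grants work: connect as hiveuser and list the metastore tables (the exact table list depends on the schema version):
dongguo@hadoop-master:~$ mysql -uhiveuser -ppassword -e 'SHOW TABLES IN metastore;'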
8. Configuring hive-site.xml
dongguo@hadoop-master:~$ sudo vim /etc/hive/conf/hive-site.xml
- <?xml version="1.0"?>
- <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
-
- <configuration>
-
- <!-- Hive Configuration can either be stored in this file or in the hadoop configuration files -->
- <!-- that are implied by Hadoop setup variables. -->
- <!-- Aside from Hadoop setup variables - this file is provided as a convenience so that Hive -->
- <!-- users do not have to edit hadoop configuration files (that may be managed as a centralized -->
- <!-- resource). -->
-
- <!-- Hive Execution Parameters -->
-
- <property>
- <name>javax.jdo.option.ConnectionURL</name>
- <value>jdbc:mysql://localhost:3306/metastore</value>
- </property>
-
- <property>
- <name>javax.jdo.option.ConnectionDriverName</name>
- <value>com.mysql.jdbc.Driver</value>
- </property>
-
- <property>
- <name>javax.jdo.option.ConnectionUserName</name>
- <value>hiveuser</value>
- </property>
-
- <property>
- <name>javax.jdo.option.ConnectionPassword</name>
- <value>password</value>
- </property>
-
- <property>
- <name>datanucleus.autoCreateSchema</name>
- <value>false</value>
- </property>
-
- <property>
- <name>datanucleus.fixedDatastore</name>
- <value>true</value>
- </property>
-
- <property>
- <name>hive.stats.autogather</name>
- <value>false</value>
- </property>
-
- <property>
- <name>hive.aux.jars.path</name>
- <value>file:///usr/lib/hive/lib/hive-hbase-handler-0.7.1-cdh3u3.jar,file:///usr/lib/hbase/hbase-0.90.4-cdh3u3.jar,file:///usr/lib/zookeeper/zookeeper-3.3.4-cdh3u3.jar</value>
- </property>
-
- <property>
- <name>hbase.zookeeper.quorum</name>
- <value>hadoop-master,hadoop-node-1,hadoop-node-2</value>
- </property>
-
- </configuration>
Make sure the HBase and Zookeeper jars can be picked up by Hive:
dongguo@hadoop-master:~$ sudo cp /usr/lib/hbase/hbase-0.90.4-cdh3u3.jar /usr/lib/hive/lib/
dongguo@hadoop-master:~$ sudo cp /usr/lib/zookeeper/zookeeper-3.3.4-cdh3u3.jar /usr/lib/hive/lib/
9. Testing running a job through Hive against data in HBase
Hive lowers the learning curve: you can run computations on the Hadoop cluster without writing Java MapReduce code, and it integrates well with HBase. So below we use Hive to drive MapReduce parallel computation on the Hadoop cluster and get a feel for what Hadoop can do.
9.1 Starting Hive
dongguo@hadoop-master:~$ sudo /etc/init.d/hadoop-hive-server restart
dongguo@hadoop-master:~$ sudo /etc/init.d/hadoop-hive-metastore restart
9.2 Creating the HDFS directories Hive needs
dongguo@hadoop-master:~$ sudo -u hdfs hadoop fs -mkdir /user/hive
dongguo@hadoop-master:~$ sudo -u hdfs hadoop fs -chown hive /user/hive
dongguo@hadoop-master:~$ sudo -u hdfs hadoop fs -mkdir /tmp
dongguo@hadoop-master:~$ sudo -u hdfs hadoop fs -chmod 777 /tmp
dongguo@hadoop-master:~$ sudo -u hdfs hadoop fs -chmod o+t /tmp
dongguo@hadoop-master:~$ sudo -u hdfs hadoop fs -mkdir /data
dongguo@hadoop-master:~$ sudo -u hdfs hadoop fs -chown hdfs /data
dongguo@hadoop-master:~$ sudo -u hdfs hadoop fs -chmod 777 /data
dongguo@hadoop-master:~$ sudo -u hdfs hadoop fs -chmod o+t /data
9.3 Creating test data
dongguo@hadoop-master:~$ sudo -u hive vim /tmp/kv1.txt
- 1	www.baidu.com
- 2	www.google.com
- 3	www.sina.com.cn
- 4	www.163.com
- 5	heylinux.com
Note: each line is a numeric key and a value separated by a TAB, not spaces; this matters when we LOAD the data later.
9.4 Working through the Hive shell
dongguo@hadoop-master:~$ sudo -u hive hive
- Hive history file=/tmp/hive/hive_job_log_hive_201210090516_1966038691.txt
- hive>
-
- # Create a Hive table that maps to HBase
- hive> CREATE TABLE hbase_table_1(key int, value string)
- > STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
- > WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val")
- > TBLPROPERTIES ("hbase.table.name" = "xyz");
- OK
- Time taken: 17.919 seconds
- hive>
-
- # hbase.table.name sets the table name in HBase
- # hbase.columns.mapping maps the Hive columns to the HBase column family
-
- # Create a regular Hive table so we can import data with SQL
- hive> CREATE TABLE IF NOT EXISTS pokes (
- > foo INT,
- > bar STRING
- > )ROW FORMAT DELIMITED
- > FIELDS TERMINATED BY "\t"
- > LINES TERMINATED BY "\n";
- OK
- Time taken: 2.883 seconds
- hive>
-
- # Tab (\t) is the field delimiter and newline (\n) the line terminator.
-
- # Bulk-load the data
- hive> LOAD DATA LOCAL INPATH '/tmp/kv1.txt' OVERWRITE INTO TABLE pokes;
- Copying data from file:/tmp/kv1.txt
- Copying file: file:/tmp/kv1.txt
- Loading data to table default.pokes
- Deleted hdfs://hadoop-master/user/hive/warehouse/pokes
- OK
- Time taken: 4.361 seconds
- hive> select * from pokes;
- OK
- 1 www.baidu.com
- 2 www.google.com
- 3 www.sina.com.cn
- 4 www.163.com
- 5 heylinux.com
- Time taken: 0.641 seconds
-
- # Use SQL to insert into hbase_table_1
- hive> INSERT OVERWRITE TABLE hbase_table_1 SELECT * FROM pokes WHERE foo=5;
- Total MapReduce jobs = 1
- Launching Job 1 out of 1
- Number of reduce tasks is set to 0 since there's no reduce operator
- Starting Job = job_201210091928_0001, Tracking URL = http://hadoop-master:50030/jobdetails.jsp?jobid=job_201210091928_0001
- Kill Command = /usr/lib/hadoop/bin/hadoop job -Dmapred.job.tracker=hdfs://hadoop-master:8021 -kill job_201210091928_0001
- 2012-10-09 19:43:25,233 Stage-0 map = 0%, reduce = 0%
- 2012-10-09 19:43:31,323 Stage-0 map = 100%, reduce = 0%
- 2012-10-09 19:43:34,348 Stage-0 map = 100%, reduce = 100%
- Ended Job = job_201210091928_0001
- 1 Rows loaded to hbase_table_1
- OK
- Time taken: 17.325 seconds
- hive>
- # You can see that Hive automatically invoked Hadoop MapReduce for the computation and finally inserted the row with ID 5 into HBase.
-
- # Check the data
- hive> select * from pokes;
- OK
- 1 www.baidu.com
- 2 www.google.com
- 3 www.sina.com.cn
- 4 www.163.com
- 5 heylinux.com
- Time taken: 0.874 seconds
- hive> select * from hbase_table_1;
- OK
- 5 heylinux.com
- Time taken: 1.174 seconds
- hive>
-
- # Next, test the integration with Flume by directly LOADing the data Flume collected into HDFS earlier.
- hive> LOAD DATA INPATH '/flume/rawdata/2012/10/10/0252/rawdata-1-a*' INTO TABLE pokes;
- Loading data to table default.pokes
- OK
- Time taken: 1.65 seconds
- hive> select * from pokes;
- OK
- 1 www.baidu.com
- 2 www.google.com
- 3 www.sina.com.cn
- 4 www.163.com
- 5 heylinux.com
- 100 hadoop-node-1-rawdata-1-value1
- 101 hadoop-node-1-rawdata-1-value2
- 102 hadoop-node-1-rawdata-1-value3
- 103 hadoop-node-1-rawdata-1-value4
- 104 hadoop-node-1-rawdata-1-value5
- 105 hadoop-node-1-rawdata-1-value6
- 106 hadoop-node-1-rawdata-1-value7
- Time taken: 0.623 seconds
- hive>
-
- # As you can see, the data collected into HDFS by Flume was also LOADed successfully.
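As a further illustration that ordinary aggregate queries are also compiled into MapReduce jobs, a one-liner can be run non-interactively with hive -e; a hedged example (the count simply reflects whatever is in pokes at the time):
dongguo@hadoop-master:~$ sudo -u hive hive -e 'SELECT COUNT(1) FROM pokes;'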
9.5 Logging into HBase to check the data just written
dongguo@hadoop-master:~$ sudo -u hdfs hbase shell
- HBase Shell; enter 'help' for list of supported commands.
- Type "exit" to leave the HBase Shell
- Version 0.90.4-cdh3u3, r, Thu Jan 26 10:13:36 PST 2012
-
- hbase(main):001:0> describe 'xyz'
- DESCRIPTION ENABLED
- {NAME => 'xyz', FAMILIES => [{NAME => 'cf1', BLOOMFILTER => 'NONE',
- REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSI true ONS => '3',
- TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false',
- BLOCKCACHE => 'true'}]} 1 row(s) in 0.9640 seconds
-
- hbase(main):002:0> scan 'xyz'
- ROW COLUMN+CELL
- 5 column=cf1:val, timestamp=1349783010142, value=heylinux.com
- 1 row(s) in 0.0790 seconds
-
- hbase(main):003:0>
-
- # Here in HBase you can see the row that was just inserted from Hive, i.e. the record with value heylinux.com.
-
- # Now test inserting data directly in HBase.
- hbase(main):004:0> put 'xyz','6','cf1:val','www.360buy.com'
- 0 row(s) in 0.0690 seconds
-
- hbase(main):005:0> scan 'xyz'
- ROW COLUMN+CELL
- 5 column=cf1:val, timestamp=1349783010142, value=heylinux.com
- 6 column=cf1:val, timestamp=1349783457396, value=www.360buy.com
- 2 row(s) in 0.0320 seconds
-
- hbase(main):006:0>
9.6 Checking the MapReduce job in the MapReduce Administration web UI
From the output of the "use SQL to insert into hbase_table_1" step above, you can see that Hive automatically invoked Hadoop MapReduce for parallel computation and finally inserted the row with ID 5 into HBase.
To see the result in detail, open the MapReduce Administration web UI: http://10.6.1.150:50030
You can clearly see that the job completed successfully.
At this point we have finished building the entire Hadoop cluster architecture, and through Hive we can conveniently drive Hadoop's MapReduce parallel computation with plain SQL.
To wrap up, take another look at the diagram in Hadoop Cluster in Practice (0): Overall Architecture Design to review the whole architecture and data-flow logic, and begin your Hadoop journey!