Unified Hadoop deployment with Ambari
Guiding questions:
1. What steps does installing Ambari involve?
2. What does Ambari do with Hadoop?
1. Basic tools: yum, scp, curl, wget, pdsh, ssh
2. Environment preparation
2.1. System environment: CentOS 6.5 64-bit, Ambari 1.4.3.38
2.2. Passwordless root SSH between the ambari-server host and the ambari-agent hosts. On the ambari-server host, run ssh-keygen to generate root's key pair (id_rsa, id_rsa.pub). Copy the id_rsa.pub generated on the ambari-server host to every cluster machine and append it: cat id_rsa.pub >> authorized_keys. Then verify from the ambari-server host: ssh root@ambari-agent-host
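As a concrete reference, the key distribution can be scripted roughly like this; node1, node2 and node3 are hypothetical placeholders for your actual agent hosts (the first round of scp/ssh will still ask for the root password).
# On the ambari-server host, as root: generate the key pair (press Enter at every prompt)
ssh-keygen -t rsa
# Append the public key to root's authorized_keys on every agent host
for host in node1 node2 node3; do
  scp ~/.ssh/id_rsa.pub root@$host:/tmp/
  ssh root@$host "mkdir -p ~/.ssh && cat /tmp/id_rsa.pub >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys"
done
# Verify passwordless login
ssh root@node1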
2.3. Synchronize time on all cluster machines: install and enable the ntp service.
2.4. Disable SELinux on all cluster machines: setenforce 0
2.5. Stop the firewall on all cluster machines: /etc/init.d/iptables stop
2.6. Disable PackageKit on all CentOS cluster machines: edit /etc/yum/pluginconf.d/refresh-packagekit.conf and set enabled=0 (a combined command sketch for 2.3 to 2.6 follows below).
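Collected below is a sketch of the commands for 2.3 through 2.6, to be run on every cluster machine (CentOS 6 syntax; setenforce 0 only lasts until reboot, so the sed on /etc/selinux/config makes it permanent, which is an addition to the original steps).
# 2.3 time synchronization
yum install -y ntp
chkconfig ntpd on
service ntpd start
# 2.4 disable SELinux now and across reboots
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# 2.5 stop the firewall and keep it off after reboot
/etc/init.d/iptables stop
chkconfig iptables off
# 2.6 disable PackageKit (only present on desktop-style CentOS installs)
sed -i 's/^enabled=.*/enabled=0/' /etc/yum/pluginconf.d/refresh-packagekit.conf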
3. Installation preparation. Note: this section only needs to be done on the ambari-server host.
3.1. Install the repository:
wget http://public-repo-1.hortonworks ... /1.x/GA/ambari.repo
cp ambari.repo /etc/yum.repos.d
This repository can be very slow. If it is, you can set up a local repository (the rest of this subsection is optional; the Ambari repository needs few resources, so a local repository is usually unnecessary). The steps are as follows:
1. Download the pre-packaged Ambari repository (shared by the author via Baidu Cloud).
2. Deploy the repository on the ambari-server machine: extract the downloaded archive into a directory such as /var/www/html/.
3. Modify the ambari.repo file as follows; the baseurl lines are what change, pointing at the local copy:

name=Ambari 1.x
baseurl=file:///var/www/html/ambari/
gpgcheck=0
gpgkey=http://public-repo-1.hortonworks.com/ambari/centos6/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1

name=Hortonworks Data Platform Utils Version - HDP-UTILS-1.1.0.16
baseurl=file:///var/www/html/ambari/
gpgcheck=0
gpgkey=http://public-repo-1.hortonworks.com/ambari/centos6/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1

name=ambari-1.x - Updates
baseurl=file:///var/www/html/ambari/
gpgcheck=0
gpgkey=http://public-repo-1.hortonworks.com/ambari/centos6/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1
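After editing ambari.repo you can check that yum actually sees the local repository; this is only a sanity check, not part of the original steps.
yum clean all
yum repolist        # the Ambari entries should appear with a non-zero package count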
3.2. Install the epel repository:
yum install epel-release
yum repolist
3.3. Install ambari-server:
yum install ambari-server
3.4. Set up ambari-server:
ambari-server setup
Accepting the defaults at every prompt is fine. ambari-server needs a database and uses PostgreSQL by default; if you choose MySQL instead, you also need to put the JDBC driver into the /usr/lib/ambari-server directory. Setup automatically downloads jdk-6u31-linux-x64.bin into /var/lib/ambari-server/resources; you can also download it yourself beforehand.
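If you do pick MySQL, here is a minimal sketch of putting the JDBC driver in place; the jar path assumes the mysql-connector-java package from the CentOS repositories and may differ in your environment.
yum install -y mysql-connector-java
cp /usr/share/java/mysql-connector-java.jar /usr/lib/ambari-server/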
After installation, start ambari-server:
ambari-server start
If startup fails, check the log files under /var/log/ambari-server. To check the ambari-server status:
ambari-server status
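For example, to look at the most recent server log entries (ambari-server.log is the main log file in that directory):
ls /var/log/ambari-server/
tail -n 100 /var/log/ambari-server/ambari-server.log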
Note: if you are not using a repository you created yourself, skip the following steps. ambari-server starts its own web server with document root /usr/lib/ambari-server/web on port 8080, reachable at http://ambari-server-hostname:8080/. We can reuse this web server (or any existing one) to serve our Ambari and HDP repositories. The steps are:
1. Copy /var/www/html/ambari/ from section 3.1 into /usr/lib/ambari-server/web:
cp -r /var/www/html/ambari/ /usr/lib/ambari-server/web
2. At the same time, modify ambari.repo so that the baseurl entries point at the web server:

name=Ambari 1.x
baseurl=http://ambari-server-hostname:8080/ambari/
gpgcheck=0
gpgkey=http://public-repo-1.hortonworks.com/ambari/centos6/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1

name=Hortonworks Data Platform Utils Version - HDP-UTILS-1.1.0.16
baseurl=http://ambari-server-hostname:8080/ambari/
gpgcheck=0
gpgkey=http://public-repo-1.hortonworks.com/ambari/centos6/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1

name=ambari-1.x - Updates
baseurl=http://ambari-server-hostname:8080/ambari/
gpgcheck=0
gpgkey=http://public-repo-1.hortonworks.com/ambari/centos6/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1
3. Create the HDP repository. Download the HDP repository (packaged by the author, shared via Baidu Cloud) and extract it under /usr/lib/ambari-server/web; you should then see an hdp folder there.
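Once both repositories are under the web root, you can confirm they are reachable through the embedded web server (curl is already in the tool list from section 1); replace ambari-server-hostname with your real hostname, and a 404 here means the copy in step 1 or step 3 went to the wrong directory.
curl -I http://ambari-server-hostname:8080/ambari/
curl -I http://ambari-server-hostname:8080/hdp/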
4. Modify the repoinfo.xml file:
cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/repos
Change the section shown below. I am using CentOS 6.5, so I modified the matching entry.
<os type="centos6">
  <repo>
    <baseurl>http://ambari-server-hostname:8080/hdp/</baseurl>
    <repoid>HDP-2.0.6</repoid>
    <reponame>HDP</reponame>
  </repo>
</os>
<os type="oraclelinux6">
  <repo>
    <baseurl>http://ambari-server-hostname:8080/hdp/</baseurl>
    <repoid>HDP-2.0.6</repoid>
    <reponame>HDP</reponame>
  </repo>
</os>
<os type="redhat6">
  <repo>
    <baseurl>http://ambari-server-hostname:8080/hdp/</baseurl>
    <repoid>HDP-2.0.6</repoid>
    <reponame>HDP</reponame>
  </repo>
</os>
During installation, Ambari creates an HDP.repo repository file on each cluster machine; its contents are generated from the information modified here.
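repoinfo.xml is read by ambari-server, so if the server was already running when you edited it, restarting it should ensure the new baseurl values are picked up (my assumption; it does no harm either way):
ambari-server restart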
4. Installation
Open http://ambari-server-hostname:8080/ in a browser and follow the wizard step by step. When you reach the stack selection step, be careful to pick the version that matches your setup; I use 2.0.6, which is also what the earlier sections configured. Other versions need the same kind of changes.
The installation may fail several times. If it still fails after many retries, you have to clean the related libraries, users, directories, and configuration off every machine, so it is best to start from a freshly installed, clean system.
5. Reinstalling
Note: the following steps come from the web; I followed them during my own installation. Some details may not match your environment exactly; the point is simply to remove the related packages, users, and configuration.
1. Stop Ambari. On all cluster machines:
ambari-agent stop
On the ambari-server machine:
ambari-server stop
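With pdsh from the tool list in section 1, the agents on all machines can be stopped in one go; node[1-3] is a hypothetical host range, substitute your own hosts.
pdsh -w node[1-3] "ambari-agent stop"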
2. Remove the installed packages
# Use yum list installed | grep HDP to check which Hadoop-related packages are installed
yum remove -y sqoop.noarch
yum remove -y lzo-devel.x86_64
yum remove -y hadoop-libhdfs.x86_64
yum remove -y rrdtool.x86_64
yum remove -y hbase.noarch
yum remove -y pig.noarch
yum remove -y lzo.x86_64
yum remove -y ambari-log4j.noarch
yum remove -y oozie.noarch
yum remove -y oozie-client.noarch
yum remove -y gweb.noarch
yum remove -y snappy-devel.x86_64
yum remove -y hcatalog.noarch
yum remove -y python-rrdtool.x86_64
yum remove -y nagios.x86_64
yum remove -y webhcat-tar-pig.noarch
yum remove -y snappy.x86_64
yum remove -y libconfuse.x86_64
yum remove -y webhcat-tar-hive.noarch
yum remove -y ganglia-gmetad.x86_64
yum remove -y extjs.noarch
yum remove -y hive.noarch
yum remove -y hadoop-lzo.x86_64
yum remove -y hadoop-lzo-native.x86_64
yum remove -y hadoop-native.x86_64
yum remove -y hadoop-pipes.x86_64
yum remove -y nagios-plugins.x86_64
yum remove -y hadoop.x86_64
yum remove -y zookeeper.noarch
yum remove -y hadoop-sbin.x86_64
yum remove -y ganglia-gmond.x86_64
yum remove -y libganglia.x86_64
yum remove -y perl-rrdtool.x86_64
yum remove -y epel-release.noarch
yum remove -y compat-readline5*
yum remove -y fping.x86_64
yum remove -y perl-Crypt-DES.x86_64
yum remove -y exim.x86_64
yum remove -y ganglia-web.noarch
yum remove -y perl-Digest-HMAC.noarch
yum remove -y perl-Digest-SHA1.x86_64
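Instead of typing every package by hand, removal can also be driven from the installed-package check above; this is only a sketch, it assumes the HDP packages all show an HDP repository label in yum list installed, and you should review the matched list before running it.
yum list installed | grep -i hdp | awk '{print $1}' | xargs -r yum remove -y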
3. Remove the users
userdel nagios
userdel hive
userdel ambari-qa
userdel hbase
userdel oozie
userdel hcat
userdel mapred
userdel hdfs
userdel rrdcached
userdel zookeeper
userdel sqoop
userdel puppet
4. Remove the alternatives symlinks
cd /etc/alternatives
rm -rf hadoop-etc
rm -rf zookeeper-conf
rm -rf hbase-conf
rm -rf hadoop-log
rm -rf hadoop-lib
rm -rf hadoop-default
rm -rf oozie-conf
rm -rf hcatalog-conf
rm -rf hive-conf
rm -rf hadoop-man
rm -rf sqoop-conf
rm -rf hadoop-conf
5. Remove the directories
rm -rf /var/lib/pgsql
rm -rf /hadoop
rm -rf /etc/hadoop
rm -rf /etc/hbase
rm -rf /etc/hcatalog
rm -rf /etc/hive
rm -rf /etc/ganglia
rm -rf /etc/nagios
rm -rf /etc/oozie
rm -rf /etc/sqoop
rm -rf /etc/zookeeper
rm -rf /var/run/hadoop
rm -rf /var/run/hbase
rm -rf /var/run/hive
rm -rf /var/run/ganglia
rm -rf /var/run/nagios
rm -rf /var/run/oozie
rm -rf /var/run/zookeeper
rm -rf /var/log/hadoop
rm -rf /var/log/hbase
rm -rf /var/log/hive
rm -rf /var/log/nagios
rm -rf /var/log/oozie
rm -rf /var/log/zookeeper
rm -rf /usr/lib/hadoop
rm -rf /usr/lib/hbase
rm -rf /usr/lib/hcatalog
rm -rf /usr/lib/hive
rm -rf /usr/lib/oozie
rm -rf /usr/lib/sqoop
rm -rf /usr/lib/zookeeper
rm -rf /var/lib/hive
rm -rf /var/lib/ganglia
rm -rf /var/lib/oozie
rm -rf /var/lib/zookeeper
rm -rf /var/tmp/oozie
rm -rf /tmp/hive
rm -rf /tmp/nagios
rm -rf /tmp/ambari-qa
rm -rf /tmp/sqoop-ambari-qa
rm -rf /var/nagios
rm -rf /hadoop/oozie
rm -rf /hadoop/zookeeper
rm -rf /hadoop/mapred
rm -rf /hadoop/hdfs
rm -rf /tmp/hadoop-hive
rm -rf /tmp/hadoop-nagios
rm -rf /tmp/hadoop-hcat
rm -rf /tmp/hadoop-ambari-qa
rm -rf /tmp/hsperfdata_hbase
rm -rf /tmp/hsperfdata_hive
rm -rf /tmp/hsperfdata_nagios
rm -rf /tmp/hsperfdata_oozie
rm -rf /tmp/hsperfdata_zookeeper
rm -rf /tmp/hsperfdata_mapred
rm -rf /tmp/hsperfdata_hdfs
rm -rf /tmp/hsperfdata_hcat
rm -rf /tmp/hsperfdata_ambari-qa
6. Remove the Ambari packages
# Use this command to check: yum list installed | grep ambari
yum remove -y ambari-*
yum remove -y postgresql
rm -rf /var/lib/ambari*
rm -rf /var/log/ambari*
rm -rf /etc/ambari*
7. Remove HDP.repo and ambari.repo
cd /etc/yum.repos.d/
rm -rf HDP*
rm -rf ambari*
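After removing the repo files, clearing the yum metadata cache avoids stale repository data when you reinstall; this step is my addition, not part of the original list.
yum clean all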