Questions this post answers:
1. What is Puppet?
2. What should you watch out for when installing the RPM packages?
3. How do you install and configure a Puppet environment?
1. Introduction to Puppet
Puppet is an automated system-configuration tool from Puppet Labs, written in Ruby. It runs in client/server (C/S) mode or standalone, supports batch configuration and management of all UNIX and UNIX-like systems, and recent versions also offer limited management of Windows.
Puppet covers the whole server-management lifecycle: initial installation, configuration, updates, and decommissioning.
2. Puppet installation and configuration
2.1 Server-side installation
Install puppet-server.
First set the hostname on both the server and the client, because Puppet identifies nodes by hostname, and add matching entries to the hosts file on both machines, as in the sketch below.
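For example, assuming the server is server.puppet.com at 192.168.2.220 and the client is client.puppet.com at 192.168.2.221 (the IP addresses here are placeholders; substitute your own), /etc/hosts on both machines would contain:

192.168.2.220  server.puppet.com
192.168.2.221  client.puppet.com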
Puppet requires Ruby; to read the command-line help you also need the ruby-rdoc package.
1. Download puppetlabs-release-5-5.noarch.rpm
Reference URL: (link)
Install:
[root@service ~]# rpm -ivh puppetlabs-release-5-5.noarch.rpm
[root@service ~]# yum install puppet-server -y
…
Installed:
  puppet-server.noarch 0:2.7.19-1.el5

Dependency Installed:
  augeas-libs.x86_64 0:0.10.0-3  facter.x86_64 1:1.6.11-1.el5  puppet.noarch 0:2.7.19-1.el5
  ruby.x86_64 0:1.8.5-24.el5  ruby-augeas.x86_64 0:0.4.1-1  ruby-libs.x86_64 0:1.8.5-24.el5
  ruby-shadow.x86_64 0:1.4.1-7

# This step installs ruby, ruby-libs, ruby-rdoc, and related packages by default

[root@service ~]# /etc/init.d/puppetmaster start
Stop iptables and disable SELinux:
[root@service ~]# /etc/init.d/iptables stop
[root@service ~]# sed -i '/SELINUX/ s/enforcing/disabled/' /etc/selinux/config
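Note that the sed edit above only takes effect after a reboot. To keep iptables off across reboots and switch SELinux off in the running system right away, you can additionally run:

[root@service ~]# chkconfig iptables off   # keep iptables disabled at boot
[root@service ~]# setenforce 0             # put SELinux into permissive mode immediately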
2.2 Client-side installation
Install puppet.
Install the Puppet client on the client machine.
Puppet requires Ruby; to read the command-line help you also need the ruby-rdoc package:
[root@client ~]# rpm -ivh puppetlabs-release-5-5.noarch.rpm
[root@client ~]# yum install puppet -y
…
Installed:
  puppet.noarch 0:2.7.19-1.el5
Dependency Installed:
  augeas-libs.x86_64 0:0.10.0-3  facter.x86_64 1:1.6.11-1.el5
  ruby.x86_64 0:1.8.5-24.el5  ruby-augeas.x86_64 0:0.4.1-1
  ruby-libs.x86_64 0:1.8.5-24.el5  ruby-shadow.x86_64 0:1.4.1-7
Complete!
Installation complete.
2.3 Certificate request
The Puppet client and server communicate over an SSL tunnel, so after installing the client you must request a certificate from the server:
Approving certificates
a: The client requests a certificate:
[root@client ~]# puppetd --test --server server.puppet.com

# SSL-related messages should appear:

info: Creating a new SSL key for client.puppet.com
info: Caching certificate for ca
info: Creating a new SSL certificate request for client.puppet.com
info: Certificate Request fingerprint (md5): 74:34:A9:DC:F6:52:B4:96:D1:FF:D3:68:F6:E5:7B:DE
Exiting; no certificate found and waitforcert is disabled
b: The server accepts the request:
[root@server ~]# puppetca --list
"client.puppet.com" (74:34:A9:DC:F6:52:B4:96:D1:FF:D3:68:F6:E5:7B:DE)
This lists the clients with pending requests.
Sign the certificate:
[root@server ~]# puppetca -s client.puppet.com
notice: Signed certificate request for client.puppet.com
notice: Removing file Puppet::SSL::CertificateRequest client.puppet.com at '/var/lib/puppet/ssl/ca/requests/client.puppet.com.pem'

puppetca -s hostname    # sign the named certificate request
puppetca -s -a          # sign all pending certificate requests
c: The client retrieves the signed certificate:
[root@client ~]# puppetd --test --server server.puppet.com
info: Caching certificate for client.puppet.com
info: Caching certificate_revocation_list for ca
info: Caching catalog for client.puppet.com
info: Applying configuration version '1346237401'
notice: Finished catalog run in 0.02 seconds
Done.
Appendix: errors you may encounter
1) Error:
[root@client-109 ~]# puppetd --test --server server.puppet.com
err: Could not retrieve catalog from remote server: certificate verify failed
warning: Not using cache on failed catalog
err: Could not retrieve catalog; skipping run
Cause: the server and client clocks are out of sync.
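A quick fix is to sync both machines against an NTP server with ntpdate, assuming it is installed and the hosts can reach a public time source:

ntpdate pool.ntp.org    # run on both the server and the client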
2) Error:
[root@client ~]# puppetd --server server.puppet.com --test
err: Could not retrieve catalog from remote server: Server hostname 'server.puppet.com'
did not match server certificate; expected one of service.puppet.com,
DNS:puppet, DNS:puppet.puppet.com, DNS:service.puppet.com
Cause: the server's hostname is wrong; check the hostname on the server.
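On CentOS 5 you can check and correct the hostname like this (a sketch using this article's example name):

[root@server ~]# hostname                     # show the current hostname
[root@server ~]# hostname server.puppet.com   # set it for the running system
[root@server ~]# sed -i 's/^HOSTNAME=.*/HOSTNAME=server.puppet.com/' /etc/sysconfig/network   # persist across reboots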
3) Error:
[root@client ~]# puppetd --test --server server.puppet.com
err: Could not retrieve catalog from remote server: certificate verify failed:
[self signed certificate in certificate chain for /CN=Puppet CA: server.puppet.com]
warning: Not using cache on failed catalog
err: Could not retrieve catalog; skipping run
err: Could not send report: certificate verify failed:
[self signed certificate in certificate chain for /CN=Puppet CA: server.puppet.com]
Cause:
If errors like the following appear, delete the ssl directory on the client:

err: Could not retrieve catalog from remote server: certificate verify failed
warning: Not using cache on failed catalog
err: Could not retrieve catalog; skipping run

rm -rf /var/lib/puppet/ssl/

Then go through the certificate request cycle again: puppetd --test --server server.puppet.com
2.4 Verifying the Puppet setup
Write a small example on the server to test the setup. It simply creates a test.txt file under /tmp on the client, with the content: hello,test!
Write the manifest on the server (you do not need to create the file itself on the server):
vi /etc/puppet/manifests/site.pp

node default {
    file {
        "/tmp/test.txt": content => "hello,test!";
    }
}
2.5 Client-side test
Run puppetd on the client; after a successful run the new test.txt appears under /tmp:
[root@client ~]# puppetd --test --server server.puppet.com
# output:
info: Caching catalog for client.puppet.com
info: Applying configuration version '1346237596'
notice: /Stage[main]//Node[default]/File[/tmp/test.txt]/ensure: defined content as '{md5}d7568aced6a958920309da96080e88e0'
notice: Finished catalog run in 0.03 seconds
Finally, check the file:
cat /tmp/test.txt
hello,test!
At this point the Puppet server and client are fully installed; what follows is more in-depth configuration.
2.6 Running the client as a daemon
Method 1: start puppetd in the background:

[root@client tmp]# puppetd --server server.puppet.com --verbose --waitforcert 60

Notes: --server specifies the master node's address.
--waitforcert 60: wait for a signed certificate, re-checking with the server every 60 seconds.
--verbose: print verbose output (optional).
Method 2: use crontab to run the agent on a schedule, as in the sketch below.
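A minimal crontab sketch (the 30-minute interval is an arbitrary choice; pick whatever suits your environment):

# /etc/crontab: run the agent once every 30 minutes instead of as a daemon
*/30 * * * * root /usr/sbin/puppetd --onetime --no-daemonize --server server.puppet.com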
3. A deeper look at Puppet
3.1 Environment architecture diagram
[Figure: architecture diagram of the Puppet environment]
3.2 Server-side configuration directory tree
|-- fileserver.conf
|-- manifests
|   |-- nodes.pp
|   `-- site.pp
|-- modules                 # module definitions
|   `-- users
|       |-- file
|       |-- manifests
|       |   |-- adduser.pp
|       |   |-- deluser.pp
|       |   |-- init.pp
|       |   |-- na.pp
|       |   `-- sa.pp
|       `-- templates
|           |-- caojin_authorized_keys.erb
|           `-- jiaxin_authorized_keys.erb
|-- puppet.conf             # main configuration file
3.3 User-management module
users module directory tree:
users
|-- file
|-- manifests
|   |-- adduser.pp          # class that adds users
|   |-- deluser.pp          # class that removes users
|   |-- init.pp
|   |-- na.pp
|   `-- sa.pp
`-- templates
    |-- caojin_authorized_keys.erb   # user SSH key
    `-- jiaxin_authorized_keys.erb   # user SSH key
The adduser.pp file:
class linux::adduser {
    define add_user ($username, $useruid, $userhome, $usershell='/bin/bash', $groups)
    {
        user
        { $username:
            uid    => $useruid,
            shell  => $usershell,
            groups => $groups,
            home   => "/home/$userhome",
        }
        file
        { "/home/$userhome":
            owner  => $useruid,
            group  => $useruid,
            mode   => 700,
            ensure => directory;
        }
        file
        { "/home/$userhome/.ssh":
            owner   => $useruid,
            group   => $useruid,
            mode    => 700,
            ensure  => directory,
            require => File["/home/$userhome"];
        }
        file
        { "/home/$userhome/.ssh/authorized_keys":
            owner   => $useruid,
            group   => $useruid,
            mode    => 600,
            ensure  => present,
            content => template("users/${userhome}_authorized_keys.erb"),
            require => File["/home/$userhome/.ssh"];
        }
    }
}
deluser.pp:
class linux::deluser
{
    user
    {
        "caojin":
            ensure => absent,
    }
}
sa.pp:
- import "adduser.pp"
- class linux::adduser::sa inherits linux::adduser
- {
- add_user
- {
- "jiaxin":
- useruid => 2000,
- username => jiaxin,
- userhome => "jiaxin",
- groups => $operatingsystem ? {
- Ubuntu => ["admin"],
- CentOS => ["wheel"],
- RedHat => ["wheel"],
- default => ["wheel"],
- },
- }
- }
复制代码
Puppet Chinese wiki: (link)
4. Module-based Puppet management
Puppet module basics
Puppet modules are convenient to import and reuse. First, answers to two earlier questions:
1. To see the module search path, run:
[root@c1.inanu.net]# puppetmasterd --configprint modulepath
/etc/puppet/modules:/usr/share/puppet/modules

# These two directories are Puppet's default module locations.
2. To use a Puppet module that lives in one of the two default paths above:
import "module_name"
If Puppet complains that the module does not exist, say mine lives in /data/modules, there are two fixes:
a) Edit puppet.conf and add the directory to modulepath, for example:

modulepath = /data/modules:/etc/puppet/modules

b) Use an absolute path in the import:
import "/data/modules/module_name"
Example 1: a Puppet module
root@s1:/etc/puppet/modules# cd /etc/puppet/modules
root@s1:/etc/puppet/modules# mkdir -p test/{manifests,files,templates}
About these three directories: files holds files or directories to sync to remote clients; manifests holds the .pp files and must contain an init.pp; templates holds Puppet template files, which end in .erb. A sketch of how files is used follows.
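For illustration, if you drop a file at /etc/puppet/modules/test/files/motd (a hypothetical file), a manifest can push it to clients through a puppet:/// source URL:

file { "/etc/motd":
    ensure => present,
    source => "puppet:///modules/test/motd",   # served from the module's files/ directory
}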
Create init.pp:
# /etc/puppet/modules/test/manifests/init.pp
class test::test {
    file { "/tmp/test":
        owner   => root,
        group   => root,
        ensure  => present,
        content => "Hello word",
        mode    => "0644",
    }
}
Add the following to /etc/puppet/manifests/site.pp:
- node "default" {
- include test::test
- }
复制代码
Note: this is not recommended; in production I put import "nodes.pp" into site.pp and define the nodes in nodes.pp, roughly as sketched below.
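A sketch of that layout:

# /etc/puppet/manifests/site.pp
import "nodes.pp"

# /etc/puppet/manifests/nodes.pp
node "default" {
    include test::test
}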
That gives us our first Puppet module. Run puppet on the client c2.inanu.net and check the result:
congpeijun@s2:/tmp$ puppet agent --server s1.ubuntu.local --test

info: Caching catalog for s2.ubuntu.local
info: Applying configuration version '1339679532'
notice: /Stage[main]/Test/File[/tmp/test]/ensure: created
notice: Finished catalog run in 0.05 seconds
congpeijun@s2:/tmp$ cat test
Hello word
Verifying again, the run succeeded and achieved the expected effect: the test file was generated under /tmp. One thing you may have noticed: we never ran import "test" for the module, yet include test::test worked directly. Try it yourself and observe the effect.
Example 2: automated SSH management
Create the directories and files for the ssh module:
[root@master ~]# mkdir -p /etc/puppet/modules/ssh/{manifests,templates,files}
In the earlier sudo module all the settings lived in init.pp; in the SSH module we instead split the configuration into init.pp, install.pp, config.pp, service.pp, and params.pp.
Create the manifest files:
[root@master ~]# touch /etc/puppet/modules/ssh/manifests/{install.pp,config.pp,service.pp}
Configure params.pp, which holds the module's parameters:
[root@master ~]# vim /etc/puppet/modules/ssh/manifests/params.pp

class ssh::params {
    case $operatingsystem {
        Solaris: {
            $ssh_package_name   = 'openssh'
            $ssh_service_config = '/etc/ssh/sshd_config'
            $ssh_service_name   = 'sshd'
        }
        /(Ubuntu|Debian)/: {
            $ssh_package_name   = 'openssh-server'
            $ssh_service_config = '/etc/ssh/sshd_config'
            $ssh_service_name   = 'ssh'
        }
        /(RedHat|CentOS|Fedora)/: {
            $ssh_package_name   = 'openssh-server'
            $ssh_service_config = '/etc/ssh/sshd_config'
            $ssh_service_name   = 'sshd'
        }
    }
}
Edit the module's init.pp:
[root@master ~]# vim /etc/puppet/modules/ssh/manifests/init.pp

class ssh {
    include ssh::params, ssh::install, ssh::config, ssh::service
}
Edit install.pp:
[root@master ~]# vim /etc/puppet/modules/ssh/manifests/install.pp

class ssh::install {
    package { "$ssh::params::ssh_package_name":
        ensure => installed,
    }
}
Edit config.pp:
[root@master ~]# vim /etc/puppet/modules/ssh/manifests/config.pp

class ssh::config {
    file { $ssh::params::ssh_service_config:
        ensure  => present,
        owner   => 'root',
        group   => 'root',
        mode    => 0600,
        source  => "puppet://$puppetserver/modules/ssh/sshd_config",
        require => Class["ssh::install"],
        notify  => Class["ssh::service"],
    }
}
notify here sends a notification to the named class: if ssh::config changes the file, it notifies the ssh::service class.
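The same dependencies can also be written with relationship-chaining arrows, available since Puppet 2.6 (an equivalent sketch, not what this module uses):

# install first; a change to the config file then triggers a service refresh
Class["ssh::install"] -> File[$ssh::params::ssh_service_config] ~> Class["ssh::service"]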
Edit service.pp:
[root@master ~]# vim /etc/puppet/modules/ssh/manifests/service.pp

class ssh::service {
    service { $ssh::params::ssh_service_name:
        ensure     => running,
        hasstatus  => true,
        hasrestart => true,
        enable     => true,
        require    => Class["ssh::config"],
    }
}
Setting hasstatus tells Puppet that the service supports the status command, i.e. something like service sshd status.
Setting hasrestart tells Puppet that the service supports the restart command, i.e. something like service sshd restart.
Copy the default sshd_config into the module's files directory:
[root@master ~]# cp /etc/ssh/sshd_config /etc/puppet/modules/ssh/files/
The ssh module is now set up; next, apply it to the nodes.
Edit nodes.pp:
[root@master ~]# vim /etc/puppet/manifests/nodes.pp

class base {
    include sudo, ssh
}
node 'client1.centos' {
    include base
}
node 'client2.centos' {
    include base
}
Verify the configuration on a node:
[root@client1 ~]# puppetd --server server.puppet.com --test
5. Managing Puppet with MCollective
MCollective is a scheduler: it relieves the performance and speed problems caused by many puppet agents hitting the master at once; it can classify nodes by their properties and run different tasks against different classes; and it is a control terminal from which you can drive both clients and servers, so puppet agent no longer needs to run on a timer.
MCollective is also a client/server architecture in which clients and servers communicate through middleware; it requires Java and ActiveMQ.
MCollective official documentation: (link)
5.1 Download the packages
Change to the download directory:

[root@server ~]# cd /usr/local/src
Download the RPMs with wget:
wget -c http://downloads.puppetlabs.com/mcollective/tanukiwrapper-3.5.9-1.el5.x86_64.rpm
wget -c http://downloads.puppetlabs.com/mcollective/activemq-5.5.0-1.el5.noarch.rpm
wget -c http://downloads.puppetlabs.com/mcollective/activemq-info-provider-5.5.0-1.el5.noarch.rpm
wget -c http://downloads.puppetlabs.com/mcollective/mcollective-1.0.1-1.el5.noarch.rpm
wget -c http://downloads.puppetlabs.com/mcollective/mcollective-common-1.0.1-1.el5.noarch.rpm
wget -c http://downloads.puppetlabs.com/mcollective/mcollective-client-1.0.1-1.el5.noarch.rpm
Official download page: (link)
5.2 Install Java; version 1.6.0 or later is required
[root@server mogongII]# java -version
java version "1.6.0_22"
Java(TM) SE Runtime Environment (build 1.6.0_22-b04)
Java HotSpot(TM) 64-Bit Server VM (build 17.1-b03, mixed mode)
5.3 Install the RPMs
Mind the installation order; the packages depend on one another:
rpm -ivh tanukiwrapper-3.5.9-1.el5.x86_64.rpm activemq-5.5.0-1.el5.noarch.rpm activemq-info-provider-5.5.0-1.el5.noarch.rpm

yum -y install rubygem-stomp    ## mcollective depends on the ruby stomp library

rpm -ivh mcollective-common-1.0.1-1.el5.noarch.rpm mcollective-client-1.0.1-1.el5.noarch.rpm

rpm -ivh mcollective-1.0.1-1.el5.noarch.rpm
That completes the installation; next comes MCollective configuration.
5.4 Configuring MCollective
5.4.1 Edit the activemq.xml configuration file
vim /etc/activemq/activemq.xml

# Most of the file is unchanged; only the parts shown below were modified.

<users>
  <authenticationUser username="admin" password="secret" groups="mcollective,admins,everyone"/>
  <!-- this username and password are needed later when configuring mcollective -->
  <authenticationUser username="mcollective" password="secret" groups="mcollective,admins,everyone"/>
</users>
</simpleAuthenticationPlugin>
<authorizationPlugin>
  <map>
    <authorizationMap>
      <authorizationEntries>
        <authorizationEntry queue=">" write="admins" read="admins" admin="admins"/>
        <authorizationEntry topic=">" write="admins" read="admins" admin="admins"/>
        <authorizationEntry topic="ActiveMQ.Advisory.>" read="everyone" write="everyone" admin="everyone"/>
      </authorizationEntries>
    </authorizationMap>
  </map>
</authorizationPlugin>
</plugins>
<transportConnectors>
  <transportConnector name="openwire" uri="tcp://0.0.0.0:6166"/>
  <transportConnector name="stomp" uri="stomp://0.0.0.0:6163"/>
</transportConnectors>
Note the stomp transportConnector above: the transport must be stomp, and the URI that follows gives the listening IP address and port.
5.4.2 Edit /etc/mcollective/server.cfg, MCollective's main configuration file:
cat /etc/mcollective/server.cfg
topicprefix = /topic/mcollective
libdir = /usr/libexec/mcollective    ## plugins you add later all go under this directory
logfile = /var/log/mcollective.log
loglevel = info
daemonize = 1
# Plugins
securityprovider = psk
plugin.psk = unset
connector = stomp
plugin.stomp.host = localhost        ## on a client, change this to point at the mcollective server's IP
plugin.stomp.port = 6163
plugin.stomp.user = mcollective      ## the username from activemq.xml
plugin.stomp.password = secret       ## the password defined in activemq.xml
# Facts
factsource = yaml
plugin.yaml = /etc/mcollective/facts.yaml
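factsource = yaml means mcollectived reads this node's facts from the file named by plugin.yaml. One common way to populate it is to dump Facter's output there, assuming facter is installed (often run from cron to keep it fresh):

facter --yaml > /etc/mcollective/facts.yaml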
MCollective client configuration
1. The client needs the mcollective packages installed, as above; then only server.cfg and client.cfg need configuring.
cat /etc/mcollective/server.cfg
# part of the configuration omitted
connector = stomp
plugin.stomp.host = 192.168.2.220    ## point this at the mcollective server's IP address
plugin.stomp.port = 6163
plugin.stomp.user = mcollective      ## username used to connect to the server
plugin.stomp.password = secret       ## password used to connect to the server

If one machine acts as both server and client, configure client.cfg as well; it only needs the server's IP and the connection username and password, and is essentially the same as server.cfg. In my tests, client.cfg also pointed at the server.
With the master and the clients configured, start the services. On the server:
# /etc/init.d/activemq start
# /etc/init.d/mcollective start
On a client, only:
# /etc/init.d/mcollective start
Check the logs; if nothing looks wrong, move on to testing.
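Optionally, to have both services come back after a reboot on CentOS (assuming the RPM init scripts are chkconfig-aware):

chkconfig activemq on
chkconfig mcollective on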
5.5 Testing MCollective
Run the following commands on the server to test the functionality.
[root@primarylb mcollective]# mc-ping
primarylb.test.com    time=57.94 ms
backuplb.test.com     time=95.19 ms

[root@primarylb mcollective]# mc-find-hosts
primarylb.test.com
backuplb.test.com
[root@primarylb 1.8]# mc-controller stats
Determining the amount of hosts matching filter for 2 seconds .... 2
primarylb.test.com  total=46 replies=22 valid=46 invalid=0 filtered=18 passed=28
backuplb.test.com   total=23 replies=13 valid=23 invalid=0 filtered=5 passed=18
---- mcollectived controller summary ----
           Nodes: 2 / 2
      Start Time: Wed Oct 12 15:58:04 +0800 2011
  Discovery Time: 2002.05ms
      Agent Time: 52.90ms
      Total Time: 2054.95ms
[root@primarylb mcollective]# mc-rpc rpcutil agent_inventory -I primarylb.test.com
Determining the amount of hosts matching filter for 2 seconds .... 1
 * [ ============================================================ ] 1 / 1

primarylb.test.com
   Agents:
    [{:agent=>"discovery",
      :author=>"R.I.Pienaar <rip@devco.net>",
      :license=>"Apache License, Version 2"},
     {:url=>"http://www.puppetlabs.com/mcollective",
      :name=>"filemgr",
      :timeout=>5,
      :description=>"File Manager",
      :agent=>"filemgr",
      :author=>"Mike Pountney <mike.pountney@gmail.com>",
      :version=>"0.3",
      :license=>"Apache 2"},
     {:url=>"http://marionette-collective.org/",
      :name=>"Utilities and Helpers for SimpleRPC Agents",
      :timeout=>10,
      :description=>"General helpful actions that expose stats and internals to SimpleRPC clients",
      :agent=>"rpcutil",
      :author=>"R.I.Pienaar <rip@devco.net>",
      :version=>"1.0",
      :license=>"Apache License, Version 2.0"}]

Finished processing 1 / 1 hosts in 55.68 ms
[root@primarylb mcollective]# mc-rpc filemgr status file=/etc/puppet/puppet.conf
## requires the filemgr plugin
Determining the amount of hosts matching filter for 2 seconds .... 1
 * [ ============================================================ ] 1 / 1

primarylb.test.com
      Type: file
      Access time: Wed Oct 12 14:10:52 +0800 2011
      Status: present
      Present: 1
      Mode: 100644
      Modification time: Tue Oct 11 17:20:08 +0800 2011
      MD5: 4b520679f63967447dacb00c87ac8c3f
      Change time: Tue Oct 11 17:20:08 +0800 2011
      Modification time: 1318324808
      Owner: 0
      Change time: 1318324808
      Size: 1574
      Group: 0
      Name: /etc/puppet/puppet.conf
      Access time: 1318399852

Finished processing 1 / 1 hosts in 79.03 ms
I have installed some of the plugins; for more plugins see (link).
Next: Installing and configuring Puppet on CentOS 5 (part 2)