Single-Node Ceph Installation on Ubuntu 14.04
Guiding questions:
1. What nodes, at minimum, does a Ceph cluster need?
2. How do Ceph and Cinder differ?
3. How does Ceph integrate with Cinder?
Ceph Theory
Notes:
a. A Ceph cluster needs at least 1 mon node and 2 osd nodes to reach the active + clean state (because osd pool default size is >= 2 by default); an mds (metadata) node is only needed when running the Ceph filesystem (CephFS).
So if there is only a single node, run the following commands right after ceph-deploy new to modify the ceph.conf configuration (a combined example [global] section is sketched at the end of these notes):
echo "osd crush chooseleaf type = 0" >> ceph.conf
echo "osd pool default size = 1" >> ceph.conf
The osd crush chooseleaf type parameter is important; see the explanation at: https://ceph.com/docs/master/rados/configuration/ceph-conf/
b. If the host has multiple NICs, you can add a public network = {cidr} parameter to the [global] section of ceph.conf.
c. An OSD block device should preferably be larger than 5 GB, otherwise there will not be enough space when the journal is created; alternatively, shrink the journal:
echo "osd journal size = 100" >> ceph.conf
d. If you don't want the hassle of authentication while testing, you can:
echo "auth cluster required = none" >> ceph.conf
echo "auth service required = none" >> ceph.conf
echo "auth client required = none" >> ceph.conf
Environment Preparation
On the single node node1, install osd (backed by the block devices /dev/ceph-volumes/ceph0 and /dev/ceph-volumes/ceph1), mds, mon, client, and admin all together.
1. Make sure /etc/hosts contains:
127.0.0.1 localhost
192.168.99.116 node1
2. Make sure the machine running ceph-deploy has passwordless SSH access to all other nodes (ssh-keygen && ssh-copy-id othernode).
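If that passwordless access is not set up yet, a minimal sketch (assuming the same user account exists on the remote node):
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa   # generate a key pair without a passphrase
ssh-copy-id node1                          # install the public key on the target node
ssh node1 true                             # should return without prompting for a password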
Installation Steps (note: all of the operations below are performed on the admin node)
1. Prepare two block devices (a block device can be a hard disk or an LVM volume); here we simulate them with a file-backed loop device:
dd if=/dev/zero of=/bak/images/ceph-volumes.img bs=1M count=4096 oflag=direct
sgdisk -g --clear /bak/images/ceph-volumes.img
sudo vgcreate ceph-volumes $(sudo losetup --show -f /bak/images/ceph-volumes.img)
sudo lvcreate -L2G -nceph0 ceph-volumes
sudo lvcreate -L2G -nceph1 ceph-volumes
sudo mkfs.xfs -f /dev/ceph-volumes/ceph0
sudo mkfs.xfs -f /dev/ceph-volumes/ceph1
mkdir -p /srv/ceph/{osd0,osd1,mon0,mds0}
sudo mount /dev/ceph-volumes/ceph0 /srv/ceph/osd0
sudo mount /dev/ceph-volumes/ceph1 /srv/ceph/osd1
If you want to use the raw device directly, just attach it with losetup: sudo losetup --show -f /bak/images/ceph-volumes.img
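Before moving on, it can help to confirm that the loop device, volume group, and mounts are in place; a quick sanity-check sketch:
losetup -a                             # the image file should be attached to a /dev/loopN device
sudo vgs ceph-volumes                  # the volume group backed by the loop device
sudo lvs ceph-volumes                  # should list ceph0 and ceph1
df -h /srv/ceph/osd0 /srv/ceph/osd1    # both XFS filesystems should be mounted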
2. Install ceph-deploy
sudo apt-get install ceph ceph-deploy
3. Pick a working directory and create the cluster: ceph-deploy new {ceph-node}
mkdir -p /bak/work/ceph/ceph-cluster
cd /bak/work/ceph/ceph-cluster
ceph-deploy new node1
If there is only one node, you also need to run:
echo "osd crush chooseleaf type = 0" >> ceph.conf
echo "osd pool default size = 1" >> ceph.conf
echo "osd journal size = 100" >> ceph.conf
4. Install the basic Ceph packages (ceph, ceph-common, ceph-fs-common, ceph-mds, gdisk): ceph-deploy install {ceph-node} [{ceph-node} ...]
ceph-deploy install node1    # if there are multiple nodes, list them all after the command
5. Add a cluster monitor: ceph-deploy mon create {ceph-node}
ceph-deploy mon create node1
6. Gather the keys from the remote nodes into the current directory: ceph-deploy gatherkeys {ceph-node}
ceph-deploy gatherkeys node1
7. Add OSDs: ceph-deploy osd prepare {ceph-node}:/path/to/directory
ceph-deploy osd prepare node1:/srv/ceph/osd0
ceph-deploy osd prepare node1:/srv/ceph/osd1
8. Activate the OSDs: ceph-deploy osd activate {ceph-node}:/path/to/directory
sudo ceph-deploy osd activate node1:/srv/ceph/osd0
sudo ceph-deploy osd activate node1:/srv/ceph/osd1
If the error "ceph-disk: Error: No cluster conf found" appears, you need to empty /srv/ceph/osd0.
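If you do hit that error, one way to recover is to wipe the OSD directory and run prepare/activate again; a sketch (adjust the path to whichever OSD failed):
sudo rm -rf /srv/ceph/osd0/*
ceph-deploy osd prepare node1:/srv/ceph/osd0
sudo ceph-deploy osd activate node1:/srv/ceph/osd0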
9. Copy the admin key to the other nodes: copy ceph.conf and ceph.client.admin.keyring to ceph{1,2,3}:/etc/ceph
ceph-deploy admin node1
10. Verify
sudo ceph -s
sudo ceph osd tree
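A few additional read-only checks that are handy at this point (all standard ceph subcommands):
sudo ceph health     # should eventually report HEALTH_OK
sudo ceph df         # pool and raw capacity usage
sudo ceph mon stat   # monitor status and quorum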
11. Add new mons
Multiple mons provide high availability.
1) Modify /etc/ceph/ceph.conf, e.g.: mon_initial_members = node1 node2
2) Push the config to the other nodes: ceph-deploy --overwrite-conf config push node1 node2
3) Create the mons: ceph-deploy mon create node1 node2
12. Add a new mds. Only the Ceph filesystem needs an mds, and currently only a single mds is officially recommended for production use.
13. To use it as a filesystem, simply mount it: mount -t ceph node1:6789:/ /mnt -o name=admin,secret=<keyring>
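The secret in that mount command is the base64 key from the admin keyring. A sketch of extracting it to a file and mounting with secretfile instead (the /etc/ceph/admin.secret path and /mnt/cephfs mount point are just example choices):
sudo ceph auth get-key client.admin | sudo tee /etc/ceph/admin.secret
sudo mkdir -p /mnt/cephfs
sudo mount -t ceph node1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret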
14. To use it as a block device:
sudo modprobe rbd
sudo ceph osd pool set data min_size 1
sudo rbd create --size 1 -p data test1    # create a 1 MB block device /dev/rbd/{poolname}/{imagename}
sudo rbd map test1 --pool data
sudo mkfs.ext4 /dev/rbd/data/test1
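Once formatted, the image can be mounted like any other block device; to clean up, unmount and unmap it. A sketch (the /mnt/rbd mount point is just an example):
sudo mkdir -p /mnt/rbd
sudo mount /dev/rbd/data/test1 /mnt/rbd
sudo umount /mnt/rbd
sudo rbd unmap /dev/rbd/data/test1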
15. Command-line operations
1) There are 3 pools by default:
$ sudo rados lspools
data
metadata
rbd
Create a pool: $ sudo rados mkpool nova
2) Set min_size of the data pool to 1 (a single replica is enough for I/O); without this, subsequent commands hang and never return:
$ sudo ceph osd pool set data min_size 1
set pool 0 min_size to 1
3) Upload a file: $ sudo rados put test.txt ./test.txt --pool=data
4) List the files:
$ sudo rados -p data ls
test.txt
5) Check the object's location:
$ sudo ceph osd map data test.txt
osdmap e9 pool 'data' (0) object 'test.txt' -> pg 0.8b0b6108 (0.8) -> up ([0], p0) acting ([0], p0)
$ cat /srv/ceph/osd0/current/0.8_head/test.txt__head_8B0B6108__0
test
6) After adding a new OSD, you can watch objects migrate within the cluster with the sudo ceph -w command.
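A couple of other rados commands that are useful for cleaning up after these tests (standard rados subcommands; pool and object names are the ones used above):
sudo rados df                                                # per-pool object and usage statistics
sudo rados -p data rm test.txt                               # remove the test object
sudo rados rmpool nova nova --yes-i-really-really-mean-it    # remove the pool created above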
16. Integrate Ceph with Cinder; see: http://ceph.com/docs/master/rbd/rbd-openstack/
1) Create the pools
sudo ceph osd pool create volumes 128
sudo ceph osd pool create images 128
sudo ceph osd pool set volumes min_size 1
sudo ceph osd pool set images min_size 1
2) Configure the glance-api, cinder-volume, and nova-compute nodes as Ceph clients. Since everything runs on one machine in my case, the following steps are not needed:
a. They all need ceph.conf: ssh {openstack-server} sudo tee /etc/ceph/ceph.conf < /etc/ceph/ceph.conf
b. They all need the Ceph client installed: sudo apt-get install python-ceph ceph-common
c. Create a cinder user for the volumes pool and a glance user for the images pool, and grant them permissions:
sudo ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images'
sudo ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
d. Generate keyrings for cinder and glance (ceph.client.cinder.keyring and ceph.client.glance.keyring):
sudo chown -R hua:root /etc/ceph
ceph auth get-or-create client.glance | ssh {glance-api-server} sudo tee /etc/ceph/ceph.client.glance.keyring
ssh {glance-api-server} sudo chown hua:root /etc/ceph/ceph.client.glance.keyring
ceph auth get-or-create client.cinder | ssh {volume-server} sudo tee /etc/ceph/ceph.client.cinder.keyring
ssh {cinder-volume-server} sudo chown hua:root /etc/ceph/ceph.client.cinder.keyring
e. Configure glance in /etc/glance/glance-api.conf; note that these lines are appended at the end:
default_store=rbd
rbd_store_user=glance
rbd_store_pool=images
show_image_direct_url=True
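After editing the file, restart glance-api so the change takes effect (service name as packaged on Ubuntu 14.04):
sudo service glance-api restart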
f. Also generate the Ceph key client.cinder.key needed by nova-compute's libvirt process:
sudo ceph auth get-key client.cinder | ssh {compute-node} tee /etc/ceph/client.cinder.key
$ sudo ceph auth get-key client.cinder | ssh node1 tee /etc/ceph/client.cinder.key
AQAXe6dTsCEkBRAA7MbJdRruSmW9XEYy/3WgQA==
$ uuidgen
e896efb2-1602-42cc-8a0c-c032831eef17
$ cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
<uuid>e896efb2-1602-42cc-8a0c-c032831eef17</uuid>
<usage type='ceph'>
<name>client.cinder secret</name>
</usage>
</secret>
EOF
$ sudo virsh secret-define --file secret.xml
Secret e896efb2-1602-42cc-8a0c-c032831eef17 created
$ sudo virsh secret-set-value --secret e896efb2-1602-42cc-8a0c-c032831eef17 --base64 $(cat /etc/ceph/client.cinder.key)
$ rm client.cinder.key secret.xml
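To double-check that libvirt actually stored the secret, the standard virsh subcommands can be used:
sudo virsh secret-list                                             # should list the uuid defined above
sudo virsh secret-get-value e896efb2-1602-42cc-8a0c-c032831eef17   # should print the cinder key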
vi /etc/nova/nova.conf
libvirt_images_type=rbd
libvirt_images_rbd_pool=volumes
libvirt_images_rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=cinder
rbd_secret_uuid=e896efb2-1602-42cc-8a0c-c032831eef17
libvirt_inject_password=false
libvirt_inject_key=false
libvirt_inject_partition=-2
Then restart the nova-compute service (see the sketch below).
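On Ubuntu 14.04 with the stock packages this is usually just (service name assumed):
sudo service nova-compute restart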
g. Configure cinder.conf and restart cinder-volume:
sudo apt-get install librados-dev librados2 librbd-dev python-ceph radosgw radosgw-agent
cinder-volume --config-file /etc/cinder/cinder.conf
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
glance_api_version = 2
rbd_user = cinder
rbd_secret_uuid = e896efb2-1602-42cc-8a0c-c032831eef17
rbd_ceph_conf = /etc/ceph/ceph.conf
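After saving cinder.conf, restart the cinder-volume service (service name as packaged on Ubuntu 14.04):
sudo service cinder-volume restart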
17. Run an instance
wget http://download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img
qemu-img convert -f qcow2 -O raw cirros-0.3.2-x86_64-disk.img cirros-0.3.2-x86_64-disk.raw
glance image-create --name cirros --disk-format raw --container-format ovf --file cirros-0.3.2-x86_64-disk.raw --is-public True
$ glance index
ID Name Disk Format Container Format Size
------------------------------------ ------------------------------ -------------------- -------------------- --------------
dbc2b04d-7bf7-4f78-bdc0-859a8a588122 cirros raw ovf 41126400
$ rados -p images ls
rbd_id.dbc2b04d-7bf7-4f78-bdc0-859a8a588122
cinder create --image-id dbc2b04d-7bf7-4f78-bdc0-859a8a588122 --display-name storage1 1
cinder list
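To confirm the volume really landed in the Ceph volumes pool, list the RBD images there (cinder names RBD-backed volumes volume-<volume-id>):
sudo rbd ls volumes          # should show volume-<cinder volume id>
sudo rados -p volumes ls     # raw object listing of the same pool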
18. Destroying a cluster
cd /bak/work/ceph/ceph-cluster/
ceph-deploy purge node1
ceph-deploy purgedata node1
rm -rf /bak/work/ceph/ceph-cluster/*
sudo umount /srv/ceph/osd0
sudo umount /srv/ceph/osd1
mkdir -p /srv/ceph/{osd0,mon0,mds0}
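If the LVM volumes and loop device from the environment-preparation step should be removed as well, a cleanup sketch (the /dev/loop0 device is an assumption; check the real one with losetup -a first):
sudo lvremove -f /dev/ceph-volumes/ceph0 /dev/ceph-volumes/ceph1
sudo vgremove -f ceph-volumes
sudo losetup -d /dev/loop0          # adjust to the device shown by losetup -a
rm -f /bak/images/ceph-volumes.img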