In OpenStack, creating an instance requires choosing a flavor. The flavor defines the instance's hardware profile: number of CPUs, memory size, and (root) disk size. Because flavors are general-purpose, we do not make the root disk very large, which is unsuitable for database users or anyone else who needs large-capacity disks.
Characteristics of Cinder volumes:
Instances and volumes are created and managed independently
A volume must be attached to an instance (via a command) before it can be used
When an instance is deleted, the data on its volumes survives
Each volume can be attached to only one instance at a time; concurrent access from several instances is not possible
Volume data is kept permanently (unless deleted manually)
An instance's root disk is created on the compute node; volumes are created on shared storage (Ceph)
In OpenStack, the cinder command creates, deletes, and modifies volumes, while the nova command manages volumes from the instance side (attach/detach).
Cinder types
[mw_shl_code=bash,true][root@hh-yun-puppet-129021 ~(keystone_admin)]# cinder type-list
+--------------------------------------+----------------+
| ID | Name |
+--------------------------------------+----------------+
| 45fdd68a-ca0f-453c-bd10-17e826a1105e | CEPH-SATA |
| 4a323411-cd36-4282-b29d-c2f2d24191e9 | GLUSTERFS-SSD |
| 919dc90f-c559-44c9-bc68-2d1dfbe3cf8a | GLUSTERFS-SATA |
+--------------------------------------+----------------+[/mw_shl_code]
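Scripts sometimes need a type's UUID rather than its name. As a small illustration (not part of the platform's tooling), the table printed above can be parsed into a name-to-ID mapping like this:

```python
# Parse the ASCII table printed by `cinder type-list` into a {name: id}
# dict, so a script can look up a volume type's UUID by its name.
SAMPLE = """
+--------------------------------------+----------------+
| 45fdd68a-ca0f-453c-bd10-17e826a1105e | CEPH-SATA |
| 4a323411-cd36-4282-b29d-c2f2d24191e9 | GLUSTERFS-SSD |
| 919dc90f-c559-44c9-bc68-2d1dfbe3cf8a | GLUSTERFS-SATA |
+--------------------------------------+----------------+
"""

def parse_type_list(text):
    types = {}
    for line in text.splitlines():
        if not line.startswith("|"):
            continue                    # skip the +----+ border rows
        cells = [c.strip() for c in line.strip("|").split("|")]
        if len(cells) == 2 and cells[0] != "ID":
            types[cells[1]] = cells[0]  # Name -> ID
    return types

print(parse_type_list(SAMPLE)["CEPH-SATA"])
```

In a real script the same function could be fed the live output of `cinder type-list` via a subprocess call.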
The current production environment uses only the CEPH-SATA type; the GLUSTERFS types have been retired.
When creating a volume with the cinder command, we must pass the cinder type by name, i.e. CEPH-SATA.
Cinder backend services
[mw_shl_code=bash,true][root@hh-yun-puppet-129021 ~(keystone_admin)]# cinder service-list
+------------------+-------------------------------------+------+---------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+------------------+-------------------------------------+------+---------+-------+----------------------------+-----------------+
| cinder-backup | hh-yun-cinder.vclound.com | nova | enabled | up | 2016-03-15T07:04:21.000000 | None |
| cinder-scheduler | hh-yun-cinder.vclound.com | nova | enabled | up | 2016-03-15T07:04:15.000000 | None |
| cinder-volume | hh-yun-cinder.vclound.com@CEPH_SATA | nova | enabled | up | 2016-03-15T07:04:19.000000 | None |
+------------------+-------------------------------------+------+---------+-------+----------------------------+-----------------+[/mw_shl_code]
| Process | Role |
| --- | --- |
| cinder-backup | used for snapshots, backups, and similar operations |
| cinder-scheduler | the Cinder scheduler; receives instructions from the API |
| cinder-volume | receives volume create/delete/modify instructions and carries them out on the storage backend |
Please note
the host value of the cinder-volume process. When cinder-volume receives an instruction to create, delete, or modify a volume, it must locate that host (hh-yun-cinder.vclound.com@CEPH_SATA), which is defined in the configuration file (/etc/cinder/cinder.conf).
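For reference, a minimal sketch of what such a backend definition in /etc/cinder/cinder.conf might look like; the pool name and paths below are assumptions, not copied from the production file:

```ini
# /etc/cinder/cinder.conf (excerpt, illustrative)
[DEFAULT]
enabled_backends = CEPH_SATA

[CEPH_SATA]
# The section name is what appears after the '@' in the service's host.
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = CEPH-SATA
rbd_pool = volumes                    ; assumed pool name
rbd_ceph_conf = /etc/ceph/ceph.conf   ; assumed path
```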
Creating a volume
[mw_shl_code=bash,true]cinder create --display-name 'user for db' --volume-type CEPH-SATA 20
+--------------------------------+--------------------------------------+
| Property | Value |
+--------------------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| created_at | 2016-03-14T09:24:52.000000 |
| display_description | None |
| display_name | user for db |
| encrypted | False |
| id | d1e1194f-d33b-4169-9ce2-6644fa111d89 |
| metadata | {u'readonly': u'False'} |
| os-vol-host-attr:host | hh-yun-cinder.vclound.com@CEPH_SATA |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | bb0b51d166254dc99bc7462c0ac002ff |
| size | 20 |
| snapshot_id | None |
| source_volid | None |
| status | available |
| user_id | 226e71f1c1aa4bae85485d1d17b6f0ae |
| volume_type | CEPH-SATA |
+--------------------------------+--------------------------------------+[/mw_shl_code]
| Parameter | Description |
| --- | --- |
| --display-name | the volume's display name |
| --volume-type | which Cinder backend to create the volume on |
| 20 | create a 20 GB volume |

History: early on, the cloud platform used both a GlusterFS backend and a Ceph backend; it later transitioned to using only the Ceph backend.
[mw_shl_code=bash,true][root@hh-yun-puppet-129021 ~(keystone_admin)]# cinder list --display-name 'user for db'
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| d1e1194f-d33b-4169-9ce2-6644fa111d89 | available | user for db | 20 | CEPH-SATA | false | |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+[/mw_shl_code]
Attaching a volume
First, check the instance's state:
[mw_shl_code=bash,true][root@hh-yun-puppet-129021 ~(keystone_admin)]# nova show 7dcfee9a-9338-489b-a335-215b8441d67b | grep -E 'volumes_attached|vm_state|name'
| OS-EXT-SRV-ATTR:hypervisor_hostname | hh-yun-compute-130214.vclound.com |
| OS-EXT-SRV-ATTR:instance_name | instance-00017faf |
| OS-EXT-STS:vm_state | active |
| key_name | - |
| name | terry.gz.vclound.com |
| os-extended-volumes:volumes_attached | [] |[/mw_shl_code]
Use nova volume-attach <instance id> <volume id> to attach the volume:
[mw_shl_code=bash,true]nova volume-attach 7dcfee9a-9338-489b-a335-215b8441d67b d1e1194f-d33b-4169-9ce2-6644fa111d89
+----------+--------------------------------------+
| Property | Value |
+----------+--------------------------------------+
| device | /dev/vdc |
| id | d1e1194f-d33b-4169-9ce2-6644fa111d89 |
| serverId | 7dcfee9a-9338-489b-a335-215b8441d67b |
| volumeId | d1e1194f-d33b-4169-9ce2-6644fa111d89 |
+----------+--------------------------------------+[/mw_shl_code]
After attaching, check the instance state again:
[mw_shl_code=bash,true][root@hh-yun-puppet-129021 ~(keystone_admin)]# nova show 7dcfee9a-9338-489b-a335-215b8441d67b | grep -E 'volumes_attached|vm_state|name'
| OS-EXT-SRV-ATTR:hypervisor_hostname | hh-yun-compute-130214.vclound.com |
| OS-EXT-SRV-ATTR:instance_name | instance-00017faf |
| OS-EXT-STS:vm_state | active |
| key_name | - |
| name | terry.gz.vclound.com |
| os-extended-volumes:volumes_attached | [{"id": "d1e1194f-d33b-4169-9ce2-6644fa111d89"}] |[/mw_shl_code]
The volumes_attached field now lists the attached volume.
If several volumes are attached, several volume IDs appear in this field.
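The field's value is a JSON list, so it can also be consumed programmatically. An illustrative sketch, using the table row shown above as sample input:

```python
import json

# The value of "os-extended-volumes:volumes_attached" in `nova show`
# output is a JSON list with one entry per attached volume.
line = '| os-extended-volumes:volumes_attached | [{"id": "d1e1194f-d33b-4169-9ce2-6644fa111d89"}] |'

# Take the second cell of the table row and decode it.
field = line.strip("|").split("|")[1].strip()
ids = [v["id"] for v in json.loads(field)]
print(ids)  # one volume ID per attached volume
```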
Detaching a volume
[mw_shl_code=bash,true]nova volume-detach 7dcfee9a-9338-489b-a335-215b8441d67b d1e1194f-d33b-4169-9ce2-6644fa111d89
[/mw_shl_code]
Database analysis
The examples below use the following IDs:

| | Instance | Volume |
| --- | --- | --- |
| ID | 7dcfee9a-9338-489b-a335-215b8441d67b | d1e1194f-d33b-4169-9ce2-6644fa111d89 |
The volume has been attached to the instance as the /dev/vdc device. Volume information is mainly reflected in two database tables:
1. cinder.volumes (a record is added here when a volume is created)
2. nova.block_device_mapping (a record is added here when a volume is attached)

For the 'user for db' volume created above, the cinder.volumes record looks like this:
[mw_shl_code=bash,true]mysql> select created_at, deleted, id, user_id, project_id, host, size, mountpoint, status, attach_time, display_name, attached_host from cinder.volumes where id='d1e1194f-d33b-4169-9ce2-6644fa111d89' \G
*************************** 1. row ***************************
created_at: 2016-03-14 09:24:52
deleted: 0
id: d1e1194f-d33b-4169-9ce2-6644fa111d89
user_id: 226e71f1c1aa4bae85485d1d17b6f0ae
project_id: bb0b51d166254dc99bc7462c0ac002ff
host: hh-yun-cinder.vclound.com@CEPH_SATA
size: 20
mountpoint: NULL
status: available
attach_time: NULL
display_name: user for db
attached_host: NULL
1 row in set (0.00 sec)[/mw_shl_code]
| Column | Meaning |
| --- | --- |
| created_at | when the volume was created |
| deleted | 0 means the volume is live; non-zero means it has been deleted |
| id | volume UUID |
| user_id | the volume's owner |
| project_id | the tenant the volume belongs to |
| host | the volume's backend endpoint |
| size | volume size (GB) |
| mountpoint | mount point |
| status | deleted: the volume has been deleted; in-use: attached and usable; available: idle, can be attached or deleted; attaching: attach in progress; detaching: detach in progress; error: the volume is in an error state and needs troubleshooting |
| attach_time | when the volume was attached |
| display_name | the volume's display name |
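To make the deleted/status semantics concrete, here is an illustrative sketch using an in-memory SQLite stand-in for the real MySQL table (columns trimmed to the ones discussed above; the second row is invented sample data):

```python
import sqlite3

# Miniature stand-in for cinder.volumes, keeping only the columns the
# text above discusses. Illustrative only; the real table lives in MySQL.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE volumes (id TEXT, display_name TEXT, deleted INTEGER, status TEXT)")
db.executemany("INSERT INTO volumes VALUES (?,?,?,?)", [
    ("d1e1194f-d33b-4169-9ce2-6644fa111d89", "user for db", 0, "available"),
    ("00000000-0000-0000-0000-000000000001", "old volume",  1, "deleted"),
])

# A deleted volume keeps its row, with deleted != 0 and status = 'deleted',
# so queries for live volumes must filter on the deleted flag.
rows = db.execute("SELECT id, status FROM volumes WHERE deleted = 0").fetchall()
print(rows)
```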
Note that after a volume is deleted, the data above shows deleted as non-zero and status as deleted.

Once the volume is attached successfully, the nova.block_device_mapping record looks like this:
[mw_shl_code=bash,true]mysql> select created_at, device_name, volume_id, volume_size, instance_uuid, deleted from nova.block_device_mapping where volume_id='d1e1194f-d33b-4169-9ce2-6644fa111d89' and deleted =0 \G
*************************** 1. row ***************************
created_at: 2016-03-15 07:24:38
device_name: /dev/vdc
volume_id: d1e1194f-d33b-4169-9ce2-6644fa111d89
volume_size: NULL
instance_uuid: 7dcfee9a-9338-489b-a335-215b8441d67b
deleted: 0
1 row in set (0.00 sec)[/mw_shl_code]
| Column | Meaning |
| --- | --- |
| created_at | when the attach was recorded |
| device_name | the virtual disk device inside the instance after attaching |
| volume_id | volume UUID |
| deleted | 0 means the record is live; non-zero means it has been deleted |
| instance_uuid | instance UUID |
Note that after the volume is detached, deleted becomes non-zero in this record.