About云 — leo_1989's space (https://aboutyun.com/?1335)

ceph-deploy OSD deployment failure

Posted 2015-9-25 17:23

Environment: CentOS 6.4, kernel linux-2.6.32.

Preparing and then activating the OSDs from the deploy node fails; the error output is as follows:

[root@cephadm my-cluster]# ceph-deploy osd prepare ceph02:/tmp/osd0 ceph03:/tmp/osd1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.20): /usr/bin/ceph-deploy osd prepare ceph02:/tmp/osd0 ceph03:/tmp/osd1
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph02:/tmp/osd0: ceph03:/tmp/osd1:
[ceph02][DEBUG ] connected to host: ceph02 
[ceph02][DEBUG ] detect platform information from remote host
[ceph02][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.4 Final
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph02
[ceph02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph02][WARNIN] osd keyring does not exist yet, creating one
[ceph02][DEBUG ] create a keyring file
[ceph02][INFO  ] Running command: udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host ceph02 disk /tmp/osd0 journal None activate False
[ceph02][INFO  ] Running command: ceph-disk -v prepare --fs-type xfs --cluster ceph -- /tmp/osd0
[ceph02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[ceph02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[ceph02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[ceph02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[ceph02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[ceph02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[ceph02][WARNIN] DEBUG:ceph-disk:Preparing osd data dir /tmp/osd0
[ceph02][INFO  ] checking OSD status...
[ceph02][INFO  ] Running command: ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph02 is now ready for osd use.
[ceph03][DEBUG ] connected to host: ceph03 
[ceph03][DEBUG ] detect platform information from remote host
[ceph03][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.4 Final
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph03
[ceph03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph03][WARNIN] osd keyring does not exist yet, creating one
[ceph03][DEBUG ] create a keyring file
[ceph03][INFO  ] Running command: udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host ceph03 disk /tmp/osd1 journal None activate False
[ceph03][INFO  ] Running command: ceph-disk -v prepare --fs-type xfs --cluster ceph -- /tmp/osd1
[ceph03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[ceph03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[ceph03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[ceph03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[ceph03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[ceph03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[ceph03][WARNIN] DEBUG:ceph-disk:Preparing osd data dir /tmp/osd1
[ceph03][INFO  ] checking OSD status...
[ceph03][INFO  ] Running command: ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host ceph03 is now ready for osd use.
Error in sys.exitfunc:
[root@cephadm my-cluster]# ceph-deploy osd activate ceph02:/tmp/osd0 ceph03:/tmp/osd1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.20): /usr/bin/ceph-deploy osd activate ceph02:/tmp/osd0 ceph03:/tmp/osd1
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks ceph02:/tmp/osd0: ceph03:/tmp/osd1:
[ceph02][DEBUG ] connected to host: ceph02 
[ceph02][DEBUG ] detect platform information from remote host
[ceph02][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.4 Final
[ceph_deploy.osd][DEBUG ] activating host ceph02 disk /tmp/osd0
[ceph_deploy.osd][DEBUG ] will use init type: sysvinit
[ceph02][INFO  ] Running command: ceph-disk -v activate --mark-init sysvinit --mount /tmp/osd0
[ceph02][WARNIN] DEBUG:ceph-disk:Cluster uuid is 444d71f6-96ab-46c2-9a85-1296eee37949
[ceph02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[ceph02][WARNIN] DEBUG:ceph-disk:Cluster name is ceph
[ceph02][WARNIN] DEBUG:ceph-disk:OSD uuid is 511c3173-00c7-4f6c-961d-18afb66eb546
[ceph02][WARNIN] DEBUG:ceph-disk:Allocating OSD id...
[ceph02][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd create --concise 511c3173-00c7-4f6c-961d-18afb66eb546
[ceph02][WARNIN] 2014-11-28 00:45:31.369757 7fc71c356700  0 -- :/1001757 >> 10.1.8.226:6789/0 pipe(0x7fc718024300 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7fc718024590).fault
[ceph02][WARNIN] 2014-11-28 00:45:34.343685 7fc71c255700  0 -- :/1001757 >> 10.1.8.226:6789/0 pipe(0x7fc70c000c00 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7fc70c000e90).fault
[ceph02][WARNIN] 2014-11-28 00:45:38.343903 7fc71c356700  0 -- :/1001757 >> 10.1.8.226:6789/0 pipe(0x7fc70c003010 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7fc70c0032a0).fault
[ceph02][WARNIN] 2014-11-28 00:45:41.346861 7fc71c255700  0 -- :/1001757 >> 10.1.8.226:6789/0 pipe(0x7fc70c000c00 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7fc70c000e90).fault
[ceph02][WARNIN] 2014-11-28 00:45:44.346756 7fc71c356700  0 -- :/1001757 >> 10.1.8.226:6789/0 pipe(0x7fc70c002820 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7fc70c002ab0).fault
[ceph02][WARNIN] 2014-11-28 00:45:47.347674 7fc71c255700  0 -- :/1001757 >> 10.1.8.226:6789/0 pipe(0x7fc70c000c00 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7fc70c0032a0).fault
[ceph02][WARNIN] 2014-11-28 00:45:50.348920 7fc71c356700  0 -- :/1001757 >> 10.1.8.226:6789/0 pipe(0x7fc70c002820 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7fc70c002ab0).fault
[ceph02][WARNIN] 2014-11-28 00:45:53.349823 7fc71c255700  0 -- :/1001757 >> 10.1.8.226:6789/0 pipe(0x7fc70c003fa0 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7fc70c004230).fault
[ceph02][WARNIN] 2014-11-28 00:45:56.352388 7fc71c255700  0 -- :/1001757 >> 10.1.8.226:6789/0 pipe(0x7fc70c003fa0 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7fc70c0029c0).fault
[ceph02][WARNIN] 2014-11-28 00:45:59.353389 7fc71c356700  0 -- :/1001757 >> 10.1.8.226:6789/0 pipe(0x7fc70c004d90 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7fc70c005020).fault
[ceph02][WARNIN] 2014-11-28 00:46:02.355451 7fc71c255700  0 -- :/1001757 >> 10.1.8.226:6789/0 pipe(0x7fc70c007580 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7fc70c007810).fault
[ceph02][WARNIN] 2014-11-28 00:46:05.355165 7fc71c356700  0 -- :/1001757 >> 10.1.8.226:6789/0 pipe(0x7fc70c006000 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7fc70c006290).fault
[ceph02][WARNIN] 2014-11-28 00:46:08.356984 7fc71c255700  0 -- :/1001757 >> 10.1.8.226:6789/0 pipe(0x7fc70c006710 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7fc70c0069a0).fault
[ceph02][WARNIN] 2014-11-28 00:46:11.359265 7fc71c356700  0 -- :/1001757 >> 10.1.8.226:6789/0 pipe(0x7fc70c006000 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7fc70c006290).fault
^CKilled by signal 2.
[ceph_deploy][ERROR ] KeyboardInterrupt


Error in sys.exitfunc:
[root@cephadm my-cluster]# ceph-deploy osd activate ceph03:/tmp/osd1
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.20): /usr/bin/ceph-deploy osd activate ceph03:/tmp/osd1
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks ceph03:/tmp/osd1:
[ceph03][DEBUG ] connected to host: ceph03 
[ceph03][DEBUG ] detect platform information from remote host
[ceph03][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.4 Final
[ceph_deploy.osd][DEBUG ] activating host ceph03 disk /tmp/osd1
[ceph_deploy.osd][DEBUG ] will use init type: sysvinit
[ceph03][INFO  ] Running command: ceph-disk -v activate --mark-init sysvinit --mount /tmp/osd1
[ceph03][WARNIN] DEBUG:ceph-disk:Cluster uuid is 444d71f6-96ab-46c2-9a85-1296eee37949
[ceph03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[ceph03][WARNIN] DEBUG:ceph-disk:Cluster name is ceph
[ceph03][WARNIN] DEBUG:ceph-disk:OSD uuid is f04fa85e-8a9a-4ef7-9106-2910f2bac2cc
[ceph03][WARNIN] DEBUG:ceph-disk:Allocating OSD id...
[ceph03][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring osd create --concise f04fa85e-8a9a-4ef7-9106-2910f2bac2cc
[ceph03][WARNIN] 2014-11-28 00:46:21.413720 7f87045c6700  0 -- :/1001958 >> 10.1.8.226:6789/0 pipe(0x7f8700024300 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f8700024590).fault
[ceph03][WARNIN] 2014-11-28 00:46:24.343829 7f87044c5700  0 -- :/1001958 >> 10.1.8.226:6789/0 pipe(0x7f86f4000c00 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f86f4000e90).fault
[ceph03][WARNIN] 2014-11-28 00:46:27.343570 7f87045c6700  0 -- :/1001958 >> 10.1.8.226:6789/0 pipe(0x7f86f4003010 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f86f40032a0).fault
^C[ceph03][WARNIN] 2014-11-28 00:46:31.345362 7f87044c5700  0 -- :/1001958 >> 10.1.8.226:6789/0 pipe(0x7f86f4000c00 sd=4 :0 s=1 pgs=0 cs=0 l=1 c=0x7f86f4000e90).fault
^CKilled by signal 2.
[ceph_deploy][ERROR ] KeyboardInterrupt
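The repeated pipe(...).fault lines above mean the ceph client on the OSD host cannot reach the monitor at 10.1.8.226:6789, so "osd create" retries forever until it is interrupted. Before purging anything, it may be worth checking monitor reachability from the OSD nodes. A minimal troubleshooting sketch — the IP and port are taken from the log above; the availability of nc on CentOS 6 is an assumption:

```shell
# Run on ceph02/ceph03 (hosts from the session above).
# 10.1.8.226:6789 is the monitor address shown in the fault lines.
ping -c 3 10.1.8.226                       # basic network reachability
nc -z -w 3 10.1.8.226 6789 \
  && echo "mon port reachable" \
  || echo "mon port blocked or mon down"   # TCP check of the mon port
service iptables status                    # CentOS 6 firewall may be blocking 6789
ceph -s                                    # hangs the same way if no mon is reachable
```

If the port is blocked, opening 6789/tcp on the monitor host (or checking that ceph-mon is actually running there) may resolve the faults without a full redeploy.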



Suggestion: run ceph-deploy purge, reinstall with ceph-deploy install, then ceph-deploy disk zap your OSD nodes, and finally re-run ceph-deploy osd create.
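The suggested recovery could be sketched as follows, run from the admin node. The host names ceph02/ceph03 and the my-cluster working directory come from the session above; the sdb device names are hypothetical placeholders — substitute your actual OSD disks:

```shell
cd ~/my-cluster
# Wipe the failed install: remove packages, then stale data and old keys
ceph-deploy purge ceph02 ceph03
ceph-deploy purgedata ceph02 ceph03
ceph-deploy forgetkeys
# Reinstall ceph on the OSD nodes
ceph-deploy install ceph02 ceph03
# Zap the OSD disks (DESTROYS their contents) -- only needed for whole-disk
# OSDs; the session above used directories (/tmp/osd0), which need no zap
ceph-deploy disk zap ceph02:sdb ceph03:sdb
# Recreate the OSDs (prepare + activate in one step)
ceph-deploy osd create ceph02:sdb ceph03:sdb
```

Note that a purge/reinstall will not help if the underlying problem is the monitor being unreachable; the mon connectivity should be confirmed first, or "osd create" will hang at the same point again.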











