systemctl restart ceph-osd@osd-node3
[root@osd-node3 ~]# systemctl status -l ceph-osd@osd-node3
● ceph-osd@osd-node3.service - Ceph object storage daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-osd@.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Wed 2016-07-20 15:23:56 CST; 708ms ago
Process: 11140 ExecStart=/usr/bin/ceph-osd -f --cluster ${CLUSTER} --id %i --setuser ceph --setgroup ceph (code=exited, status=1/FAILURE)
Process: 11099 ExecStartPre=/usr/lib/ceph/ceph-osd-prestart.sh --cluster ${CLUSTER} --id %i (code=exited, status=0/SUCCESS)
Main PID: 11140 (code=exited, status=1/FAILURE)
Jul 20 15:23:56 osd-node3 systemd[1]: ceph-osd@osd-node3.service: main process exited, code=exited, status=1/FAILURE
Jul 20 15:23:56 osd-node3 systemd[1]: Unit ceph-osd@osd-node3.service entered failed state.
Jul 20 15:23:56 osd-node3 systemd[1]: ceph-osd@osd-node3.service failed.
Jul 20 15:23:56 osd-node3 systemd[1]: ceph-osd@osd-node3.service holdoff time over, scheduling restart.
Jul 20 15:23:56 osd-node3 systemd[1]: start request repeated too quickly for ceph-osd@osd-node3.service
Jul 20 15:23:56 osd-node3 systemd[1]: Failed to start Ceph object storage daemon.
Jul 20 15:23:56 osd-node3 systemd[1]: Unit ceph-osd@osd-node3.service entered failed state.
Jul 20 15:23:56 osd-node3 systemd[1]: ceph-osd@osd-node3.service failed.
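The failure above is consistent with the unit being started under the wrong instance name: the template's ExecStart line passes the instance (`%i`) to `--id`, and `ceph-osd` expects the numeric OSD ID, not the hostname, so `ceph-osd@osd-node3` keeps exiting with status 1 until systemd trips its start limit. A minimal recovery sketch, assuming osd.0 is the OSD on this host (per the `ceph osd tree` output below):

```shell
# ceph-osd@.service runs "ceph-osd ... --id %i"; the instance must be the
# numeric OSD ID. Assumption: osd.0 lives on osd-node3 (see "ceph osd tree").
OSD_ID=0
UNIT="ceph-osd@${OSD_ID}"

if command -v systemctl >/dev/null 2>&1; then
    # reset-failed clears the "start request repeated too quickly" counter,
    # otherwise systemd refuses further start attempts for a while.
    systemctl reset-failed "${UNIT}"
    systemctl start "${UNIT}"
    systemctl status -l "${UNIT}"
fi
echo "attempted restart of ${UNIT}"
```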
ceph osd tree
ID WEIGHT  TYPE NAME          UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.13889 root default
-2 0.04630     host osd-node3
 0 0.04630         osd.0           up  1.00000          1.00000
-3 0.04630     host osd-node1
 1 0.04630         osd.1           up  1.00000          1.00000
-4 0.04630     host osd-node2
 2 0.04630         osd.2           up  1.00000          1.00000
[root@osd-node3 ~]# ceph osd stat
osdmap e15: 3 osds: 3 up, 3 in
flags sortbitwise
[root@mon-node2 ceph]# systemctl status -l ceph-mon@mon-node2
● ceph-mon@mon-node2.service - Ceph cluster monitor daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-mon@.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Wed 2016-07-20 15:14:09 CST; 3min 54s ago
Process: 8399 ExecStart=/usr/bin/ceph-mon -f --cluster ${CLUSTER} --id %i --setuser ceph --setgroup ceph (code=exited, status=1/FAILURE)
Main PID: 8399 (code=exited, status=1/FAILURE)
Jul 20 15:14:09 mon-node2 systemd[1]: ceph-mon@mon-node2.service: main process exited, code=exited, status=1/FAILURE
Jul 20 15:14:09 mon-node2 systemd[1]: Unit ceph-mon@mon-node2.service entered failed state.
Jul 20 15:14:09 mon-node2 systemd[1]: ceph-mon@mon-node2.service failed.
Jul 20 15:14:09 mon-node2 systemd[1]: ceph-mon@mon-node2.service holdoff time over, scheduling restart.
Jul 20 15:14:09 mon-node2 systemd[1]: start request repeated too quickly for ceph-mon@mon-node2.service
Jul 20 15:14:09 mon-node2 systemd[1]: Failed to start Ceph cluster monitor daemon.
Jul 20 15:14:09 mon-node2 systemd[1]: Unit ceph-mon@mon-node2.service entered failed state.
Jul 20 15:14:09 mon-node2 systemd[1]: ceph-mon@mon-node2.service failed.
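For the monitor, the instance name (`mon-node2`) matches the monmap, so the unit name is probably not the problem here; the daemon's own log should say why it exits with status 1 (bad config after the edit, permissions, clock skew, etc.). A debugging sketch using only the unit name shown in the output above:

```shell
# Unit name taken from the status output above.
MON_UNIT="ceph-mon@mon-node2"

if command -v journalctl >/dev/null 2>&1; then
    # The actual reason ceph-mon exited with status 1 is in the journal,
    # not in "systemctl status", which only shows the last few lines.
    journalctl -u "${MON_UNIT}" --no-pager -n 50
fi
if command -v systemctl >/dev/null 2>&1; then
    # Clear the start-limit state before retrying.
    systemctl reset-failed "${MON_UNIT}"
    systemctl start "${MON_UNIT}"
fi
echo "inspected ${MON_UNIT}"
```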
[root@mon-node2 ceph]# ceph -s
cluster fafcdcaa-48ff-460e-bd41-36bd013b6529
health HEALTH_OK
monmap e7: 3 mons at {mon-node1=172.16.1.172:6789/0,mon-node2=172.16.1.173:6789/0,mon-node3=172.16.1.174:6789/0}
election epoch 26, quorum 0,1,2 mon-node1,mon-node2,mon-node3
osdmap e15: 3 osds: 3 up, 3 in
flags sortbitwise
pgmap v114: 64 pgs, 1 pools, 0 bytes data, 0 objects
18875 MB used, 123 GB / 142 GB avail
64 active+clean
Could someone please take a look at what is going on here? Also, when the monitor nodes' clocks are out of sync, I changed the config file and tried to restart the service, but it would not start. This is the version I installed:
[root@mon-node2 ceph]# ceph --version
ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374)
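On the clock-sync question: monitors report clock skew when their clocks drift beyond `mon_clock_drift_allowed` (0.05 s by default), and repeated failed restart attempts then trip systemd's start limit, which is why the service "won't start" afterwards. One possible recovery path, assuming `ntpdate` is available on the mon nodes (`pool.ntp.org` is just an example server; use your own NTP source):

```shell
# Sync this mon node's clock against one common NTP source, then retry the
# monitor. ntpdate needs root; guard so the snippet is safe to paste anywhere.
if command -v ntpdate >/dev/null 2>&1 && [ "$(id -u)" = "0" ]; then
    ntpdate pool.ntp.org   # example server; point all mons at the same source
fi
if command -v systemctl >/dev/null 2>&1; then
    systemctl reset-failed ceph-mon@mon-node2   # clear the start-limit state
    systemctl restart ceph-mon@mon-node2
fi
echo "clock sync attempted on mon-node2"
```

Running the same sync on all three mon nodes (or, better, enabling ntpd/chronyd permanently) keeps the skew warnings from coming back.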