
ceph status

asked 2013-08-29 06:22:52 -0500

vladber

Setting up Ceph for the first time on Cisco OS.

Here is the output I get:

root@ceph-osd01:~# ceph status
   health HEALTH_ERR 320 pgs stuck inactive; 320 pgs stuck unclean; no osds
   monmap e1: 1 mons at {0=10.63.16.204:6789/0}, election epoch 2, quorum 0 0
   osdmap e2: 0 osds: 0 up, 0 in
   pgmap v3: 320 pgs: 320 creating; 0 bytes data, 0 KB used, 0 KB / 0 KB avail
   mdsmap e1: 0/0/1 up

On the mon node, the log says:

2013-08-29 11:22:05.062425 7fbcfee51700 0 mon.0@0(leader).data_health(1) update_stats avail 90% total 32911064 used 1352872 avail 29886368
2013-08-29 11:22:05.856002 7fbcfe650700 0 mon.0@0(leader) e1 handle_command mon_command(status v 0) v1

How do I trace the problem?
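
For reference, a few read-only commands that should confirm whether any OSDs are registered at all (a sketch; exact output varies by Ceph release):

# show per-check detail behind the HEALTH_ERR summary
ceph health detail
# list OSDs known to the cluster and their CRUSH placement (empty here)
ceph osd tree
# one-line count of OSDs: total / up / in
ceph osd stat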


1 answer


answered 2013-08-29 06:36:28 -0500

vladber

I inspected ceph.conf on the mon node. Is the osd addr missing?

[global]
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
keyring = /etc/ceph/keyring
fsid = e80afa94-a64c-486c-9e34-d55e85f26406

[mon]
mon data = /var/lib/ceph/mon/mon.$id

[osd]
osd journal size = 4096
cluster network = 10.63.16.0/24
public network = 10.63.16.0/24
filestore flusher = false
osd data = /var/lib/ceph/osd/osd.$id
osd journal = /var/lib/ceph/osd/osd.$id/journal
osd mkfs type = xfs
keyring = /var/lib/ceph/osd/osd.$id/keyring

[mds]
mds data = /var/lib/ceph/mds/mds.$id
keyring = /var/lib/ceph/mds/mds.$id/keyring

[mon.0]
host = ceph-mon01
mon addr = 10.63.16.204:6789
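
Looking at the osdmap line ("0 osds: 0 up, 0 in"), the real issue seems to be that no OSDs were ever created or registered, not a missing address. As a minimal sketch (assuming a single OSD with id 0 on host ceph-osd01, and an xfs filesystem already mounted at /var/lib/ceph/osd/osd.0 to match the paths above), ceph.conf would also get a per-OSD section:

[osd.0]
host = ceph-osd01

and the OSD itself would be created and started roughly like this (run the ceph commands from a node with the admin keyring, the mkfs/start steps on ceph-osd01):

# register a new OSD id with the monitors (should print 0 for the first OSD)
ceph osd create
# prepare the data directory and initialize the OSD's store and keyring
mkdir -p /var/lib/ceph/osd/osd.0
ceph-osd -i 0 --mkfs --mkkey
# register the OSD's key and place it in the CRUSH map
ceph auth add osd.0 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/osd.0/keyring
ceph osd crush add osd.0 1.0 host=ceph-osd01
# start the daemon (sysvinit style on releases from this era)
service ceph start osd.0

Note that with only one OSD the PGs may still not go active+clean until more OSDs are added or the pools' replica size is lowered; that part depends on the pool defaults.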


