
Ceph health warning all PGs inactive

asked 2014-06-11 07:57:24 -0500 by Wayn3

I installed Ceph using mkcephfs and started the service with the following command:

/etc/init.d/ceph -a start

When I check the health of Ceph, I get the following output:

HEALTH_WARN 384 pgs incomplete; 384 pgs stuck inactive; 384 pgs stuck unclean; 2 near full osd(s)

This is the output of the ceph -s command:

cluster 89cbb30c-023b-4f8b-ac14-abc78fb6b07a
 health HEALTH_WARN 384 pgs incomplete; 384 pgs stuck inactive; 384 pgs stuck unclean; 2 near full osd(s)
 monmap e1: 1 mons at {a=100.112.12.28:6789/0}, election epoch 2, quorum 0 a
 osdmap e5: 2 osds: 2 up, 2 in
  pgmap v52: 384 pgs, 3 pools, 0 bytes data, 0 objects
        111 GB used, 8362 MB / 125 GB avail
             384 incomplete
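(Editorial note: the commands below are not part of the original question, but are the standard Ceph CLI calls typically used to see exactly which PGs are stuck and why.)

ceph health detail
ceph pg dump_stuck inactive

The first command expands the HEALTH_WARN summary per PG and OSD; the second lists the PGs currently stuck in the inactive state.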

Comments

same problem here

Moe ( 2015-03-04 07:02:31 -0500 )

1 answer


answered 2014-07-05 06:11:23 -0500 by dachary

You have 2 OSDs in the cluster, but the default pools require at least 3 OSDs (this assumes you are deploying Ceph version Emperor or Firefly: see http://ceph.com/docs/master/rados/con... and the osd pool default size option in particular for more information). The simplest solution is to add another OSD to the cluster.
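(Editorial note: the following is not part of the original answer, but a commonly used alternative when adding a third OSD is not an option: lower the replication size of the existing pools to match the two OSDs you have. Verify the pool names with ceph osd lspools first; data, metadata and rbd are the defaults in Emperor/Firefly.)

ceph osd pool set data size 2
ceph osd pool set metadata size 2
ceph osd pool set rbd size 2

To make new pools pick up the same default, set osd pool default size = 2 under [global] in ceph.conf.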


