[ceph] repair active+undersized+degraded PGs

asked 2017-08-12 18:50:32 -0600 by Damian Dąbrowski

Hello,

I'm running into a problem.

I have a Ceph cluster with 3-replica pools (size = 3). When one of my OSDs went down I replaced it with a new one, but some of the PGs that had a replica on the broken OSD are stuck in the active+undersized+degraded state and cannot find a new OSD to hold the third replica (at the moment all of these PGs have only 2 replicas).
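
The stuck PGs, and the OSDs they currently map to, can be listed with the standard ceph CLI, for example (the PG id passed to query below is just a placeholder):

    # summary of the health warnings, including the degraded/undersized PGs
    ceph health detail

    # dump only the PGs stuck in the undersized / degraded states
    ceph pg dump_stuck undersized
    ceph pg dump_stuck degraded

    # inspect a single stuck PG in detail ("1.28" is a placeholder PG id)
    ceph pg 1.28 query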

Has anybody run into this problem and found a solution?

I would be grateful for any help. The ceph status output is below:

    cluster 848b340a-be27-45cb-ab66-3151d877a5a0
    health HEALTH_WARN
            190 pgs degraded
            190 pgs stuck degraded
            403 pgs stuck unclean
            190 pgs stuck undersized
            190 pgs undersized
            recovery 110202/1371648 objects degraded (8.034%)
            recovery 100459/1371648 objects misplaced (7.324%)
    monmap e10: 3 mons at {node-19=172.31.0.2:6789/0,node-20=172.31.0.8:6789/0,node-21=172.31.0.6:6789/0}
            election epoch 138, quorum 0,1,2 node-19,node-21,node-20
    osdmap e2223: 14 osds: 14 up, 14 in; 213 remapped pgs
    pgmap v2922628: 1088 pgs, 6 pools, 1856 GB data, 446 kobjects
            4942 GB used, 12932 GB / 17874 GB avail
            110202/1371648 objects degraded (8.034%)
            100459/1371648 objects misplaced (7.324%)
                685 active+clean
                213 active+remapped
                190 active+undersized+degraded
client io 726 kB/s rd, 892 kB/s wr, 230 op/s
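
If it helps with diagnosis, the OSD layout and the CRUSH rules the pools use can be dumped with these read-only commands:

    # host/OSD tree with weights and up/in state; a third replica needs
    # an eligible OSD in a separate failure domain with non-zero weight
    ceph osd tree

    # per-OSD utilisation, to spot full or zero-weight OSDs
    ceph osd df

    # the CRUSH rules (check the rule the 3-replica pools use and its
    # chooseleaf failure-domain type)
    ceph osd crush rule dump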

Comments

Did you solve this? I have a similar problem.

sacha-m ( 2018-01-28 10:36:44 -0600 )