kolla-ansible ceph osd not up

I've had good results with the all-in-one deployment, but my multinode deployment is failing when I set enable_ceph: "yes" and enable_glance: "yes" in /etc/kolla/globals.yml.

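For reference, here is the relevant excerpt of my /etc/kolla/globals.yml (only showing the two options mentioned above; everything else is at its default):

    # /etc/kolla/globals.yml (excerpt)
    enable_ceph: "yes"
    enable_glance: "yes"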
I have three storage nodes with:

# parted /dev/sdb print
Disk /dev/sdb: 42.9GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name                      Flags
 1      1049kB  42.9GB  42.9GB               KOLLA_CEPH_OSD_BOOTSTRAP

[root@storage2 ~]#
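I prepared each disk with the partition label that kolla-ansible's Ceph bootstrap looks for, along these lines (warning: this wipes the disk; /dev/sdb is my OSD disk, adjust for yours):

    # Create a GPT label and a single full-disk partition named
    # KOLLA_CEPH_OSD_BOOTSTRAP so kolla-ansible will claim it as an OSD.
    # WARNING: destructive -- erases everything on /dev/sdb.
    parted /dev/sdb -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP 1 -1
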

Here is the health of the ceph cluster after deployment succeeds with no errors:

[root@controller ~]# docker exec ceph_mon ceph -s
  cluster:
    id:     5f6da406-0aac-4931-a2b1-b765bcc4fd0f
    health: HEALTH_WARN
            Reduced data availability: 32 pgs inactive

  services:
    mon: 3 daemons, quorum ,,
    mgr: controller(active), standbys: controller2, controller3
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   4 pools, 32 pgs
    objects: 0 objects, 0B
    usage:   0B used, 0B / 0B avail
    pgs:     100.000% pgs unknown
             32 unknown

[root@controller ~]#
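Since no OSDs ever registered, I can also run checks like these on a storage node to see whether the OSD (or bootstrap) containers were created at all and what their logs say. The container names here are an assumption from my deployment; the exact names may differ between kolla-ansible versions:

    # List any OSD-related containers, including exited ones.
    docker ps -a | grep -i osd

    # If a bootstrap/OSD container exists, show the tail of its log
    # for the reason it failed to come up (name is a guess -- use the
    # name shown by 'docker ps -a' above).
    docker logs bootstrap_osd_0 2>&1 | tail -n 50
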

I'm not certain what the issue is, so here is the full command output of my deployment process.