kolla-ansible ceph osd not up

asked 2019-05-23 15:37:27 -0600

shotoflove gravatar image

I've had good results with all-in-one, but my multinode deployment is failing when I set enable_ceph: "yes" and enable_glance: "yes" in /etc/kolla/globals.yml.

I have three storage nodes with:

# parted /dev/sdb print
Disk /dev/sdb: 42.9GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name                      Flags
 1      1049kB  42.9GB  42.9GB               KOLLA_CEPH_OSD_BOOTSTRAP

[root@storage2 ~]#

Here is the health of the ceph cluster after deployment succeeds with no errors:

[root@controller ~]# docker exec ceph_mon ceph -s
  cluster:
    id:     5f6da406-0aac-4931-a2b1-b765bcc4fd0f
    health: HEALTH_WARN
            Reduced data availability: 32 pgs inactive

  services:
    mon: 3 daemons, quorum,,
    mgr: controller(active), standbys: controller2, controller3
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   4 pools, 32 pgs
    objects: 0 objects, 0B
    usage:   0B used, 0B / 0B avail
    pgs:     100.000% pgs unknown
             32 unknown

[root@controller ~]#

I'm not certain what the issue is, so here is the full command output of my deployment process: https://gist.github.com/michaelbarkdoll/35a1f5b0fae4ac0b6738eb74acda76e2
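As an aside, the key symptom in the status above ("osd: 0 osds") can be checked mechanically. A minimal sketch, parsing the pasted status line rather than querying a live cluster, so nothing here touches real hardware:

```shell
#!/bin/sh
# Extract the registered-OSD count from a `ceph -s` style line.
# The status text is pasted inline here (copied from the output above)
# so the parsing itself can be exercised offline.
status="osd: 0 osds: 0 up, 0 in"
osds=$(echo "$status" | sed 's/.*osd: \([0-9]*\) osds.*/\1/')
if [ "$osds" -eq 0 ]; then
  # Zero registered OSDs usually means the bootstrap step on the
  # storage nodes never created/claimed the disks.
  echo "no OSDs registered - bootstrap likely failed on the storage nodes"
fi
```

In a live deployment the same count would come from `docker exec ceph_mon ceph -s`; the point is that 32 inactive PGs are just a consequence of there being no OSDs to place them on.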


1 answer


answered 2019-05-24 15:34:52 -0600

shotoflove gravatar image

My issue was that kolla-ansible now expects a different partition label for bluestore OSDs:

    parted /dev/sdb -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP_BS 1 -1

I had mistakenly used the older command:

    parted /dev/sdb -s -- mklabel gpt mkpart KOLLA_CEPH_OSD_BOOTSTRAP 1 -1
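For anyone else bitten by this, a small sketch that distinguishes the two labels in parted output. `check_label` is a hypothetical helper (not part of kolla-ansible), and the sample line is copied from the question above:

```shell
#!/bin/sh
# Hypothetical helper: given a line of `parted ... print` output,
# report whether it carries the bluestore bootstrap label, the old
# pre-bluestore label, or neither. The _BS pattern must be tested
# first, because the old label is a substring of the new one.
check_label() {
  case "$1" in
    *KOLLA_CEPH_OSD_BOOTSTRAP_BS*) echo "bluestore label: OK" ;;
    *KOLLA_CEPH_OSD_BOOTSTRAP*)    echo "old label: recreate with _BS suffix" ;;
    *)                             echo "no kolla bootstrap label found" ;;
  esac
}

# Example against the partition line shown in the question:
check_label " 1      1049kB  42.9GB  42.9GB               KOLLA_CEPH_OSD_BOOTSTRAP"
# → old label: recreate with _BS suffix
```

Re-labeling with the `_BS` command above destroys the old partition table, so only run it on a disk whose contents are disposable.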

