
ceph-fs No MDS detected using current configuration

asked 2019-03-21 04:51:52 -0500

seffyroff

I'm deploying Ceph via Juju, using the latest version of the openstack-charmers-next osd, mon and fs charms. The host OS is Bionic and the install source is cloud:bionic-rocky. The layout is 3 OSDs (metal), 3 mons (lxd) and 2 MDS (lxd).
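For reference, a deployment along these lines would look roughly like the following. This is a reconstruction from the description above, not my exact commands; machine numbers and placement are assumptions:

```shell
# Sketch of the deployment described above (machine numbers are illustrative).
# OSDs on the three metal machines:
juju deploy -n 3 --to 0,1,2 cs:~openstack-charmers-next/ceph-osd
# Mons and MDS in LXD containers on those machines:
juju deploy -n 3 --to lxd:0,lxd:1,lxd:2 cs:~openstack-charmers-next/ceph-mon
juju deploy -n 2 --to lxd:0,lxd:1 cs:~openstack-charmers-next/ceph-fs

# Point all charms at the cloud archive:
juju config ceph-osd source=cloud:bionic-rocky
juju config ceph-mon source=cloud:bionic-rocky
juju config ceph-fs  source=cloud:bionic-rocky

# Wire the relations:
juju add-relation ceph-osd ceph-mon
juju add-relation ceph-fs ceph-mon
```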

Everything stands up great, and initially the MDS works (and the cephfs_data and cephfs_metadata pools are created). However as the install is settling, once the second and third mons come online the MDS status changes to "No MDS detected using current configuration" and the fs pools go offline.

It looks like this:

Every 2.0s: juju status --color                                                                                                              

Model  Controller  Cloud/Region  Version  SLA          Timestamp
base4  homelab     homelab       2.5.1    unsupported  02:49:07-07:00

App       Version       Status   Scale  Charm     Store       Rev  OS      Notes
ceph-fs   13.2.4+dfsg1  blocked      2  ceph-fs   jujucharms   42  ubuntu
ceph-mon  13.2.4+dfsg1  active       3  ceph-mon  jujucharms  380  ubuntu
ceph-osd  13.2.4+dfsg1  active       3  ceph-osd  jujucharms  399  ubuntu

Unit         Workload  Agent  Machine  Public address  Ports  Message
ceph-fs/0    blocked   idle   1/lxd/0             No MDS detected using current configuration
ceph-fs/1*   blocked   idle   0/lxd/0             No MDS detected using current configuration
ceph-mon/0*  active    idle   0/lxd/1             Unit is ready and clustered
ceph-mon/1   active    idle   1/lxd/1             Unit is ready and clustered
ceph-mon/2   active    idle   2/lxd/0             Unit is ready and clustered
ceph-osd/0   active    idle   0            Unit is ready (1 OSD)
ceph-osd/1   active    idle   1            Unit is ready (1 OSD)
ceph-osd/2*  active    idle   2            Unit is ready (1 OSD)

Machine  State    DNS          Inst id              Series  AZ  Message
0        started  manual:   bionic      Manually provisioned machine
0/lxd/0  started   juju-f14309-0-lxd-0  bionic      Container started
0/lxd/1  started   juju-f14309-0-lxd-1  bionic      Container started
1        started  manual:   bionic      Manually provisioned machine
1/lxd/0  started   juju-f14309-1-lxd-0  bionic      Container started
1/lxd/1  started   juju-f14309-1-lxd-1  bionic      Container started
2        started  manual:   bionic      Manually provisioned machine
2/lxd/0  started   juju-f14309-2-lxd-0  bionic      Container started
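Since the charm reports "No MDS detected" while the daemons may still be running, it can help to compare what the mons actually see against what the charm believes. A few diagnostic commands (unit names assumed to match the status output above):

```shell
# Ask the cluster directly whether any MDS daemons have registered:
juju ssh ceph-mon/0 sudo ceph mds stat
juju ssh ceph-mon/0 sudo ceph fs status

# Watch the ceph-fs charm hooks fire as the second and third mons join:
juju debug-log --include ceph-fs/0 --include ceph-fs/1
```

If `ceph mds stat` shows the daemons as up while the charm stays blocked, the problem is in the charm's detection logic rather than in Ceph itself.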


Is this related to OpenStack? If not, please ask the question in an Ubuntu or Ceph forum.

Bernd Bausch ( 2019-03-21 05:54:56 -0500 )

The Juju charms I'm using are provided by the OpenStack dev team, and I suspect this is related to the way the charm works, as it's designed to work with an OpenStack deployment.

seffyroff ( 2019-03-21 11:18:26 -0500 )

1 answer


answered 2019-11-04 20:04:02 -0500

I got the same problem with openstack-charmers/ceph-fs rev 26 on bionic-train.


