Is this a "cinder design" problem? Maybe. It depends on how you are using it.

If your Cinder backend is LVM, then no. Why your hostname changed, I don't know, but that is the problem. All OpenStack knows is that a new cinder-volume service has started checking in and the old one isn't responding. Since LVM is isolated to a single node with no distributed storage, you wouldn't want volumes remapped to the new service; that would be bad.
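
If you want to confirm that is what happened, checking the service list should show the old hostname marked down and the new one up. A minimal sketch, assuming admin credentials on the controller (exact output columns vary by release):

    # List the cinder-volume services Cinder knows about; after a hostname
    # change the old host should show as down and the new one as up.
    cinder service-list

    # or, directly on the controller node:
    cinder-manage service list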

If you are using NFS, Ceph, GlusterFS, etc., then it is a problem with the Cinder architecture. With those distributed backends, any of the cinder-volume services could manage any of the Cinder volumes, because it is all the same storage. The current (Juno) Cinder architecture doesn't work like that: the volume service that created a volume also manages it for its lifetime, so if that service goes down, you can't manage that volume anymore. The workaround is to run several cinder-volume services with the same hostname (see the sketch below). That introduces some potential race issues, but they are rare. This is addressed in Kilo.
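
If you do go the same-hostname route on a shared backend, the usual approach looks roughly like this; the name "cinder-cluster-1" is only an example, and the service command will vary by distro:

    # On EVERY node that runs cinder-volume, set the same "host" value in
    # /etc/cinder/cinder.conf so they all report in under one name:
    #
    #   [DEFAULT]
    #   host = cinder-cluster-1   # example name; anything consistent works
    #
    # then restart the volume service on each node:
    service cinder-volume restart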

Just like anything else, if your hostname changes, you can't expect the project to absorb that change, nor would you want it to.

To debug this issue, turn on verbose and debug in the Cinder conf and restart the cinder-scheduler service; it may provide useful information. If it doesn't, you should see the Cinder RabbitMQ queues climbing with rabbitmqctl list_queues | awk '$2 > 0'.
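
Roughly something like this, assuming the usual /etc/cinder/cinder.conf location and sysvinit-style service names:

    # Enable verbose/debug logging in /etc/cinder/cinder.conf:
    #
    #   [DEFAULT]
    #   verbose = True
    #   debug = True
    #
    # then restart the scheduler and check its log:
    service cinder-scheduler restart

    # Any queue with a non-zero message backlog shows up here:
    rabbitmqctl list_queues | awk '$2 > 0'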