# how to make ceph bypass local storage of Glance images to boot? [closed]

Hi, I had a working OpenStack environment that used local LVM storage for VMs and volumes. Then I saw that Ceph would be a better alternative to local storage, and that it supports live migration, so I went ahead and installed Ceph.

From a storage point of view I have 2 servers with 8x4TB HDDs each and 3 compute nodes with 1x240GB SSD each. Besides these I have 2 controllers as well. I added all storage devices to Ceph and edited the CRUSH map with 2 new rules: one for the OSDs that reside on the HDDs (16 OSDs), and the other for the 3 SSDs. Then I created 3 pools: volumes, images and ephemeral. Volumes and images were set to follow the HDD rule, and ephemeral the SSD rule. I edited glance.conf, cinder.conf and nova.conf per the documentation on the Ceph site and made accounts for them. I copied the keys to the appropriate nodes and followed the key integration for libvirt on the compute nodes.
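For reference, a minimal sketch of the rule/pool setup described above, using the CLI of that era. The rule and pool names match this post; the CRUSH bucket names (`hdd`, `ssd`), PG count, and rule IDs are placeholders that must be adjusted to your map:

```shell
# Two CRUSH rules: one rooted at the HDD buckets, one at the SSD buckets.
# "hdd" and "ssd" are assumed root bucket names in your edited CRUSH map.
ceph osd crush rule create-simple hdd-rule hdd host
ceph osd crush rule create-simple ssd-rule ssd host

# The three pools (128 placement groups is only an example value).
ceph osd pool create volumes 128
ceph osd pool create images 128
ceph osd pool create ephemeral 128

# Point each pool at the right rule; get the real rule IDs from
# "ceph osd crush rule dump" first.
ceph osd pool set volumes crush_ruleset 1
ceph osd pool set images crush_ruleset 1
ceph osd pool set ephemeral crush_ruleset 2
```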

Then I uploaded an image into Glance, an Ubuntu cloud image to be exact. When I tried to boot the image I got an error that no valid host was found. I checked nova-scheduler.log and found that it was complaining about not having the client.cinder key, so it could not authenticate with cephx, although I had integrated the key into libvirt. So I copied the client.cinder key into /etc/ceph on the compute nodes. Now I could boot the Ubuntu cloud image, and from "ceph df" I can see it uses up space in the ephemeral pool.
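The cephx user and key distribution steps look roughly like this in the Ceph docs of that period (pool names match this post; the host names `glance-node` and `compute-node` are placeholders):

```shell
# Create the cephx clients with access to the relevant pools.
ceph auth get-or-create client.glance mon 'allow r' \
    osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
ceph auth get-or-create client.cinder mon 'allow r' \
    osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=ephemeral, allow rx pool=images'

# Ship the keyrings to the nodes that need them.
ceph auth get-or-create client.glance | \
    ssh glance-node tee /etc/ceph/ceph.client.glance.keyring
ceph auth get-or-create client.cinder | \
    ssh compute-node tee /etc/ceph/ceph.client.cinder.keyring
```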

Then I went on and downloaded the Windows Server 2012 trial image from the OpenStack documentation. It's a 16GB image. I converted it to raw format because, as I read, Ceph doesn't support booting from any image type except raw. I uploaded the new raw image into Glance and booted it. To my surprise I got the same error that no valid host was found. So again I looked into nova-scheduler.log, and this time it was a different error: it says there is not enough space left, although I have more than enough. I might be doing something wrong, because my ceph.conf file is the same as it was when I installed it with ceph-deploy. I saw some samples on the net where ceph.conf specified the OSDs and other info.
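The conversion and upload step might look like this (file names are illustrative; raw is needed because RBD clones the image block-for-block and cannot boot a qcow2 directly):

```shell
# Convert the downloaded qcow2 trial image to raw for RBD-backed boot.
qemu-img convert -f qcow2 -O raw ws2012.qcow2 ws2012.raw

# Check the format and virtual size before uploading.
qemu-img info ws2012.raw

# Upload to glance as raw (era-appropriate glance CLI).
glance image-create --name ws2012 --disk-format raw \
    --container-format bare --file ws2012.raw
```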

I seriously don't know what to do next. Any help would be appreciated.

Update:

After doing more reading, I found that when I issue an instance creation from a Glance image, the image is downloaded locally onto a compute node and then uploaded into Ceph again, into the other pool.

The problem in my case is that I don't have enough local space on my compute node, only 9GB available, and since the Windows 2012 image is about 16GB I get the not-enough-space error. Is there any ...


### Closed for the following reason question is not relevant or outdated by capsali close date 2014-11-27 15:37:41.883291


Yeah. Thanks again Sam!

( 2014-11-27 17:31:35 -0600 )


Hello, you may consider breaking that blob of text into paragraphs; it was difficult to get through!

But let me see if I understand your issue.

You built Ceph and pointed Glance and Cinder to use Ceph as a backend. When you go to boot an instance, you see errors in the logs regarding Ceph stating "there is not enough space left" or "no valid host was found".

In this case, may I suggest verifying the following information:

• Ensure you have an up-to-date /etc/ceph/ceph.conf on each cinder, glance, and compute node.
• You have created the appropriate cephx users and keys with access to the appropriate pools.
• You have copied the appropriate keys to the cinder and glance nodes, and added the key to a libvirt secret.
• You have modified the glance and cinder configs to use ceph as the backend, and the users in those configs match your cephx users.
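The config stanzas that checklist refers to look roughly like this in the Icehouse/Juno-era docs. Pool and user names match this post; the secret UUID is a placeholder, and note that the exact option names and sections moved between releases:

```ini
# glance-api.conf
[DEFAULT]
default_store = rbd
rbd_store_user = glance
rbd_store_pool = images

# cinder.conf
[DEFAULT]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_user = cinder
rbd_secret_uuid = <same UUID as the libvirt secret>
glance_api_version = 2

# nova.conf (compute nodes)
[libvirt]
images_type = rbd
images_rbd_pool = ephemeral
rbd_user = cinder
rbd_secret_uuid = <same UUID as the libvirt secret>
```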

These are all the steps needed to successfully boot from ceph, but it is very easy to make a mistake, especially in the cephx section, since the information for this has changed over the several ceph and OpenStack releases.

Follow this guide very closely. It will get you working, as I have used it many times myself.

If you are still broken, I would suggest a simpler ceph setup (don't modify CRUSH, leave out the SSD storage) until you are up and running. Then add in the appropriate tweaks once it is working.


Hi, thanks for the reply. I did check everything you said. I even disabled cephx auth and the problem still persists. Some images boot on ephemeral and others just error with no space left. I forgot to say in the OP that I cannot delete any image uploaded with Glance; it gets stuck on deleting.

( 2014-08-14 04:12:40 -0600 )

Can you post your cinder and glance configs and update the original post?

( 2014-08-14 10:48:27 -0600 )

Hi again,

After doing more research, it seems this is not a misconfiguration but a feature that isn't supported yet by Ceph or OpenStack. When using RBD as the backend for ephemeral storage, on booting a VM the image is first downloaded from the RBD glance pool onto local storage and then uploaded to the ephemeral pool.

There is a ticket open for implementation in the next OpenStack release, but it seems it got dropped for Juno. The only solution is either to restrict booting from Glance and just boot from volume, or to use CephFS as a shared filesystem for ephemeral storage. Neither option works for me, so I guess I need to rethink this.

How does Gluster compare to Ceph?

Should I try it, or should I keep using Ceph and leave the ephemeral storage local?

Update:

I've hit another problem that I can't resolve.

When I try to create a new instance I always get errors. Looking at nova-scheduler.log I can see the following error:

librados: client.cinder initialization error (2) No such file or directory


So from what I understand, it's searching for the auth key for client.cinder; it's not there, so it can't authenticate with cephx. But per the Ceph docs, nova-compute nodes don't need the ceph.client.cinder.keyring file, as the secret is integrated into libvirt with the virsh command. I did that, but it's still asking for the file.
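For comparison, the libvirt secret integration in the Ceph docs of that era looks roughly like this (the UUID is a placeholder; it must match `rbd_secret_uuid` in nova.conf and cinder.conf):

```shell
# Define a libvirt secret whose UUID matches rbd_secret_uuid in nova.conf.
cat > secret.xml <<'EOF'
<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
virsh secret-define --file secret.xml

# Load the client.cinder key into that secret.
virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 \
    --base64 "$(ceph auth get-key client.cinder)"
```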

Any thoughts?


This is not entirely correct. The procedure you are using is _not_ the way the Ceph docs tell you to do it. Doing it this way, you may very well get these results.

The procedure is:

• Clone the glance image to a cinder volume (instant COW snapshot)
• Launch the instance with the cinder volume as its boot device
• Delete the cinder volume with the instance (I believe there is an option for this in nova boot)
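The steps above can be sketched with the CLIs of that era (volume name, flavor, and size are illustrative; the UUIDs are stand-ins for your own):

```shell
# Clone the glance image into a cinder volume (COW on the rbd backend).
cinder create --image-id <glance-image-uuid> \
    --display-name win2012-boot 20

# Boot from that volume; shutdown=remove deletes the volume
# together with the instance.
nova boot --flavor m1.medium \
    --block-device source=volume,id=<volume-uuid>,dest=volume,bootindex=0,shutdown=remove \
    win2012
```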

With that method it all works. With the way you are doing it, it is also working as designed, just not the way you want it to work.

I will not comment on GlusterFS further than to say Ceph has much better integration with Openstack and better performance for VMs.

( 2014-08-20 09:37:09 -0600 )

So as I suspected, you can't boot from image (the normal way for OpenStack), just boot from volume. The problem with this is that I have 2 pools, one residing on HDD and the other on SSD. I want to use the SSD pool for the instance's root disk for better I/O, and the HDD pool for instance volume provisioning.

( 2014-08-20 09:54:05 -0600 )

I am not sure I see the issue. Clone the Glance image to the ssd pool, then add a volume from the hdd pool.

Also, I am fairly certain this works without all that nonsense. But I am not in a position to test.

If I recall correctly, it does require the glance v2 API to be set in cinder to make the snapshotting work.
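Assuming that recollection is right, the setting in question would be a single line in cinder.conf:

```ini
# cinder.conf -- use the v2 glance API so the rbd backend can
# locate the image in ceph and COW-clone it instead of downloading it
glance_api_version = 2
```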

( 2014-08-20 11:03:17 -0600 )

Ah, I think I fully understand the issue here; I was hasty before. You are speaking of this bug. This is indeed an issue, but it only affects glance -> ephemeral on Ceph. We did ephemeral as local storage. glance -> cinder is COW, which the commit fixes.

( 2014-08-27 11:47:28 -0600 )

Your other error is because it is looking for a file named client.cinder. Most likely you set up cephx wrong (easy to do, since the docs use different names). You should delete the old keys and recreate them, following a single doc more closely (the firefly branch of the ceph docs is a good one).

( 2014-08-27 11:49:32 -0600 )
