
Nova can't create instances using Ceph (rbd)

asked 2014-12-18 16:02:19 -0500

tahder

updated 2014-12-18 16:20:49 -0500

Can somebody give help and insight into my problem with creating an instance using Ceph (rbd)? Checking the logs, the instance disk is created on Ceph but is automatically deleted after a few seconds. I verified this with the command

rbd ls vms
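Since the disk only exists for a few seconds, polling the pool while the instance boots can catch it (a minimal sketch; --id cinder assumes the client.cinder keyring is readable under /etc/ceph/):

watch -n1 "rbd ls vms --id cinder"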

My configuration in /etc/nova/nova.conf:

images_type=rbd
images_rbd_pool=vms
images_rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=cinder
rbd_secret_uuid=1bf873c0-992d-4ce6-9a56-8b4cea83f0c8
inject_password=false
inject_key=false
inject_partition = -2
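For reference, in Juno these options belong under the [libvirt] section of nova.conf, and rbd_secret_uuid must match a secret defined in libvirt on every compute node. A hedged check that the secret exists and holds the client.cinder key (using the UUID above):

virsh secret-list
virsh secret-get-value 1bf873c0-992d-4ce6-9a56-8b4cea83f0c8   # should print the client.cinder key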

Details of the logs can be found here. This test uses CentOS 7, Juno, and the Giant (Ceph) repositories.

Based on testing, I am able to create volumes and copy images as volumes. The images are also working perfectly. I even tried changing the Ceph authentication from

ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'

to

ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rwx pool=images'
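Either way, the caps that are actually active can be confirmed with a minimal check (assumes an admin keyring on a monitor node):

ceph auth get client.cinder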

Neither set of caps helped, even after recreating the whole Ceph cluster. I also tried setting volume_drivers=rbd=nova.virt.libvirt.volume.LibvirtNetVolumeDriver in nova.conf, but it had no effect; it was not mentioned in their manual anyway. I also noticed that I couldn't create the admin socket mentioned in the manual, which states:

This socket can be accessed like this:

ceph daemon /var/run/ceph/ceph-client.cinder.19195.32310016.asok help
Now on every compute node edit your Ceph configuration file:

[client]
    rbd cache = true
    rbd cache writethrough until flush = true
    admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok

I commented out the last line (admin socket = ...) because it generated a "does not exist" error.
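That "does not exist" error is often a permissions problem: the qemu process must be able to create the socket in /var/run/ceph. A hedged sketch of the usual fix (assuming CentOS 7, where qemu runs as the qemu user; SELinux denials are worth checking too):

mkdir -p /var/run/ceph
chown qemu:qemu /var/run/ceph   # assumption: qemu user/group on CentOS 7
chmod 770 /var/run/ceph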

By the way, as an add-on I updated the kernel to support rbd.

modinfo rbd
filename:       /lib/modules/3.10.0-123.13.1.el7.x86_64/weak-updates/rbd/rbd.ko
license:        GPL
description:    RADOS Block Device (RBD) driver
author:         Jeff Garzik <jeff@garzik.org>
author:         Yehuda Sadeh <yehuda@hq.newdream.net>
author:         Sage Weil <sage@newdream.net>
author:         Alex Elder <elder@inktank.com>
srcversion:     422A74DB8EB4835AFBEF701
depends:        libceph
vermagic:       3.10.0-121.el7.x86_64 SMP mod_unload modversions
parm:           single_major:Use a single major number for all rbd devices (default: false) (bool)

rpm -qa | egrep "nova|selinux|ceph"
openstack-nova-novncproxy-2014.2.1-1.el7.centos.noarch
kmod-libceph-3.10-0.1.20140702gitdc9ac62.el7.x86_64
python-novaclient-2.20.0-1.el7.centos.noarch
openstack-nova-compute-2014.2.1-1.el7.centos.noarch
ceph-deploy-1.5.21-0.noarch
libselinux-python-2.2.2-6.el7.x86_64
openstack-nova-common-2014.2.1-1.el7.centos.noarch
ceph-common-0.87-0.el7.centos.x86_64
selinux-policy-targeted-3.12.1-153.el7_0.12.noarch
openstack-nova-conductor-2014.2.1-1.el7.centos.noarch
openstack-nova-scheduler-2014.2.1-1.el7.centos.noarch
libselinux-2.2.2-6.el7.x86_64
openstack-nova-api-2014.2.1-1.el7.centos.noarch
openstack-nova-cert-2014.2.1-1.el7.centos.noarch
libcephfs1-0.87-0.el7.centos.x86_64
ceph-0.87-0.el7.centos.x86_64
libselinux-ruby-2.2.2-6.el7.x86_64
openstack-selinux-0.5.19-2.el7ost.noarch
python-nova-2014.2.1-1.el7.centos.noarch
openstack-nova-console-2014.2.1-1.el7.centos.noarch
python-ceph-0.87-0.el7.centos.x86_64
ceph-release-1-0.el7.noarch
libselinux-utils-2.2.2-6.el7.x86_64
selinux-policy-3.12.1-153.el7_0.12.noarch

Comments

Some say that qemu-img should have rbd support, but I could not reproduce that with the binaries from the base, epel, or ceph repositories; it probably needs to be recompiled from source.

qemu-img -h

Look in the last line under Supported formats.
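A quicker check (a minimal sketch, relying on the format list being the last line as noted above):

qemu-img -h | tail -n 1 | grep -o rbd   # prints "rbd" only if the build supports it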

tahder ( 2014-12-18 16:27:08 -0500 )

1 answer


answered 2016-04-13 10:08:26 -0500

eblock

Hi, I see it's been a while since you asked your question; have you managed to get it working? I've been working with Ceph for about a week or two now, and I was facing the same issue: I tried to launch an instance from a Ceph image, and for two or three seconds I saw the newly created disk in the rbd pool, then it disappeared. But when I tried to boot from a volume via Cinder using the exact same image, it worked!

I'm quite new to all of this, so I'm not really sure, but I assume that Cinder and Nova have different approaches to accessing rbd images/volumes. Besides configuring Ceph, Cinder, Nova and Glance, please note that I'm working with Mitaka and Xen compute nodes right now. I also tested it with KVM as the hypervisor, and it worked right away. I should also mention that I have full rights for my Ceph client; I tried it with restrictions similar to yours and it didn't work at all. I hope my pointers help you or anybody else.
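For comparison, the boot-from-volume path that worked looks roughly like this (a sketch with placeholder names and UUIDs, using the CLI clients of that era):

cinder create --image-id <image-uuid> --display-name boot-vol 10   # bootable volume from the image
nova boot --flavor m1.small --boot-volume <volume-uuid> test-instance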


Comments

Yes @eblock, I used to get the same issue before, but I noticed that when you create an instance, by default a base image is created on the local machine first, i.e. on your compute node. It turned out I had run out of space on the partition holding /var/lib/nova/instances/_base/

tahder ( 2016-09-12 23:16:39 -0500 )

Even if you work with Ceph as the backend, it can result in a local base image on the compute node(s). You have to disable the image cache by setting cache_images = none in /etc/nova/nova.conf on your compute nodes, then delete the images in /var/lib/nova/instances/_base to clear the cache.
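A hedged sketch of that setting (on Xen compute nodes the option lives under the [xenserver] section; the name and section may differ on other drivers and releases), followed by clearing the already-cached images:

[xenserver]
cache_images = none

rm -rf /var/lib/nova/instances/_base/*   # removes the cached base images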

eblock ( 2019-02-01 09:03:42 -0500 )

Here is a more detailed answer.

eblock ( 2019-02-01 09:04:06 -0500 )
