nova can't create instances using ceph

Can somebody give help and insight into my problem with creating an instance using Ceph (RBD)? Checking the logs, the instance disk gets created on Ceph but is automatically deleted after a few seconds. I verified this using the command

rbd ls vms
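To catch the short-lived disk, the pool can be watched while the instance boots; a quick sketch (the libvirt rbd backend names ephemeral disks <instance-uuid>_disk):

# Re-list the vms pool every second while the instance boots;
# the ephemeral disk (<instance-uuid>_disk) shows up briefly, then disappears
watch -n1 'rbd ls vms'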

My config in /etc/nova/nova.conf:

images_type=rbd
images_rbd_pool=vms
images_rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=cinder
rbd_secret_uuid=1bf873c0-992d-4ce6-9a56-8b4cea83f0c8
inject_password=false
inject_key=false
inject_partition = -2
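If it matters, in Juno these options are read from the [libvirt] section of nova.conf; a minimal sketch with the same values as above:

[libvirt]
# Store ephemeral instance disks directly in the Ceph "vms" pool
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
# Cephx user and the libvirt secret that holds its key
rbd_user = cinder
rbd_secret_uuid = 1bf873c0-992d-4ce6-9a56-8b4cea83f0c8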

Details of the logs can be found here. This test uses CentOS 7, OpenStack Juno, and the Ceph Giant repositories.

Based on testing, I am able to create volumes and copy images to volumes. Images are also working perfectly. I even tried changing the Ceph authentication from

ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'

to

ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rwx pool=images'

but still no luck, even after recreating the whole Ceph cluster. I also tried enabling volume_drivers=rbd=nova.virt.libvirt.volume.LibvirtNetVolumeDriver in nova.conf, but it had no effect (it is not mentioned in the manual anyway). I also noticed that the admin socket mentioned in the manual never gets created. The manual states:

This socket can be accessed like this:

ceph daemon /var/run/ceph/ceph-client.cinder.19195.32310016.asok help

Now on every compute node, edit your Ceph configuration file:

[client]
    rbd cache = true
    rbd cache writethrough until flush = true
    admin socket = /var/run/ceph/$cluster-$type.$id.$pid.$cctid.asok

I commented out the last line (the admin socket setting), as it generates a "does not exist" error.
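To rule out authentication on the compute node, these checks can confirm that client.cinder works and that libvirt holds the secret nova references (the UUID is the rbd_secret_uuid from nova.conf above):

# Show the caps actually stored for the cinder user
ceph auth get client.cinder

# Confirm the cinder key can reach the cluster
ceph -s --id cinder --keyring /etc/ceph/ceph.client.cinder.keyring

# On each compute node, confirm libvirt has the secret and its value
virsh secret-list
virsh secret-get-value 1bf873c0-992d-4ce6-9a56-8b4cea83f0c8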


By the way, as an add-on, I updated the kernel to support rbd:

 modinfo rbd
filename:       /lib/modules/3.10.0-123.13.1.el7.x86_64/weak-updates/rbd/rbd.ko
license:        GPL
description:    RADOS Block Device (RBD) driver
author:         Jeff Garzik <jeff@garzik.org>
author:         Yehuda Sadeh <yehuda@hq.newdream.net>
author:         Sage Weil <sage@newdream.net>
author:         Alex Elder <elder@inktank.com>
srcversion:     422A74DB8EB4835AFBEF701
depends:        libceph
vermagic:       3.10.0-121.el7.x86_64 SMP mod_unload modversions
parm:           single_major:Use a single major number for all rbd devices (default: false) (bool)
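Note that KVM instances go through the userspace librbd library, not the kernel rbd module shown above, so the module is not what nova-compute actually exercises. A quick sanity check of the userspace side (librbd1 is the package name in the upstream Giant rpms):

# Userspace RBD library that qemu/libvirt links against
rpm -q librbd1
# Python bindings that nova's rbd image backend imports
python -c 'import rbd'    # exits silently if the bindings load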


rpm -qa | egrep "nova|selinux|ceph"
openstack-nova-novncproxy-2014.2.1-1.el7.centos.noarch
kmod-libceph-3.10-0.1.20140702gitdc9ac62.el7.x86_64
python-novaclient-2.20.0-1.el7.centos.noarch
openstack-nova-compute-2014.2.1-1.el7.centos.noarch
ceph-deploy-1.5.21-0.noarch
libselinux-python-2.2.2-6.el7.x86_64
openstack-nova-common-2014.2.1-1.el7.centos.noarch
ceph-common-0.87-0.el7.centos.x86_64
selinux-policy-targeted-3.12.1-153.el7_0.12.noarch
openstack-nova-conductor-2014.2.1-1.el7.centos.noarch
openstack-nova-scheduler-2014.2.1-1.el7.centos.noarch
libselinux-2.2.2-6.el7.x86_64
openstack-nova-api-2014.2.1-1.el7.centos.noarch
openstack-nova-cert-2014.2.1-1.el7.centos.noarch
libcephfs1-0.87-0.el7.centos.x86_64
ceph-0.87-0.el7.centos.x86_64
libselinux-ruby-2.2.2-6.el7.x86_64
openstack-selinux-0.5.19-2.el7ost.noarch
python-nova-2014.2.1-1.el7.centos.noarch
openstack-nova-console-2014.2.1-1.el7.centos.noarch
python-ceph-0.87-0.el7.centos.x86_64
ceph-release-1-0.el7.noarch
libselinux-utils-2.2.2-6.el7.x86_64
selinux-policy-3.12.1-153.el7_0.12.noarch
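One thing still to rule out (an assumption on my part, not confirmed from the logs): the stock CentOS 7 qemu build may lack RBD support, which would explain cinder volumes working while instance disks fail. A quick check:

# "rbd" should appear in the supported formats list; if it does not,
# this qemu build cannot boot instances from Ceph at all
qemu-img --help | grep 'Supported formats'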