Hi
As per the official OpenStack documentation, we have created a 3-node OpenStack Juno cluster on CentOS 7.
We later integrated it with a 3-node Ceph storage cluster, and the Ceph health status is HEALTH_OK.
[root@controller yum.repos.d]# ceph --version
ceph version 0.80.7 (6c0127fcb58008793d3c8b62d925bc91963672a3)
[root@controller yum.repos.d]# ceph status
    cluster 41bdadbf-2497-44d8-8993-87ace1fc5c2d
     health HEALTH_OK
     monmap e3: 3 mons at {node1=192.168.1.75:6789/0,node2=192.168.1.82:6789/0,node3=192.168.1.81:6789/0}, election epoch 170, quorum 0,1,2 node1,node3,node2
     mdsmap e88: 1/1/1 up {0=node1=up:active}
     osdmap e223: 3 osds: 3 up, 3 in
      pgmap v58386: 1110 pgs, 10 pools, 2239 MB data, 280 kobjects
            34020 MB used, 106 GB / 147 GB avail
                1110 active+clean
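For reference, basic RBD access with the cinder credentials can be confirmed from the compute node with something like the following (this assumes the client.cinder keyring has been copied to /etc/ceph on the compute node; adjust paths and IDs if yours differ):

# list the volumes pool using the cinder key
rbd ls -p volumes --id cinder --keyring /etc/ceph/ceph.client.cinder.keyring

# cluster status with the same credentials
ceph -s --id cinder --keyring /etc/ceph/ceph.client.cinder.keyring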
We have successfully configured Glance and Cinder as per the documentation.
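For reference, the RBD-related settings follow the Ceph block device guide for OpenStack; trimmed to the relevant lines they look roughly like this (the pool names, the cinder/glance users and the secret UUID below are placeholders for our actual values):

# /etc/cinder/cinder.conf (relevant lines only)
[DEFAULT]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <UUID of the libvirt secret holding the client.cinder key>

# /etc/glance/glance-api.conf (relevant lines only)
[DEFAULT]
default_store = rbd
[glance_store]
stores = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf

# /etc/nova/nova.conf on the compute node (relevant lines only)
[libvirt]
rbd_user = cinder
rbd_secret_uuid = <same UUID as in cinder.conf>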
Now we are able to create a volume without issues, but while creating an instance we get the error below.
2014-12-30 18:31:58.810 15629 ERROR nova.compute.manager [-] [instance: 2d4e7118-97be-461b-987f-5a907a6771fa] Instance failed to spawn
2014-12-30 18:31:58.810 15629 TRACE nova.compute.manager [instance: 2d4e7118-97be-461b-987f-5a907a6771fa] Traceback (most recent call last):
2014-12-30 18:31:58.810 15629 TRACE nova.compute.manager [instance: 2d4e7118-97be-461b-987f-5a907a6771fa]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2243, in _build_resources
2014-12-30 18:31:58.810 15629 TRACE nova.compute.manager [instance: 2d4e7118-97be-461b-987f-5a907a6771fa]     yield resources
2014-12-30 18:31:58.810 15629 TRACE nova.compute.manager [instance: 2d4e7118-97be-461b-987f-5a907a6771fa]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2113, in _build_and_run_instance
2014-12-30 18:31:58.810 15629 TRACE nova.compute.manager [instance: 2d4e7118-97be-461b-987f-5a907a6771fa]     block_device_info=block_device_info)
2014-12-30 18:31:58.810 15629 TRACE nova.compute.manager [instance: 2d4e7118-97be-461b-987f-5a907a6771fa]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2621, in spawn
2014-12-30 18:31:58.810 15629 TRACE nova.compute.manager [instance: 2d4e7118-97be-461b-987f-5a907a6771fa]     block_device_info, disk_info=disk_info)
2014-12-30 18:31:58.810 15629 TRACE nova.compute.manager [instance: 2d4e7118-97be-461b-987f-5a907a6771fa]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4414, in _create_domain_and_network
2014-12-30 18:31:58.810 15629 TRACE nova.compute.manager [instance: 2d4e7118-97be-461b-987f-5a907a6771fa]     power_on=power_on)
2014-12-30 18:31:58.810 15629 TRACE nova.compute.manager [instance: 2d4e7118-97be-461b-987f-5a907a6771fa]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4338, in _create_domain
2014-12-30 18:31:58.810 15629 TRACE nova.compute.manager [instance: 2d4e7118-97be-461b-987f-5a907a6771fa]     LOG.error(err)
2014-12-30 18:31:58.810 15629 TRACE nova.compute.manager [instance: 2d4e7118-97be-461b-987f-5a907a6771fa]   File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 82, in __exit__
2014-12-30 18:31:58.810 15629 TRACE nova.compute.manager [instance: 2d4e7118-97be-461b-987f-5a907a6771fa]     six.reraise(self.type_, self.value, self.tb)
2014-12-30 18:31:58.810 15629 TRACE nova.compute.manager [instance: 2d4e7118-97be-461b-987f-5a907a6771fa]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4329, in _create_domain
2014-12-30 18:31:58.810 15629 TRACE nova.compute.manager [instance: 2d4e7118-97be-461b-987f-5a907a6771fa]     domain.createWithFlags(launch_flags)
2014-12-30 18:31:58.810 15629 TRACE nova.compute.manager [instance: 2d4e7118-97be-461b-987f-5a907a6771fa]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 183, in doit
2014-12-30 18:31:58.810 15629 TRACE nova.compute.manager [instance: 2d4e7118-97be-461b-987f-5a907a6771fa]     result = proxy_call(self._autowrap, f, args, kwargs)
2014-12-30 18:31:58.810 15629 TRACE nova.compute.manager [instance: 2d4e7118-97be-461b-987f-5a907a6771fa]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 141, in proxy_call
2014-12-30 18:31:58.810 15629 TRACE nova.compute.manager [instance: 2d4e7118-97be-461b-987f-5a907a6771fa]     rv = execute(f, *args, **kwargs)
2014-12-30 18:31:58.810 15629 TRACE nova.compute.manager [instance: 2d4e7118-97be-461b-987f-5a907a6771fa]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 122, in execute
2014-12-30 18:31:58.810 15629 TRACE nova.compute.manager [instance: 2d4e7118-97be-461b-987f-5a907a6771fa]     six.reraise(c, e, tb)
2014-12-30 18:31:58.810 15629 TRACE nova.compute.manager [instance: 2d4e7118-97be-461b-987f-5a907a6771fa]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 80, in tworker
2014-12-30 18:31:58.810 15629 TRACE nova.compute.manager [instance: 2d4e7118-97be-461b-987f-5a907a6771fa]     rv = meth(*args, **kwargs)
2014-12-30 18:31:58.810 15629 TRACE nova.compute.manager [instance: 2d4e7118-97be-461b-987f-5a907a6771fa]   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 728, in createWithFlags
2014-12-30 18:31:58.810 15629 TRACE nova.compute.manager [instance: 2d4e7118-97be-461b-987f-5a907a6771fa]     if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
2014-12-30 18:31:58.810 15629 TRACE nova.compute.manager [instance: 2d4e7118-97be-461b-987f-5a907a6771fa] libvirtError: internal error: process exited while connecting to monitor: Warning: option deprecated, use lost_tick_policy property of kvm-pit instead.
2014-12-30 18:31:58.810 15629 TRACE nova.compute.manager [instance: 2d4e7118-97be-461b-987f-5a907a6771fa] qemu-kvm: -drive file=rbd:volumes/volume-3d020597-acdf-442c-bf82-604156bc3ef4:id=cinder:key=AQAch6FUGC4JCxAAmFbjKiVZPT5WqPos1LrM4g==:auth_supported=cephx\;none:mon_host=192.168.1.75\:6789\;192.168.1.81\:6789\;192.168.1.82\:6789,if=none,id=drive-virtio-disk0,format=raw,serial=3d020597-acdf-442c-bf82-604156bc3ef4,cache=none: could not open disk image rbd:volumes/volume-3d020597-acdf-442c-bf82-604156bc3ef4:id=cinder:key=AQAch6FUGC4JCxAAmFbjKiVZPT5WqPos1LrM4g==:auth_supported=cephx\;none:mon_host=192.168.1.75\:6789\;192.168.1.81\:6789\;192.168.1.82\:6789: Unknown protocol
2014-12-30 18:31:58.810 15629 TRACE nova.compute.manager [instance: 2d4e7118-97be-461b-987f-5a907a6771fa]
2014-12-30 18:31:58.886 15629 AUDIT nova.compute.manager [req-1a613a67-535b-4b13-9ba4-8dd461429d95 None] [instance: 2d4e7118-97be-461b-987f-5a907a6771fa] Terminating instance
2014-12-30 18:31:58.910 15629 INFO nova.virt.libvirt.driver [-] [instance: 2d4e7118-97be-461b-987f-5a907a6771fa] Instance destroyed successfully.
2014-12-30 18:31:59.784 15629 INFO nova.virt.libvirt.driver [-] [instance: 2d4e7118-97be-461b-987f-5a907a6771fa] Deleting instance files /var/lib/nova/instances/2d4e7118-97be-461b-987f-5a907a6771fa_del
2014-12-30 18:31:59.785 15629 INFO nova.virt.libvirt.driver [-] [instance: 2d4e7118-97be-461b-987f-5a907a6771fa] Deletion of /var/lib/nova/instances/2d4e7118-97be-461b-987f-5a907a6771fa_del complete
2014-12-30 18:32:00.683 15629 INFO nova.scheduler.client.report [-] Compute_service record updated for ('compute1', 'compute1')
2014-12-30 18:32:57.888 15629 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources
2014-12-30 18:32:58.125 15629 AUDIT nova.compute.resource_tracker [-] Total physical ram (MB): 1837, total allocated virtual ram (MB): 1024
2014-12-30 18:32:58.125 15629 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 146
2014-12-30 18:32:58.126 15629 AUDIT nova.compute.resource_tracker [-] Total usable vcpus: 2, total allocated vcpus: 0
2014-12-30 18:32:58.126 15629 AUDIT nova.compute.resource_tracker [-] PCI stats: []
[root@compute1 ~]# rpm -qa | grep -i qemu
qemu-img-1.5.3-60.el7_0.11.x86_64
qemu-kvm-1.5.3-60.el7_0.11.x86_64
libvirt-daemon-driver-qemu-1.1.1-29.el7_0.3.x86_64
qemu-kvm-common-1.5.3-60.el7_0.11.x86_64
ipxe-roms-qemu-20130517-5.gitc4bce43.el7.noarch
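Since the error is "Unknown protocol" for the rbd: drive, we also wanted to check whether this stock qemu build advertises rbd support at all. A couple of quick checks (qemu-img is built from the same source as qemu-kvm here, so its format list should reflect what qemu-kvm supports; the binary path is the stock CentOS 7 one):

# "rbd" should appear in the supported formats list if qemu was built with it
qemu-img --help | grep -i 'Supported formats'

# the emulator should link against librbd if rbd support is compiled in
ldd /usr/libexec/qemu-kvm | grep -i rbd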
While searching for a way to debug this (see the URLs below), we found that CentOS 7 does not ship qemu-kvm packages built with rbd support.
http://karan-mj.blogspot.in/2013/12/ceph-openstack-part-1.html
https://eucalyptus.atlassian.net/wiki/pages/viewpage.action?pageId=60882971
http://ceph.com/packages/ceph-extras/rpm/
We would appreciate it if someone could suggest how to fix this, or an alternative solution.
Thanks, Santosh