Launching an instance with 'boot from image (create a new volume)' gives an error

asked 2015-05-25 07:34:57 -0600 by capsali

Hi,

I have an OpenStack installation on 7 servers: 2 controller/network nodes in HA, 3 compute nodes and 2 storage nodes. For block and ephemeral storage I use Ceph as the backend. The OpenStack version is Kilo.
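For context, Ceph is wired in roughly like this (the pool names, user and secret UUID below are placeholders, not my literal values):

cinder.conf (storage nodes):

[DEFAULT]
enabled_backends = ceph

[ceph]
# RBD driver pointing Cinder volumes at a Ceph pool
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_user = cinder
rbd_secret_uuid = <libvirt-secret-uuid>

nova.conf (compute nodes):

[libvirt]
# ephemeral disks also go to Ceph
images_type = rbd
images_rbd_pool = vms
rbd_user = cinder
rbd_secret_uuid = <libvirt-secret-uuid>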

Neutron is set up in dvr mode on the compute nodes and dvr_snat on the controllers. The provider network is VLAN, range 100 to 199; tenant networks are VXLAN.
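The relevant Neutron pieces look approximately like this (the physnet name and VNI range are placeholders):

ml2_conf.ini:

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
# l2population is required for DVR
mechanism_drivers = openvswitch,l2population

[ml2_type_vlan]
# provider VLANs 100-199
network_vlan_ranges = physnet1:100:199

[ml2_type_vxlan]
vni_ranges = 1:1000

l3_agent.ini has agent_mode = dvr on the compute nodes and agent_mode = dvr_snat on the controllers.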

I can launch an instance from an image (ephemeral storage), and I can create a volume from an image and then boot from that volume. But when I select boot from image (create a new volume), the dashboard reports:

Failed to perform requested operation on instance "tessstt", the instance has an error status: Please try again later [Error: Build of instance 4bb9079a-54e2-400c-a096-71b21d134c87 aborted: Failure prepping block device.]
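For reference, the equivalent request from the CLI would look something like this (the flavor name and the image/network UUIDs are placeholders; the 20 GB size matches the disk claim in the log below):

nova boot --flavor m1.small --block-device source=image,id=<image-uuid>,dest=volume,size=20,shutdown=remove,bootindex=0 --nic net-id=<net-uuid> tessstt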

The error I see in the nova-compute log on the compute node is:

2015-05-25 14:47:50.354 5185 INFO nova.compute.claims [-] [instance: 4bb9079a-54e2-400c-a096-71b21d134c87] Attempting claim: memory 2048 MB, disk 20 GB
2015-05-25 14:47:50.354 5185 INFO nova.compute.claims [-] [instance: 4bb9079a-54e2-400c-a096-71b21d134c87] Total memory: 128909 MB, used: 12800.00 MB
2015-05-25 14:47:50.355 5185 INFO nova.compute.claims [-] [instance: 4bb9079a-54e2-400c-a096-71b21d134c87] memory limit: 193363.50 MB, free: 180563.50 MB
2015-05-25 14:47:50.356 5185 INFO nova.compute.claims [-] [instance: 4bb9079a-54e2-400c-a096-71b21d134c87] Total disk: 60257 GB, used: 110.00 GB
2015-05-25 14:47:50.356 5185 INFO nova.compute.claims [-] [instance: 4bb9079a-54e2-400c-a096-71b21d134c87] disk limit not specified, defaulting to unlimited
2015-05-25 14:47:50.378 5185 INFO nova.compute.claims [-] [instance: 4bb9079a-54e2-400c-a096-71b21d134c87] Claim successful
2015-05-25 14:47:50.443 5185 INFO nova.scheduler.client.report [-] Compute_service record updated for ('compute3', 'compute3.nimbusproject.ro')
2015-05-25 14:47:50.521 5185 INFO nova.scheduler.client.report [-] Compute_service record updated for ('compute3', 'compute3.nimbusproject.ro')
2015-05-25 14:47:50.574 5185 INFO nova.virt.block_device [req-53f73158-a383-46b2-b40d-03a057c4e971 9d6fed2183e54ce69de81341e86b37f9 610d740d5e0148b481401e6c4c79ddca - - -] [instance: 4bb9079a-54e2-400c-a096-71b21d134c87] Booting with volume None at /dev/vda
2015-05-25 14:47:50.838 5185 INFO nova.scheduler.client.report [-] Compute_service record updated for ('compute3', 'compute3.nimbusproject.ro')
2015-05-25 14:47:51.089 5185 ERROR nova.compute.manager [req-8a1038c9-0976-4c97-8601-c2747fd7b6f0 - - - - -] [instance: 4bb9079a-54e2-400c-a096-71b21d134c87] Instance failed block device setup
2015-05-25 14:47:51.089 5185 TRACE nova.compute.manager [instance: 4bb9079a-54e2-400c-a096-71b21d134c87] Traceback (most recent call last):
2015-05-25 14:47:51.089 5185 TRACE nova.compute.manager [instance: 4bb9079a-54e2-400c-a096-71b21d134c87]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1948, in _prep_block_device
2015-05-25 14:47:51.089 5185 TRACE nova.compute.manager [instance: 4bb9079a-54e2-400c-a096-71b21d134c87]     do_check_attach=do_check_attach)
2015-05-25 14:47:51.089 5185 TRACE nova.compute.manager [instance: 4bb9079a-54e2-400c-a096-71b21d134c87]   File "/usr/lib/python2.7/dist-packages/nova/virt/block_device.py", line 433, in attach_block_devices
2015-05-25 14:47:51.089 5185 TRACE nova.compute.manager [instance: 4bb9079a-54e2-400c-a096-71b21d134c87]     map(_log_and_attach, block_device_mapping)
2015-05-25 14:47:51.089 5185 TRACE nova.compute.manager [instance: 4bb9079a-54e2-400c-a096-71b21d134c87]   File "/usr/lib/python2.7/dist-packages/nova/virt/block_device.py", line 431, in _log_and_attach
2015-05-25 14:47:51.089 5185 TRACE nova.compute.manager [instance: 4bb9079a-54e2-400c-a096-71b21d134c87]     bdm.attach(*attach_args, **attach_kwargs)
2015-05-25 14:47:51.089 5185 TRACE nova.compute.manager [instance: 4bb9079a-54e2-400c-a096-71b21d134c87 ...

Comments

I'm having the exact same issue, using a NetApp SAN as the backend, also on Kilo.

jwitko1 ( 2015-08-01 00:45:55 -0600 )