Instance creation fails (Block Device Mapping)
I'm using OpenStack 1.0.1 on CentOS 7 64-bit, with 3 controllers, 2 compute nodes, and storage mounted through NFS.
In Horizon I'm trying to launch a new instance with the boot source "Boot from image (creates a new volume)", and I keep getting the error:
Error: Failed to launch instance "windows_test2": Please try again later [Error: Build of instance e541f467-9a49-4fbf-b8ae-a792c389aa98 aborted: Failure prepping block device.].
- The new volume is created and appears OK.
- An IP is assigned to the instance.
- Instance creation then stops and the instance goes to status ERROR.
The instance stays in the "Block Device Mapping" state for a long time before it finally fails with the error shown above.
I've also tried launching the instance from the command line, but the error is the same.
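For reference, the CLI attempt looked roughly like this (the flavor, image, and network IDs below are placeholders for my real values):

nova boot --flavor m1.medium \
    --block-device source=image,id=<image-uuid>,dest=volume,size=40,shutdown=remove,bootindex=0 \
    --nic net-id=<net-uuid> \
    windows_test2

and then I watched the build with nova show windows_test2.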
On the controller node the Cinder logs (volume and API) report no errors (logging is set to debug=True).
On the compute node, nova-compute.log shows:
2015-03-10 16:51:21.938 72172 ERROR nova.compute.manager [-] [instance: e541f467-9a49-4fbf-b8ae-a792c389aa98] Instance failed block device setup
2015-03-10 16:51:21.938 72172 TRACE nova.compute.manager [instance: e541f467-9a49-4fbf-b8ae-a792c389aa98] Traceback (most recent call last):
2015-03-10 16:51:21.938 72172 TRACE nova.compute.manager [instance: e541f467-9a49-4fbf-b8ae-a792c389aa98] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1819, in _prep_block_device
2015-03-10 16:51:21.938 72172 TRACE nova.compute.manager [instance: e541f467-9a49-4fbf-b8ae-a792c389aa98] do_check_attach=do_check_attach)
2015-03-10 16:51:21.938 72172 TRACE nova.compute.manager [instance: e541f467-9a49-4fbf-b8ae-a792c389aa98] File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 407, in attach_block_devices
2015-03-10 16:51:21.938 72172 TRACE nova.compute.manager [instance: e541f467-9a49-4fbf-b8ae-a792c389aa98] map(_log_and_attach, block_device_mapping)
2015-03-10 16:51:21.938 72172 TRACE nova.compute.manager [instance: e541f467-9a49-4fbf-b8ae-a792c389aa98] File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 405, in _log_and_attach
2015-03-10 16:51:21.938 72172 TRACE nova.compute.manager [instance: e541f467-9a49-4fbf-b8ae-a792c389aa98] bdm.attach(*attach_args, **attach_kwargs)
2015-03-10 16:51:21.938 72172 TRACE nova.compute.manager [instance: e541f467-9a49-4fbf-b8ae-a792c389aa98] File "/usr/lib/python2.7/site-packages/nova/virt/block_device.py", line 333, in attach
2015-03-10 16:51:21.938 72172 TRACE nova.compute.manager [instance: e541f467-9a49-4fbf-b8ae-a792c389aa98] wait_func(context, vol['id'])
2015-03-10 16:51:21.938 72172 TRACE nova.compute.manager [instance: e541f467-9a49-4fbf-b8ae-a792c389aa98] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1263, in _await_block_device_map_created
2015-03-10 16:51:21.938 72172 TRACE nova.compute.manager [instance: e541f467-9a49-4fbf-b8ae-a792c389aa98] attempts=attempts)
2015-03-10 16:51:21.938 72172 TRACE nova.compute.manager [instance: e541f467-9a49-4fbf-b8ae-a792c389aa98] VolumeNotCreated: Volume 31d43244-7f63-4088-82de-6a521c384404 did not finish being created even after we waited 254 seconds or 61 attempts.
2015-03-10 16:51:21.938 72172 TRACE nova.compute.manager [instance: e541f467-9a49-4fbf-b8ae-a792c389aa98]
2015-03-10 16:51:21.939 72172 ERROR nova.compute.manager [-] [instance: e541f467-9a49-4fbf-b8ae-a792c389aa98] Failure prepping block device
2015-03-10 16:51:21.939 72172 TRACE nova.compute.manager [instance: e541f467-9a49-4fbf-b8ae-a792c389aa98] Traceback (most recent call last):
2015-03-10 16:51:21.939 72172 TRACE nova.compute.manager [instance: e541f467-9a49-4fbf-b8ae-a792c389aa98] File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2218, in _build_resources
2015-03-10 16:51:21.939 72172 TRACE nova.compute.manager [instance: e541f467-9a49-4fbf-b8ae-a792c389aa98] block_device_mapping)
2015-03-10 16:51 ...
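The volume ID from the trace can also be inspected directly while the instance is building, to see what state Cinder reports for it:

cinder list
cinder show 31d43244-7f63-4088-82de-6a521c384404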
Is cinder-volume running on more than one controller? If so, try turning that service off on two of the controllers and see if the problem still persists.
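A quick way to check is cinder service-list, which shows one cinder-volume row per host running the service. Assuming an RDO-style install on CentOS 7, the unit is typically called openstack-cinder-volume, so on the two controllers it would be something like:

cinder service-list
systemctl stop openstack-cinder-volume
systemctl disable openstack-cinder-volume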
@jdexter Thank you for your help. Yes, I had cinder-volume running on all three controllers. I shut it down on two of them, tried again, and the same error happened. Any more clues?
What is in your Cinder volume logs on the storage node, and what do the Cinder API logs on the controller show?
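On an RDO install those are usually /var/log/cinder/volume.log on the node running cinder-volume and /var/log/cinder/api.log on the controller, e.g.:

tail -f /var/log/cinder/volume.log
tail -f /var/log/cinder/api.log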