Volume creation fails with error InvalidBDM: Block Device Mapping is Invalid.

asked 2015-03-03 15:23:55 -0600 by gchongong

I am running Juno 2.20 on CentOS 7. The backend for block devices is Ceph. When launching an instance with the "create volume" option, I get the InvalidBDM exception. The Ceph backend is healthy. Any pointers would be appreciated.
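For context, this is the kind of boot request that exercises the failing path: "boot from image (creates a new volume)" is expressed through novaclient's `--block-device` option. The flavor, image, and network IDs below are placeholders, not values from this post:

```shell
# Sketch of a Juno-era nova boot that creates a volume from an image.
# IMAGE_UUID and NET_UUID are placeholders; substitute real IDs.
bdm="source=image,id=IMAGE_UUID,dest=volume,size=1,shutdown=remove,bootindex=0"
cmd="nova boot --flavor m1.small --block-device $bdm --nic net-id=NET_UUID testvm"
echo "$cmd"
```

When this request fails, Nova's `_prep_block_device` raises InvalidBDM, which matches the traceback below.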

[ ~]$ ceph -s
    cluster 5ff21566-428e-421a-bba4-06e7214083f9
     health HEALTH_OK
     monmap e1: 1 mons at {ceph-mon1=0.0.0.0:6789/0}, election epoch 1, quorum 0 ceph-mon1
     osdmap e2098: 40 osds: 40 up, 40 in
      pgmap v25861: 3072 pgs, 4 pools, 8 bytes data, 2 objects
            2793 MB used, 145 TB / 145 TB avail
                3072 active+clean
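If you are scripting this kind of triage, the HEALTH_OK line in `ceph -s` output can be checked mechanically before moving on to Nova/Cinder. A minimal sketch (the sample status string is abbreviated from the output in the question):

```shell
# Return success only if a captured `ceph -s` dump reports HEALTH_OK.
ceph_healthy() {
    printf '%s\n' "$1" | grep -q 'HEALTH_OK'
}

# Example against an abbreviated status text from the question:
status='cluster 5ff21566-428e-421a-bba4-06e7214083f9 health HEALTH_OK'
if ceph_healthy "$status"; then
    echo "ceph reports HEALTH_OK"
fi
```

In a live script you would capture the status with `status=$(ceph -s)` instead of a literal.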

Logs from the Cinder api.log:

2015-03-03 11:58:23.350 22469 INFO cinder.api.v1.volumes [req-114a8a56-3560-477b-99b8-aaf25dbe7719 11acab4a34134c48983f67f0b71a7ca8 c7da979e74124951afe2483819daad6a - - -] vol={'migration_status': None, 'availability_zone': u'nova', 'terminated_at': None, 'updated_at': datetime.datetime(2015, 3, 3, 16, 56, 28), 'provider_geometry': None, 'replication_extended_status': None, 'replication_status': u'disabled', 'snapshot_id': None, 'ec2_id': None, 'mountpoint': None, 'deleted_at': None, 'id': u'a2b42a59-c494-4ea8-bf2e-06064dafdd2a', 'size': 1L, 'user_id': u'11acab4a34134c48983f67f0b71a7ca8', 'attach_time': None, 'attached_host': None, 'display_description': u'', 'volume_admin_metadata': [], 'project_id': u'c7da979e74124951afe2483819daad6a', 'launched_at': None, 'scheduled_at': datetime.datetime(2015, 3, 3, 16, 56, 28), 'status': u'creating', 'volume_type_id': None, 'deleted': False, 'provider_location': None, 'host': u'openstack-cinder.localdomain@ceph#ceph', 'consistencygroup_id': None, 'source_volid': None, 'provider_auth': None, 'display_name': u'', 'instance_uuid': None, 'bootable': False, 'created_at': datetime.datetime(2015, 3, 3, 16, 56, 27), 'attach_status': u'detached', 'volume_type': None, 'consistencygroup': None, 'volume_metadata': [], '_name_id': None, 'encryption_key_id': None, 'replication_driver_data': None}
2015-03-03 11:58:23.411 22469 INFO cinder.api.openstack.wsgi [req-114a8a56-3560-477b-99b8-aaf25dbe7719 11acab4a34134c48983f67f0b71a7ca8 c7da979e74124951afe2483819daad6a - - -] http://controller:8776/v1/c7da979e74124951afe2483819daad6a/volumes/a2b42a59-c494-4ea8-bf2e-06064dafdd2a returned with HTTP 200

Compute node logs:

2015-03-03 11:59:51.507 3355 TRACE nova.compute.manager [instance: d2e93bac-aa63-456a-9678-efe65ad3c4b6]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1847, in _prep_block_device
2015-03-03 11:59:51.507 3355 TRACE nova.compute.manager [instance: d2e93bac-aa63-456a-9678-efe65ad3c4b6]     raise exception.InvalidBDM()
2015-03-03 11:59:51.507 3355 TRACE nova.compute.manager [instance: d2e93bac-aa63-456a-9678-efe65ad3c4b6] InvalidBDM: Block Device Mapping is Invalid.
2015-03-03 11:59:51.507 3355 TRACE nova.compute.manager [instance: d2e93bac-aa63-456a-9678-efe65ad3c4b6]
2015-03-03 11:59:51.540 3355 ERROR nova.compute.manager [-] [instance: d2e93bac-aa63-456a-9678-efe65ad3c4b6] Build of instance d2e93bac-aa63-456a-9678-efe65ad3c4b6 aborted: Failure prepping block device.
2015-03-03 11:59:51.540 3355 TRACE nova.compute.manager [instance: d2e93bac-aa63-456a-9678-efe65ad3c4b6] Traceback (most recent call last):
2015-03-03 11:59:51.540 3355 TRACE nova.compute.manager [instance: d2e93bac-aa63-456a-9678-efe65ad3c4b6]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2030, in _do_build_and_run_instance
2015-03-03 11:59:51.540 3355 TRACE nova.compute.manager [instance: d2e93bac-aa63-456a-9678-efe65ad3c4b6]     filter_properties)
2015-03-03 11:59:51.540 3355 TRACE nova.compute.manager [instance: d2e93bac-aa63-456a-9678-efe65ad3c4b6]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2129, in _build_and_run_instance
2015-03-03 11:59:51.540 3355 TRACE nova.compute.manager [instance: d2e93bac-aa63-456a-9678-efe65ad3c4b6]     'create.error', fault=e)
2015-03-03 11:59:51.540 3355 TRACE nova.compute.manager [instance: d2e93bac-aa63-456a-9678-efe65ad3c4b6]   File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 82, in __exit__
2015-03-03 11:59:51.540 3355 TRACE nova.compute.manager [instance: d2e93bac-aa63-456a-9678-efe65ad3c4b6]     six.reraise(self.type_, self.value, self.tb)
2015-03-03 11:59:51.540 3355 TRACE ...


1 answer


answered 2015-03-09 01:46:47 -0600 by foster

The issue was resolved by verifying that the mount point for the Glance service was mounted on the controller node. This setup used GlusterFS as the backing mount for Glance, and when the system rebooted, that mount point was not remounted. Configuring autofs for the mount point fixed the problem.
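A minimal sketch of the kind of check that would have caught this; the Glance store path in the comment is a common default and a placeholder, since the answer does not give the actual GlusterFS mount point:

```shell
# Verify a path is an active mount point before trusting the services that use it.
check_mount() {
    if mountpoint -q "$1"; then
        echo "mounted: $1"
    else
        echo "NOT mounted: $1"
        return 1
    fi
}

# e.g. check_mount /var/lib/glance/images   (placeholder Glance store path)
check_mount /    # sanity check: root is always a mount point
```

Per the answer, making the mount come back automatically (autofs here, or `_netdev` in fstab for network filesystems) avoids the stale-mount failure mode after a reboot.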


Comments

@foster Can you elaborate more on your comment?

lnxslck ( 2015-03-10 12:25:19 -0600 )
