Horizon Automatic Disk Partitioning Fails, But Manual Works Fine

We have a fairly new OpenStack install. We had previously done everything through the command line, and now we are running into issues with Horizon on top. The main issue is that whenever we launch an instance we get a pop-up error: "Failed to launch instance: Please try again later [Error: No valid host was found.]".

This only happens when using automatic disk partitioning; it deploys just fine when using manual disk partitioning (this is under the advanced options when launching an instance). Digging through the logs on the controller, I find this set of entries in nova-conductor.log:

2014-12-17 18:01:16.684 7997 ERROR nova.scheduler.utils [req-db846d30-f52b-41c0-8af8-c384324c2aaf None] [instance: ec1ee24a-43ba-44e0-8485-3f8fec803e8e] Error from last host: compute-1a-01 (node hv-1a-01): [u'Traceback (most recent call last):\n', u'  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2014, in do_build_and_run_instance\n    filter_properties)\n', u'  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2149, in _build_and_run_instance\n    instance_uuid=instance.uuid, reason=six.text_type(e))\n', u"RescheduledException: Build of instance ec1ee24a-43ba-44e0-8485-3f8fec803e8e was re-scheduled: ['VM_MISSING_PV_DRIVERS', 'OpaqueRef:398fbf58-5bc2-79f3-e787-40f3ab011d9b']\n"]



2014-12-17 18:01:16.709 7997 WARNING nova.scheduler.driver [req-db846d30-f52b-41c0-8af8-c384324c2aaf None] [instance: ec1ee24a-43ba-44e0-8485-3f8fec803e8e] NoValidHost exception with message: 'No valid host was found.'


2014-12-17 18:01:16.709 7997 WARNING nova.scheduler.driver [req-db846d30-f52b-41c0-8af8-c384324c2aaf None] [instance: ec1ee24a-43ba-44e0-8485-3f8fec803e8e] Setting instance to ERROR state.

To me this doesn't make sense, as our VMs are in HVM mode:

glance image-show f596113a-bb8b-449a-a72b-40574de81391
+-------------------------+--------------------------------------+
| Property                | Value                                |
+-------------------------+--------------------------------------+
| Property 'architecture' | x86_64                               |
| Property 'vm_mode'      | hvm                                  |
| checksum                | 3f113dc3577376f280f13ccc7f7f024f     |
| container_format        | ovf                                  |
| created_at              | 2014-12-18T00:59:33.864002           |
| deleted                 | False                                |
| disk_format             | vhd                                  |
| id                      | f596113a-bb8b-449a-a72b-40574de81391 |
| is_public               | True                                 |
| min_disk                | 0                                    |
| min_ram                 | 0                                    |
| name                    | tinycore                             |
| owner                   | 25c3010a8ebb42cfa361ce03a4593c64     |
| protected               | False                                |
| size                    | 30705815                             |
| status                  | active                               |
| updated_at              | 2014-12-18T01:07:40.141267           |
+-------------------------+--------------------------------------+
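
In case the property names matter here: we set vm_mode and architecture with the glance CLI, roughly like this (from memory, so treat the exact invocation as approximate):

glance image-update f596113a-bb8b-449a-a72b-40574de81391 --property vm_mode=hvm --property architecture=x86_64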

I'm not sure what to fiddle with next. Additional information:

Hypervisor = XenServer 6.5

Controller and Compute OS = Ubuntu 14.04

Only a single hypervisor and a single compute node are in the cluster as of now

We didn't see any errors when launching from the CLI (see the sketch after this list for a comparable boot call)

Installed manually from repositories

Version = Juno (from apt list | grep nova): nova-compute-xen/trusty-updates 1:2014.2-0ubuntu1~cloud0 all
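
As noted above, launches from the CLI work. The sketch below is roughly how I'd replicate Horizon's behavior with python-novaclient; the disk_config keyword corresponds to the OS-DCF:diskConfig attribute that Horizon's disk-partitioning option sets, and the credentials, endpoint, and flavor name are placeholders for our environment:

# Sketch: boot the same image with each disk_config setting to mirror
# Horizon's "Automatic" vs "Manual" disk partitioning choice.
# Credentials, endpoint, and names below are placeholders.
from novaclient.v1_1 import client

nova = client.Client('admin', 'PASSWORD', 'admin',
                     'http://controller:5000/v2.0')

image = nova.images.find(name='tinycore')
flavor = nova.flavors.find(name='m1.small')

# disk_config='AUTO' is what Horizon sends for automatic partitioning
# and is the case that fails for us; 'MANUAL' deploys fine.
nova.servers.create('tinycore-auto', image, flavor, disk_config='AUTO')
nova.servers.create('tinycore-manual', image, flavor, disk_config='MANUAL')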

Additional: here is the stack trace from nova-compute during an automatic disk partitioning attempt. The same thing occurs on XenServer 6.5 and 6.2.

2015-01-09 13:56:38.008 28668 DEBUG nova.virt.xenapi.vm_utils [-] Cloned VDI OpaqueRef:f8ed5761-e421-4781-0c43-37a662371676 from VDI OpaqueRef:2fb50434-1e6c-5cee-4933-9f68d0f4eebc _clone_vdi /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:568
2015-01-09 13:56:38.612 28668 DEBUG nova.openstack.common.lockutils [-] Releasing semaphore "xenapi-image-cachef596113a-bb8b-449a-a72b-40574de81391" lock /usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:238
2015-01-09 13:56:38.613 28668 DEBUG nova.openstack.common.lockutils [-] Semaphore / lock released "_create_cached_image_impl" inner /usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:275
2015-01-09 13:56:38.613 28668 INFO nova.virt.xenapi.vm_utils [-] Image creation data, cacheable: True, downloaded: False duration: 1.47 secs for image f596113a-bb8b-449a-a72b-40574de81391
2015-01-09 13:56:39.235 28668 DEBUG nova.virt.xenapi.vmops [-] [instance: 39c4c478-7404-4f3c-858e-e27802991397] Updating progress to 20 _update_instance_progress /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vmops.py:854
2015-01-09 13:56:39.355 28668 DEBUG nova.virt.xenapi.vmops [-] [instance: 39c4c478-7404-4f3c-858e-e27802991397] Updating progress to 30 _update_instance_progress /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vmops.py:854
2015-01-09 13:56:39.481 28668 DEBUG nova.virt.xenapi.vmops [-] [instance: 39c4c478-7404-4f3c-858e-e27802991397] Using PV kernel: False _create_vm_record /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vmops.py:635
2015-01-09 13:56:39.494 28668 DEBUG nova.virt.xenapi.vm_utils [-] [instance: 39c4c478-7404-4f3c-858e-e27802991397] Created VM create_vm /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:310
2015-01-09 13:56:39.495 28668 DEBUG nova.virt.xenapi.vmops [-] [instance: 39c4c478-7404-4f3c-858e-e27802991397] Updating progress to 40 _update_instance_progress /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vmops.py:854
2015-01-09 13:56:39.641 28668 DEBUG nova.virt.xenapi.vm_utils [-] [instance: 39c4c478-7404-4f3c-858e-e27802991397] Resizing up VDI OpaqueRef:f8ed5761-e421-4781-0c43-37a662371676 from 8589934592 to 42949672960 update_vdi_virtual_size /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:887
2015-01-09 13:56:39.997 28668 DEBUG nova.virt.xenapi.vmops [-] [instance: 39c4c478-7404-4f3c-858e-e27802991397] Auto configuring disk, attempting to resize root disk... _attach_disks /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vmops.py:662
2015-01-09 13:56:40.001 28668 DEBUG nova.virt.xenapi.vm_utils [-] Creating disk-type VBD for VM OpaqueRef:1fe694a7-3da9-c399-a6ff-0a6d57f5fc6a, VDI OpaqueRef:f8ed5761-e421-4781-0c43-37a662371676 ...  create_vbd /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:443
2015-01-09 13:56:40.013 28668 DEBUG nova.virt.xenapi.vm_utils [-] Created VBD OpaqueRef:f3e1cd93-f430-0285-c2b2-ffd6fb27b235 for VM OpaqueRef:1fe694a7-3da9-c399-a6ff-0a6d57f5fc6a, VDI OpaqueRef:f8ed5761-e421-4781-0c43-37a662371676. create_vbd /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:447
2015-01-09 13:56:40.014 28668 DEBUG nova.virt.xenapi.vm_utils [-] Plugging VBD OpaqueRef:f3e1cd93-f430-0285-c2b2-ffd6fb27b235 ...  vdi_attached_here /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:2189
2015-01-09 13:56:40.014 28668 DEBUG nova.openstack.common.lockutils [-] Created new semaphore "xenapi-vbd-OpaqueRef:1fe694a7-3da9-c399-a6ff-0a6d57f5fc6a" internal_lock /usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:206
2015-01-09 13:56:40.015 28668 DEBUG nova.openstack.common.lockutils [-] Acquired semaphore "xenapi-vbd-OpaqueRef:1fe694a7-3da9-c399-a6ff-0a6d57f5fc6a" lock /usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:229
2015-01-09 13:56:40.016 28668 DEBUG nova.openstack.common.lockutils [-] Got semaphore / lock "synchronized_plug" inner /usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:271
2015-01-09 13:56:40.026 28668 DEBUG nova.openstack.common.lockutils [-] Releasing semaphore "xenapi-vbd-OpaqueRef:1fe694a7-3da9-c399-a6ff-0a6d57f5fc6a" lock /usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:238
2015-01-09 13:56:40.026 28668 DEBUG nova.openstack.common.lockutils [-] Semaphore / lock released "synchronized_plug" inner /usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:275
2015-01-09 13:56:40.033 28668 DEBUG nova.virt.xenapi.vm_utils [-] Destroying VBD for VDI OpaqueRef:f8ed5761-e421-4781-0c43-37a662371676 done. vdi_attached_here /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:2213
2015-01-09 13:56:40.034 28668 ERROR nova.utils [-] [instance: 39c4c478-7404-4f3c-858e-e27802991397] Failed to spawn, rolling back
2015-01-09 13:56:40.034 28668 TRACE nova.utils [instance: 39c4c478-7404-4f3c-858e-e27802991397] Traceback (most recent call last):
2015-01-09 13:56:40.034 28668 TRACE nova.utils [instance: 39c4c478-7404-4f3c-858e-e27802991397]   File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/vmops.py", line 555, in _spawn
2015-01-09 13:56:40.034 28668 TRACE nova.utils [instance: 39c4c478-7404-4f3c-858e-e27802991397]     attach_devices_step(undo_mgr, vm_ref, vdis, disk_image_type)
2015-01-09 13:56:40.034 28668 TRACE nova.utils [instance: 39c4c478-7404-4f3c-858e-e27802991397]   File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/vmops.py", line 140, in inner
2015-01-09 13:56:40.034 28668 TRACE nova.utils [instance: 39c4c478-7404-4f3c-858e-e27802991397]     rv = f(*args, **kwargs)
2015-01-09 13:56:40.034 28668 TRACE nova.utils [instance: 39c4c478-7404-4f3c-858e-e27802991397]   File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/vmops.py", line 488, in attach_devices_step
2015-01-09 13:56:40.034 28668 TRACE nova.utils [instance: 39c4c478-7404-4f3c-858e-e27802991397]     attach_disks(undo_mgr, vm_ref, vdis, disk_image_type)
2015-01-09 13:56:40.034 28668 TRACE nova.utils [instance: 39c4c478-7404-4f3c-858e-e27802991397]   File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/vmops.py", line 435, in attach_disks
2015-01-09 13:56:40.034 28668 TRACE nova.utils [instance: 39c4c478-7404-4f3c-858e-e27802991397]     admin_password, injected_files)
2015-01-09 13:56:40.034 28668 TRACE nova.utils [instance: 39c4c478-7404-4f3c-858e-e27802991397]   File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/vmops.py", line 665, in _attach_disks
2015-01-09 13:56:40.034 28668 TRACE nova.utils [instance: 39c4c478-7404-4f3c-858e-e27802991397]     flavor['root_gb'])
2015-01-09 13:56:40.034 28668 TRACE nova.utils [instance: 39c4c478-7404-4f3c-858e-e27802991397]   File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py", line 983, in try_auto_configure_disk
2015-01-09 13:56:40.034 28668 TRACE nova.utils [instance: 39c4c478-7404-4f3c-858e-e27802991397]     _auto_configure_disk(session, vdi_ref, new_gb)
2015-01-09 13:56:40.034 28668 TRACE nova.utils [instance: 39c4c478-7404-4f3c-858e-e27802991397]   File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py", line 960, in _auto_configure_disk
2015-01-09 13:56:40.034 28668 TRACE nova.utils [instance: 39c4c478-7404-4f3c-858e-e27802991397]     with vdi_attached_here(session, vdi_ref, read_only=False) as dev:
2015-01-09 13:56:40.034 28668 TRACE nova.utils [instance: 39c4c478-7404-4f3c-858e-e27802991397]   File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
2015-01-09 13:56:40.034 28668 TRACE nova.utils [instance: 39c4c478-7404-4f3c-858e-e27802991397]     return self.gen.next()
2015-01-09 13:56:40.034 28668 TRACE nova.utils [instance: 39c4c478-7404-4f3c-858e-e27802991397]   File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py", line 2190, in vdi_attached_here
2015-01-09 13:56:40.034 28668 TRACE nova.utils [instance: 39c4c478-7404-4f3c-858e-e27802991397]     session.VBD.plug(vbd_ref, this_vm_ref)
2015-01-09 13:56:40.034 28668 TRACE nova.utils [instance: 39c4c478-7404-4f3c-858e-e27802991397]   File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/client/objects.py", line 84, in plug
2015-01-09 13:56:40.034 28668 TRACE nova.utils [instance: 39c4c478-7404-4f3c-858e-e27802991397]     synchronized_plug()
2015-01-09 13:56:40.034 28668 TRACE nova.utils [instance: 39c4c478-7404-4f3c-858e-e27802991397]   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py", line 272, in inner
2015-01-09 13:56:40.034 28668 TRACE nova.utils [instance: 39c4c478-7404-4f3c-858e-e27802991397]     return f(*args, **kwargs)
2015-01-09 13:56:40.034 28668 TRACE nova.utils [instance: 39c4c478-7404-4f3c-858e-e27802991397]   File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/client/objects.py", line 79, in synchronized_plug
2015-01-09 13:56:40.034 28668 TRACE nova.utils [instance: 39c4c478-7404-4f3c-858e-e27802991397]     self._call_method("plug", vbd_ref)
2015-01-09 13:56:40.034 28668 TRACE nova.utils [instance: 39c4c478-7404-4f3c-858e-e27802991397]   File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/client/objects.py", line 59, in _call_method
2015-01-09 13:56:40.034 28668 TRACE nova.utils [instance: 39c4c478-7404-4f3c-858e-e27802991397]     return self.session.call_xenapi(call, *args)
2015-01-09 13:56:40.034 28668 TRACE nova.utils [instance: 39c4c478-7404-4f3c-858e-e27802991397]   File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/client/session.py", line 187, in call_xenapi
2015-01-09 13:56:40.034 28668 TRACE nova.utils [instance: 39c4c478-7404-4f3c-858e-e27802991397]     return session.xenapi_request(method, args)
2015-01-09 13:56:40.034 28668 TRACE nova.utils [instance: 39c4c478-7404-4f3c-858e-e27802991397]   File "/usr/local/lib/python2.7/dist-packages/XenAPI.py", line 133, in xenapi_request
2015-01-09 13:56:40.034 28668 TRACE nova.utils [instance: 39c4c478-7404-4f3c-858e-e27802991397]     result = _parse_result(getattr(self, methodname)(*full_params))
2015-01-09 13:56:40.034 28668 TRACE nova.utils [instance: 39c4c478-7404-4f3c-858e-e27802991397]   File "/usr/local/lib/python2.7/dist-packages/XenAPI.py", line 203, in _parse_result
2015-01-09 13:56:40.034 28668 TRACE nova.utils [instance: 39c4c478-7404-4f3c-858e-e27802991397]     raise Failure(result['ErrorDescription'])
2015-01-09 13:56:40.034 28668 TRACE nova.utils [instance: 39c4c478-7404-4f3c-858e-e27802991397] Failure: ['VM_MISSING_PV_DRIVERS', 'OpaqueRef:1fe694a7-3da9-c399-a6ff-0a6d57f5fc6a']
2015-01-09 13:56:40.034 28668 TRACE nova.utils [instance: 39c4c478-7404-4f3c-858e-e27802991397]
2015-01-09 13:56:40.041 28668 WARNING nova.virt.xenapi.vm_utils [-] [instance: 39c4c478-7404-4f3c-858e-e27802991397] VM already halted, skipping shutdown...
2015-01-09 13:56:40.047 28668 DEBUG nova.virt.xenapi.vmops [-] [instance: 39c4c478-7404-4f3c-858e-e27802991397] Destroying VDIs _destroy_vdis /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vmops.py:1342
2015-01-09 13:56:40.050 28668 DEBUG nova.virt.xenapi.vmops [-] [instance: 39c4c478-7404-4f3c-858e-e27802991397] Using RAW or VHD, skipping kernel and ramdisk deletion _destroy_kernel_ramdisk /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vmops.py:1370
2015-01-09 13:56:40.069 28668 DEBUG nova.virt.xenapi.vm_utils [-] [instance: 39c4c478-7404-4f3c-858e-e27802991397] VM destroyed destroy_vm /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:322
2015-01-09 13:56:40.407 28668 ERROR nova.compute.manager [-] [instance: 39c4c478-7404-4f3c-858e-e27802991397] Instance failed to spawn
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397] Traceback (most recent call last):
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2249, in _build_resources
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]     yield resources
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2119, in _build_and_run_instance
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]     block_device_info=block_device_info)
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]   File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/driver.py", line 204, in spawn
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]     admin_password, network_info, block_device_info)
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]   File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/vmops.py", line 402, in spawn
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]     network_info, block_device_info, name_label, rescue)
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]   File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/vmops.py", line 572, in _spawn
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]     undo_mgr.rollback_and_reraise(msg=msg, instance=instance)
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]   File "/usr/lib/python2.7/dist-packages/nova/utils.py", line 813, in rollback_and_reraise
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]     self._rollback()
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/excutils.py", line 82, in __exit__
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]     six.reraise(self.type_, self.value, self.tb)
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]   File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/vmops.py", line 555, in _spawn
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]     attach_devices_step(undo_mgr, vm_ref, vdis, disk_image_type)
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]   File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/vmops.py", line 140, in inner
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]     rv = f(*args, **kwargs)
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]   File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/vmops.py", line 488, in attach_devices_step
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]     attach_disks(undo_mgr, vm_ref, vdis, disk_image_type)
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]   File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/vmops.py", line 435, in attach_disks
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]     admin_password, injected_files)
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]   File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/vmops.py", line 665, in _attach_disks
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]     flavor['root_gb'])
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]   File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py", line 983, in try_auto_configure_disk
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]     _auto_configure_disk(session, vdi_ref, new_gb)
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]   File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py", line 960, in _auto_configure_disk
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]     with vdi_attached_here(session, vdi_ref, read_only=False) as dev:
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]   File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]     return self.gen.next()
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]   File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py", line 2190, in vdi_attached_here
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]     session.VBD.plug(vbd_ref, this_vm_ref)
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]   File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/client/objects.py", line 84, in plug
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]     synchronized_plug()
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]   File "/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py", line 272, in inner
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]     return f(*args, **kwargs)
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]   File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/client/objects.py", line 79, in synchronized_plug
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]     self._call_method("plug", vbd_ref)
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]   File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/client/objects.py", line 59, in _call_method
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]     return self.session.call_xenapi(call, *args)
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]   File "/usr/lib/python2.7/dist-packages/nova/virt/xenapi/client/session.py", line 187, in call_xenapi
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]     return session.xenapi_request(method, args)
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]   File "/usr/local/lib/python2.7/dist-packages/XenAPI.py", line 133, in xenapi_request
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]     result = _parse_result(getattr(self, methodname)(*full_params))
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]   File "/usr/local/lib/python2.7/dist-packages/XenAPI.py", line 203, in _parse_result
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]     raise Failure(result['ErrorDescription'])
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397] Failure: ['VM_MISSING_PV_DRIVERS', 'OpaqueRef:1fe694a7-3da9-c399-a6ff-0a6d57f5fc6a']
2015-01-09 13:56:40.407 28668 TRACE nova.compute.manager [instance: 39c4c478-7404-4f3c-858e-e27802991397]
2015-01-09 13:56:40.410 28668 AUDIT nova.compute.manager [req-b92ff7e2-7cac-4b39-acfa-659feaef31dc None] [instance: 39c4c478-7404-4f3c-858e-e27802991397] Terminating instance
2015-01-09 13:56:40.412 28668 INFO nova.virt.xenapi.vmops [-] [instance: 39c4c478-7404-4f3c-858e-e27802991397] Destroying VM
2015-01-09 13:56:40.419 28668 WARNING nova.virt.xenapi.vmops [-] [instance: 39c4c478-7404-4f3c-858e-e27802991397] VM is not present, skipping destroy...
2015-01-09 13:56:40.420 28668 DEBUG nova.compute.claims [-] [instance: 39c4c478-7404-4f3c-858e-e27802991397] Aborting claim: [Claim: 4134 MB memory, 40 GB disk] abort /usr/lib/python2.7/dist-packages/nova/compute/claims.py:128
2015-01-09 13:56:40.420 28668 DEBUG nova.openstack.common.lockutils [-] Created new semaphore "compute_resources" internal_lock /usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:206
2015-01-09 13:56:40.420 28668 DEBUG nova.openstack.common.lockutils [-] Acquired semaphore "compute_resources" lock /usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:229
2015-01-09 13:56:40.421 28668 DEBUG nova.openstack.common.lockutils [-] Got semaphore / lock "abort_instance_claim" inner /usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:271
2015-01-09 13:56:40.450 28668 INFO nova.scheduler.client.report [-] Compute_service record updated for ('compute-1c-03', u'hv-1c-01')
2015-01-09 13:56:40.450 28668 DEBUG nova.openstack.common.lockutils [-] Releasing semaphore "compute_resources" lock /usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:238
2015-01-09 13:56:40.451 28668 DEBUG nova.openstack.common.lockutils [-] Semaphore / lock released "abort_instance_claim" inner /usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:275
2015-01-09 13:56:40.452 28668 DEBUG nova.compute.utils [-] [instance: 39c4c478-7404-4f3c-858e-e27802991397] ['VM_MISSING_PV_DRIVERS', 'OpaqueRef:1fe694a7-3da9-c399-a6ff-0a6d57f5fc6a'] notify_about_instance_usage /usr/lib/python2.7/dist-packages/nova/compute/utils.py:307
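
One thing I notice reading nova/virt/xenapi/vm_utils.py: vdi_attached_here creates and plugs the VBD on this_vm_ref, i.e. the VM that nova-compute itself runs in, and the OpaqueRef in the failure above matches that plug target rather than the newly created instance. So I'm starting to suspect the missing PV drivers are in our compute domU, not in the image. If that's right, something like this against the pool master should show whether the compute VM reports PV drivers (a sketch; host, credentials, and the VM UUID are placeholders):

# Sketch: ask XenAPI whether the VM running nova-compute reports PV drivers.
# Host, credentials, and the VM UUID below are placeholders.
import XenAPI

session = XenAPI.Session('https://hv-1c-01')
session.xenapi.login_with_password('root', 'PASSWORD')
try:
    vm = session.xenapi.VM.get_by_uuid('COMPUTE-VM-UUID')
    gm = session.xenapi.VM.get_guest_metrics(vm)
    if gm == 'OpaqueRef:NULL':
        print('no guest metrics: PV drivers are not running in this VM')
    else:
        print(session.xenapi.VM_guest_metrics.get_PV_drivers_version(gm))
finally:
    session.xenapi.session.logout()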