Horizon Automatic Disk Partitioning Fails, But Manual Works Fine
We have a fairly new OpenStack install. We had previously done everything through the command line, and now that Horizon is on top we are running into some issues. The main problem is that whenever we launch an instance we get the pop-up error "Failed to launch instance": Please try again later [Error: No valid host was found.].
This only happens when using automatic disk partitioning; the instance deploys just fine with manual disk partitioning (the setting is under Advanced Options when launching an instance). Digging through the logs on the controller, I get this set of entries in nova-conductor.log:
2014-12-17 18:01:16.684 7997 ERROR nova.scheduler.utils [req-db846d30-f52b-41c0-8af8-c384324c2aaf None] [instance: ec1ee24a-43ba-44e0-8485-3f8fec803e8e] Error from last host: compute-1a-01 (node hv-1a-01): [u'Traceback (most recent call last):\n', u' File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2014, in do_build_and_run_instance\n filter_properties)\n', u' File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2149, in _build_and_run_instance\n instance_uuid=instance.uuid, reason=six.text_type(e))\n', u"RescheduledException: Build of instance ec1ee24a-43ba-44e0-8485-3f8fec803e8e was re-scheduled: ['VM_MISSING_PV_DRIVERS', 'OpaqueRef:398fbf58-5bc2-79f3-e787-40f3ab011d9b']\n"]
2014-12-17 18:01:16.709 7997 WARNING nova.scheduler.driver [req-db846d30-f52b-41c0-8af8-c384324c2aaf None] [instance: ec1ee24a-43ba-44e0-8485-3f8fec803e8e] NoValidHost exception with message: 'No valid host was found.'
2014-12-17 18:01:16.709 7997 WARNING nova.scheduler.driver [req-db846d30-f52b-41c0-8af8-c384324c2aaf None] [instance: ec1ee24a-43ba-44e0-8485-3f8fec803e8e] Setting instance to ERROR state.
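The reschedule reason mentions VM_MISSING_PV_DRIVERS, so for reference, this is roughly how I've been checking what dom0 reports for the guest from the XenServer console (a sketch only; <vm-uuid> is a placeholder for whichever VM record Nova created):

# list VM records so I can find the one Nova created for the instance
xe vm-list params=uuid,name-label,power-state

# HVM-boot-policy should be non-empty ("BIOS order") for an HVM guest
xe vm-param-get uuid=<vm-uuid> param-name=HVM-boot-policy

# see whether XenServer reports any PV tools inside the guest
xe vm-param-list uuid=<vm-uuid> | grep -i "PV-drivers"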
This doesn't make sense to me, since our VMs are in HVM mode:
glance image-show f596113a-bb8b-449a-a72b-40574de81391
+-------------------------+--------------------------------------+
| Property                | Value                                |
+-------------------------+--------------------------------------+
| Property 'architecture' | x86_64                               |
| Property 'vm_mode'      | hvm                                  |
| checksum                | 3f113dc3577376f280f13ccc7f7f024f     |
| container_format        | ovf                                  |
| created_at              | 2014-12-18T00:59:33.864002           |
| deleted                 | False                                |
| disk_format             | vhd                                  |
| id                      | f596113a-bb8b-449a-a72b-40574de81391 |
| is_public               | True                                 |
| min_disk                | 0                                    |
| min_ram                 | 0                                    |
| name                    | tinycore                             |
| owner                   | 25c3010a8ebb42cfa361ce03a4593c64     |
| protected               | False                                |
| size                    | 30705815                             |
| status                  | active                               |
| updated_at              | 2014-12-18T01:07:40.141267           |
+-------------------------+--------------------------------------+
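If it turns out an image property needs changing, this is the kind of command I'd use to set one (auto_disk_config here is my assumption about which property the XenAPI driver consults on the automatic-partitioning path, so treat it as a sketch rather than a known fix):

# hypothetical tweak: adjust an image property from the CLI
# (auto_disk_config is assumed, not confirmed, to influence the XenAPI
# automatic root-disk partitioning behaviour)
glance image-update f596113a-bb8b-449a-a72b-40574de81391 --property auto_disk_config=false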
I'm not sure what to fiddle with next. Additional information:
Hypervisor = XenServer 6.5
Controller and Compute OS = Ubuntu 14.04
Only a single hypervisor and a single compute node are in the cluster right now
Didn't see any errors when launching the same image from the CLI (see the example boot command after this list)
Installed manually from repositories
Version: Juno (from apt list | grep nova): nova-compute-xen/trusty-updates 1:2014.2-0ubuntu1~cloud0 all
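For reference, the CLI launches that worked looked roughly like this (flavor, network ID, and instance name are placeholders from memory rather than exact values):

# example of the kind of boot command that worked from the CLI
nova boot --image tinycore --flavor m1.tiny --nic net-id=<net-uuid> test-instance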
Additional: here is the stack trace from nova-compute during an automatic disk partitioning attempt. The same thing occurs on XenServer 6.5 and 6.2:
2015-01-09 13:56:38.008 28668 DEBUG nova.virt.xenapi.vm_utils [-] Cloned VDI OpaqueRef:f8ed5761-e421-4781-0c43-37a662371676 from VDI OpaqueRef:2fb50434-1e6c-5cee-4933-9f68d0f4eebc _clone_vdi /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:568
2015-01-09 13:56:38.612 28668 DEBUG nova.openstack.common.lockutils [-] Releasing semaphore "xenapi-image-cachef596113a-bb8b-449a-a72b-40574de81391" lock /usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:238
2015-01-09 13:56:38.613 28668 DEBUG nova.openstack.common.lockutils [-] Semaphore / lock released "_create_cached_image_impl" inner /usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:275
2015-01-09 13:56:38.613 28668 INFO nova.virt.xenapi.vm_utils [-] Image creation data, cacheable: True, downloaded: False duration: 1.47 secs for image f596113a-bb8b-449a-a72b-40574de81391
2015-01-09 13:56:39.235 28668 DEBUG nova.virt.xenapi.vmops ...
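The paste above is cut off; this is roughly how I'm pulling the surrounding context out of the compute log (stock Ubuntu package log path, and <instance-uuid> is a placeholder for whichever instance I just tried to boot):

# find every line mentioning the instance, then grab the traceback with context
grep -n "<instance-uuid>" /var/log/nova/nova-compute.log
grep -n -A 40 "Traceback (most recent call last)" /var/log/nova/nova-compute.log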
Search for that error message (https://ask.openstack.org/en/question...) and try to follow the suggestions there.
I had already looked through there, and I wasn't able to find anything quite like our issue.