Horizon Automatic Disk Partitioning Fails, But Manual Works Fine

asked 2014-12-17 20:13:34 -0600

Nate

updated 2015-01-09 17:28:51 -0600

We have a fairly new OpenStack install. We had previously done everything through the command line, so now we are facing some issues with Horizon put on top. The main issue is that whenever we launch an instance we get a pop-up error: "Failed to launch instance: Please try again later [Error: No valid host was found.]"

This only happens when using automatic disk partitioning; the instance deploys just fine with manual disk partitioning (this setting is under the advanced options when launching an instance). Digging through the logs on the controller, I get this set of entries in nova-conductor.log:

2014-12-17 18:01:16.684 7997 ERROR nova.scheduler.utils [req-db846d30-f52b-41c0-8af8-c384324c2aaf None] [instance: ec1ee24a-43ba-44e0-8485-3f8fec803e8e] Error from last host: compute-1a-01 (node hv-1a-01): [u'Traceback (most recent call last):\n', u'  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2014, in do_build_and_run_instance\n    filter_properties)\n', u'  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2149, in _build_and_run_instance\n    instance_uuid=instance.uuid, reason=six.text_type(e))\n', u"RescheduledException: Build of instance ec1ee24a-43ba-44e0-8485-3f8fec803e8e was re-scheduled: ['VM_MISSING_PV_DRIVERS', 'OpaqueRef:398fbf58-5bc2-79f3-e787-40f3ab011d9b']\n"]

2014-12-17 18:01:16.709 7997 WARNING nova.scheduler.driver [req-db846d30-f52b-41c0-8af8-c384324c2aaf None] [instance: ec1ee24a-43ba-44e0-8485-3f8fec803e8e] NoValidHost exception with message: 'No valid host was found.'

2014-12-17 18:01:16.709 7997 WARNING nova.scheduler.driver [req-db846d30-f52b-41c0-8af8-c384324c2aaf None] [instance: ec1ee24a-43ba-44e0-8485-3f8fec803e8e] Setting instance to ERROR state.

To me this doesn't make sense, as our VMs are in hvm mode:

glance image-show f596113a-bb8b-449a-a72b-40574de81391
| Property                | Value                                |
| Property 'architecture' | x86_64                               |
| Property 'vm_mode'      | hvm                                  |
| checksum                | 3f113dc3577376f280f13ccc7f7f024f     |
| container_format        | ovf                                  |
| created_at              | 2014-12-18T00:59:33.864002           |
| deleted                 | False                                |
| disk_format             | vhd                                  |
| id                      | f596113a-bb8b-449a-a72b-40574de81391 |
| is_public               | True                                 |
| min_disk                | 0                                    |
| min_ram                 | 0                                    |
| name                    | tinycore                             |
| owner                   | 25c3010a8ebb42cfa361ce03a4593c64     |
| protected               | False                                |
| size                    | 30705815                             |
| status                  | active                               |
| updated_at              | 2014-12-18T01:07:40.141267           |
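For reference, the vm_mode and architecture properties shown above were set on the image with glance image-update; a rough sketch of the commands (Juno-era glance CLI, using the image ID from the listing) looks like this:

```shell
# Sketch: set the custom properties shown in the listing above
glance image-update f596113a-bb8b-449a-a72b-40574de81391 \
    --property vm_mode=hvm \
    --property architecture=x86_64

# Verify the properties took effect
glance image-show f596113a-bb8b-449a-a72b-40574de81391 | grep Property
```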

Not sure what to fiddle with next. Additional information:

Hypervisor = XenServer 6.5

Controller and Compute OS = Ubuntu 14.04

Only a single hypervisor and single compute node are in the cluster as of now

No errors were seen when launching from the CLI

Installed manually from repositories

Version Juno (running apt list | grep nova): nova-compute-xen/trusty-updates 1:2014.2-0ubuntu1~cloud0 all

Additional: here is the stack trace from nova-compute doing an automatic disk partition. The same thing occurs on XenServer 6.5 and 6.2:

2015-01-09 13:56:38.008 28668 DEBUG nova.virt.xenapi.vm_utils [-] Cloned VDI OpaqueRef:f8ed5761-e421-4781-0c43-37a662371676 from VDI OpaqueRef:2fb50434-1e6c-5cee-4933-9f68d0f4eebc _clone_vdi /usr/lib/python2.7/dist-packages/nova/virt/xenapi/vm_utils.py:568
2015-01-09 13:56:38.612 28668 DEBUG nova.openstack.common.lockutils [-] Releasing semaphore "xenapi-image-cachef596113a-bb8b-449a-a72b-40574de81391" lock /usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:238
2015-01-09 13:56:38.613 28668 DEBUG nova.openstack.common.lockutils [-] Semaphore / lock released "_create_cached_image_impl" inner /usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py:275
2015-01-09 13:56:38.613 28668 INFO nova.virt.xenapi.vm_utils [-] Image creation data, cacheable: True, downloaded: False duration: 1.47 secs for image f596113a-bb8b-449a-a72b-40574de81391
2015-01-09 13:56:39.235 28668 DEBUG nova.virt.xenapi.vmops ...


Search for that error message (https://ask.openstack.org/en/question...) and try to follow the suggestions there.

smaffulli ( 2014-12-18 13:47:14 -0600 )

I had already looked through there, and I wasn't able to find anything quite like ours.

Nate ( 2014-12-18 14:07:45 -0600 )

2 answers


answered 2014-12-19 23:36:22 -0600

I'd recommend using XenServer 6.2, because XenServer 6.5 is only in beta, so the main reason may be the unstable Xen kernel. Another possibility is your VM type; try doing the same thing with a paravirtualized virtual machine.



Sorry for the delay, but I installed XenServer 6.2 and was still presented with the same problems.

I've added more information to the question above: the stack trace coming out of nova-compute.

Nate ( 2015-01-09 17:26:08 -0600 )

Please try to create a paravirtualized virtual machine. As I can see in your log, the issue you are facing is caused by VM_MISSING_PV_DRIVERS - it means you are not running a Xen kernel with xen-tools installed on the VM. Please refer to the documentation regarding Xen & OpenStack.

Pavel Kutishchev ( 2015-01-09 23:50:48 -0600 )
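Concretely, booting the image as a paravirtualized guest could look something like this (a sketch only; vm_mode=xen is the value the XenAPI driver treats as PV, the image ID is taken from the question, and the flavor name is just an example):

```shell
# Sketch: mark the image so the XenAPI driver boots it paravirtualized
glance image-update f596113a-bb8b-449a-a72b-40574de81391 \
    --property vm_mode=xen

# Boot a test instance from it and check whether it still errors out
nova boot --image f596113a-bb8b-449a-a72b-40574de81391 \
    --flavor m1.tiny pv-test
nova show pv-test | grep -E 'status|fault'
```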

https://wiki.openstack.org/wiki/XenSe... - look at this page; it may help you and answer your questions.

Pavel Kutishchev ( 2015-01-09 23:52:54 -0600 )

answered 2015-01-20 07:37:46 -0600

Bob Ball

updated 2015-01-20 07:39:46 -0600

I suspect this is because you're running the compute instance in an HVM guest rather than a PV guest. If you're running HVM then you must have the PV drivers installed in that guest, probably along with the PV tools.

During the automatic disk partitioning step, the VHD must be mounted into the compute OS DomU so we can run fdisk on it. This is only possible if the compute OS DomU has PV drivers installed. Whether the tenant VMs run as HVM or PV is a red herring, but we'd always recommend installing the PV drivers in all VMs, HVM or not, as they give the best performance.

Note that we have only tested the Compute OS DomU as a PV guest; there may be other issues encountered if you use HVM.
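A quick way to check whether PV drivers are actually active in the compute OS DomU is to look for the Xen frontend modules and the /proc/xen interface. This is a sketch; the module names assume a reasonably modern Linux kernel with the in-tree Xen frontends:

```shell
# Run inside the compute OS DomU. xen_blkfront / xen_netfront are the PV
# frontend drivers; an HVM guest without PV drivers won't have them loaded.
if lsmod | grep -qE 'xen_blkfront|xen_netfront'; then
    echo "PV frontend drivers loaded"
else
    echo "PV frontend drivers NOT loaded"
fi

# A PV guest (or an HVM guest with PV drivers) also exposes /proc/xen
[ -d /proc/xen ] && echo "/proc/xen present" || echo "/proc/xen missing"
```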

