RDO Kilo - NoValidHost: No valid host was found. There are not enough hosts available.
Hello,
I'm using Kilo on three nodes (controller, compute, network) deployed by RDO.
When I create an instance, this error is logged in nova-conductor.log:
2015-05-28 10:11:49.463 4197 WARNING nova.scheduler.utils [req-1acb7800-df4b-44dc-870f-3c02d9389000 70c277bb4adf462c9d0e7a6c4e9081be 5a7edd398ad74e07b9ee3d69b7b7ae0e - - -] Failed to compute_task_build_instances: No valid host was found. There are not enough hosts available.
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 142, in inner
return func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 86, in select_destinations
filter_properties)
File "/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 80, in select_destinations
raise exception.NoValidHost(reason=reason)
NoValidHost: No valid host was found. There are not enough hosts available.
2015-05-28 10:11:49.464 4197 WARNING nova.scheduler.utils [req-1acb7800-df4b-44dc-870f-3c02d9389000 70c277bb4adf462c9d0e7a6c4e9081be 5a7edd398ad74e07b9ee3d69b7b7ae0e - - -] [instance: cc8335c7-921b-422c-9f06-8087ee15b2e6] Setting instance to ERROR state.
And this error message is logged in nova-compute.log:
2015-05-28 09:59:17.129 2361 ERROR nova.compute.manager [req-3e97e0b2-819e-4996-aee1-68fc608bd275 - - - - -] No compute node record for host mido54
Here are some outputs. nova hypervisor-list shows no hypervisor at all. Is this wrong?
# nova service-list
+----+------------------+--------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+----+------------------+--------+----------+---------+-------+----------------------------+-----------------+
| 1 | nova-consoleauth | mido53 | internal | enabled | up | 2015-05-28T01:13:23.000000 | - |
| 2 | nova-scheduler | mido53 | internal | enabled | up | 2015-05-28T01:13:24.000000 | - |
| 3 | nova-conductor | mido53 | internal | enabled | up | 2015-05-28T01:13:23.000000 | - |
| 5 | nova-cert | mido53 | internal | enabled | up | 2015-05-28T01:13:24.000000 | - |
| 6 | nova-compute | mido54 | nova | enabled | up | 2015-05-28T01:13:22.000000 | - |
+----+------------------+--------+----------+---------+-------+----------------------------+-----------------+
# neutron agent-list
+--------------------------------------+--------------------+--------+-------+----------------+---------------------------+
| id | agent_type | host | alive | admin_state_up | binary |
+--------------------------------------+--------------------+--------+-------+----------------+---------------------------+
| 0766501c-9828-482b-b1aa-7c7b01b9c951 | Open vSwitch agent | mido54 | :-) | True | neutron-openvswitch-agent |
| 3e93ac72-c961-48b4-8e32-692dec6f4b9d | Loadbalancer agent | mido55 | :-) | True | neutron-lbaas-agent |
| 4df13ec4-0d0b-4ebb-a324-a8dd5cd4764e | L3 agent | mido55 | :-) | True | neutron-l3-agent |
| 98fa4bc4-1da3-4ab0-9d13-90d82eb3a2c6 | Loadbalancer agent | mido53 | :-) | True | neutron-lbaas-agent |
| a804ba2d-eea0-4da6-b4c0-28192977acee | Open vSwitch agent | mido55 | :-) | True | neutron-openvswitch-agent |
| c6f909c8-6b7c-4e20-9714-581f22cfcd81 | Metadata agent | mido55 | :-) | True | neutron-metadata-agent |
| cc09072b-0669-49f4-a2c3-c4db884dbdfe | DHCP agent | mido55 | :-) | True | neutron-dhcp-agent |
+--------------------------------------+--------------------+--------+-------+----------------+---------------------------+
# nova host-list
+-----------+-------------+----------+
| host_name | service | zone |
+-----------+-------------+----------+
| mido53 | consoleauth | internal |
| mido53 | scheduler | internal |
| mido53 | conductor | internal |
| mido53 | cert | internal |
| mido54 | compute | nova |
+-----------+-------------+----------+
# nova hypervisor-list
+----+---------------------+-------+--------+
| ID | Hypervisor hostname | State | Status |
+----+---------------------+-------+--------+
+----+---------------------+-------+--------+
[compute node]# grep ^virt_type /etc/nova/nova.conf
virt_type=kvm
Similar Questions: https://ask.openstack.org/en/question... https://ask.openstack.org/en/question...
Thank you.
Resolved.
I had enabled Ironic, so compute_driver in nova.conf was set to "compute_driver=nova.virt.ironic.IronicDriver".
I changed it to "compute_driver=libvirt.LibvirtDriver".
Thanks.
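For anyone hitting the same symptom, the driver change can be sketched as below. This is a minimal sketch: it edits a throwaway sample file (the contents are illustrative) rather than the live /etc/nova/nova.conf, so it is safe to run anywhere; openstack-nova-compute is the RDO service name.

```shell
# Create a sample nova.conf fragment showing the bad state
# (Ironic driver configured on a KVM compute node):
cat > /tmp/nova.conf.sample <<'EOF'
[DEFAULT]
compute_driver=nova.virt.ironic.IronicDriver
virt_type=kvm
EOF

# Switch the compute driver back to libvirt:
sed -i 's|^compute_driver=.*|compute_driver=libvirt.LibvirtDriver|' /tmp/nova.conf.sample

# Verify the change:
grep ^compute_driver /tmp/nova.conf.sample
# prints: compute_driver=libvirt.LibvirtDriver
```

On the real compute node, apply the same edit to /etc/nova/nova.conf and then restart the service (systemctl restart openstack-nova-compute) so the hypervisor registers and shows up in nova hypervisor-list.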
Sorry for reopening your question. It's because you were reporting an issue that came up during the initial RC2 Delorean testing of RDO Kilo on CentOS 7.1. Actually, my question is:
In RC2 testing this bug came up after RDO Kilo deployment across several VMs. That's why I find it hard to believe that enabling Ironic is a correct fix rather than a hack.
At first, I enabled both repos (rdo-kilo, rdo-manager-release) on all nodes, and then ran packstack with an answer file that has Ironic enabled.
Does
yum install -y https://rdoproject.org/repos/rdo-release.rpm
enable rdo-manager-release?
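One way to answer that on the affected node is to inspect what the rdo-release package actually ships and which repos end up enabled. This is a sketch of generic inspection commands (it assumes the package is named rdo-release and must be run on a node where it is installed):

```shell
# List the .repo files installed by the rdo-release package:
rpm -ql rdo-release | grep '\.repo$'

# Show which repos are currently enabled, filtered to RDO-related ids:
yum repolist enabled 2>/dev/null | grep -i rdo
```

If rdo-manager-release does not appear in either listing, it was enabled by something else (for example, installing it separately before running packstack).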