
RDO Kilo - NoValidHost: No valid host was found. There are not enough hosts available.

asked 2015-05-27 20:21:45 -0500 by idzzy, updated 2015-05-27 20:26:46 -0500

Hello,

I'm using Kilo on three nodes (controller, compute, network) deployed by RDO.

When I create an instance, this error is output in nova-conductor.log:

2015-05-28 10:11:49.463 4197 WARNING nova.scheduler.utils [req-1acb7800-df4b-44dc-870f-3c02d9389000 70c277bb4adf462c9d0e7a6c4e9081be 5a7edd398ad74e07b9ee3d69b7b7ae0e - - -] Failed to compute_task_build_instances: No valid host was found. There are not enough hosts available.
Traceback (most recent call last):

  File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 142, in inner
    return func(*args, **kwargs)

  File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 86, in select_destinations
    filter_properties)

  File "/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 80, in select_destinations
    raise exception.NoValidHost(reason=reason)

NoValidHost: No valid host was found. There are not enough hosts available.

2015-05-28 10:11:49.464 4197 WARNING nova.scheduler.utils [req-1acb7800-df4b-44dc-870f-3c02d9389000 70c277bb4adf462c9d0e7a6c4e9081be 5a7edd398ad74e07b9ee3d69b7b7ae0e - - -] [instance: cc8335c7-921b-422c-9f06-8087ee15b2e6] Setting instance to ERROR state.

And this error message is output in nova-compute.log:

2015-05-28 09:59:17.129 2361 ERROR nova.compute.manager [req-3e97e0b2-819e-4996-aee1-68fc608bd275 - - - - -] No compute node record for host mido54

Here are some outputs. No hypervisor is listed; is this wrong?

# nova service-list
+----+------------------+--------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host   | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+--------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-consoleauth | mido53 | internal | enabled | up    | 2015-05-28T01:13:23.000000 | -               |
| 2  | nova-scheduler   | mido53 | internal | enabled | up    | 2015-05-28T01:13:24.000000 | -               |
| 3  | nova-conductor   | mido53 | internal | enabled | up    | 2015-05-28T01:13:23.000000 | -               |
| 5  | nova-cert        | mido53 | internal | enabled | up    | 2015-05-28T01:13:24.000000 | -               |
| 6  | nova-compute     | mido54 | nova     | enabled | up    | 2015-05-28T01:13:22.000000 | -               |
+----+------------------+--------+----------+---------+-------+----------------------------+-----------------+

# neutron agent-list
+--------------------------------------+--------------------+--------+-------+----------------+---------------------------+
| id                                   | agent_type         | host   | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+--------+-------+----------------+---------------------------+
| 0766501c-9828-482b-b1aa-7c7b01b9c951 | Open vSwitch agent | mido54 | :-)   | True           | neutron-openvswitch-agent |
| 3e93ac72-c961-48b4-8e32-692dec6f4b9d | Loadbalancer agent | mido55 | :-)   | True           | neutron-lbaas-agent       |
| 4df13ec4-0d0b-4ebb-a324-a8dd5cd4764e | L3 agent           | mido55 | :-)   | True           | neutron-l3-agent          |
| 98fa4bc4-1da3-4ab0-9d13-90d82eb3a2c6 | Loadbalancer agent | mido53 | :-)   | True           | neutron-lbaas-agent       |
| a804ba2d-eea0-4da6-b4c0-28192977acee | Open vSwitch agent | mido55 | :-)   | True           | neutron-openvswitch-agent |
| c6f909c8-6b7c-4e20-9714-581f22cfcd81 | Metadata agent     | mido55 | :-)   | True           | neutron-metadata-agent    |
| cc09072b-0669-49f4-a2c3-c4db884dbdfe | DHCP agent         | mido55 | :-)   | True           | neutron-dhcp-agent        |
+--------------------------------------+--------------------+--------+-------+----------------+---------------------------+

# nova host-list
+-----------+-------------+----------+
| host_name | service     | zone     |
+-----------+-------------+----------+
| mido53    | consoleauth | internal |
| mido53    | scheduler   | internal |
| mido53    | conductor   | internal |
| mido53    | cert        | internal |
| mido54    | compute     | nova     |
+-----------+-------------+----------+

# nova hypervisor-list
+----+---------------------+-------+--------+
| ID | Hypervisor hostname | State | Status |
+----+---------------------+-------+--------+
+----+---------------------+-------+--------+

[compute node]# grep ^virt_type /etc/nova/nova.conf
virt_type=kvm

Similar questions:
https://ask.openstack.org/en/question...
https://ask.openstack.org/en/question...

Thank you.


Comments

Resolved.

I had enabled Ironic, so nova.conf had "compute_driver=nova.virt.ironic.IronicDriver".

I changed it to "compute_driver=libvirt.LibvirtDriver"

Thanks.
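The fix can be sketched as a one-line config change. This is shown on a scratch copy (nova.conf.sample is illustrative) so nothing real is touched; on the compute node the file is /etc/nova/nova.conf, and you would restart openstack-nova-compute afterwards:

```shell
# Rehearse the compute_driver change on a scratch copy of nova.conf.
cat > nova.conf.sample <<'EOF'
[DEFAULT]
compute_driver=nova.virt.ironic.IronicDriver
EOF

# Swap the Ironic driver for the libvirt driver.
sed -i 's|^compute_driver=.*|compute_driver=libvirt.LibvirtDriver|' nova.conf.sample

# Confirm the result.
grep '^compute_driver' nova.conf.sample
```

On a real node, follow the edit with `systemctl restart openstack-nova-compute` so the driver change takes effect.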

idzzy ( 2015-05-27 22:22:20 -0500 )

Sorry for reopening your question. It's because you were reporting an issue which came up during the initial RC2 Delorean testing of RDO Kilo on CentOS 7.1. Actually, my question is:

Did you enable the RDO repo on each node touched by the packstack run before enabling Ironic?
dbaxps ( 2015-05-27 23:52:50 -0500 )

In RC2 testing this bug came up after an RDO Kilo deployment across several VMs. That's why I find it hard to believe that enabling Ironic is a correct fix rather than a hack.

dbaxps ( 2015-05-28 00:28:09 -0500 )

I did. At first I enabled both repos (rdo-kilo, rdo-manager-release) on all nodes, and then ran packstack with an answer file that had Ironic enabled.

idzzy ( 2015-05-28 00:46:33 -0500 )

Does yum install -y https://rdoproject.org/repos/rdo-release.rpm enable rdo-manager-release?

dbaxps ( 2015-05-28 01:12:42 -0500 )

5 answers


answered 2015-08-15 06:57:00 -0500 by jasimmk

I was having the same issue on CentOS 7.1 (kernel 3.10.0-229.11.1.el7.x86_64).

Error @ /var/log/nova/nova-conductor.log

  File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 142, in inner
    return func(*args, **kwargs)

  File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 86, in select_destinations
    filter_properties)

  File "/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 80, in select_destinations
    raise exception.NoValidHost(reason=reason)

NoValidHost: No valid host was found. There are not enough hosts available.

Error @ /var/log/nova/nova-compute.log

[instance: 5676465f-6c5a-4460-9e35-6cd0a2b74c53]     rv = meth(*args, **kwargs)
[instance: 5676465f-6c5a-4460-9e35-6cd0a2b74c53]   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 996, in createWithFlags
[instance: 5676465f-6c5a-4460-9e35-6cd0a2b74c53]     if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
[instance: 5676465f-6c5a-4460-9e35-6cd0a2b74c53] libvirtError: internal error: process exited while connecting to monitor: 2015-08-15T11:05:42.687877Z qemu-kvm: -chardev pty,id=charserial1: Failed to create chardev
[instance: 5676465f-6c5a-4460-9e35-6cd0a2b74c53]
[instance: 5676465f-6c5a-4460-9e35-6cd0a2b74c53]

I reinstalled CentOS twice and was still having the same issue.

What helped me solve it was codebauss's method:

https://bugzilla.redhat.com/show_bug.cgi?id=1138502

Modify your /etc/fstab and then restart your server:

vi /etc/fstab

You will/should see a line starting with devpts. Copy the line just in case something goes wrong and comment out the original. Modify the copied line to look like this:

devpts        /dev/pts        devpts        gid=5,mode=620    0    0

The error was gone, and I was able to create VMs without any issues.
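The fstab edit can be rehearsed on a scratch file first (fstab.sample is illustrative; the real file is /etc/fstab):

```shell
# Scratch copy with a devpts line like the one you may find in /etc/fstab.
printf 'devpts  /dev/pts  devpts  gid=5  0 0\n' > fstab.sample

# Keep the original line, commented out, in case something goes wrong.
sed -i 's|^devpts|#&|' fstab.sample

# Append the corrected line from the bugzilla workaround.
echo 'devpts        /dev/pts        devpts        gid=5,mode=620    0    0' >> fstab.sample

cat fstab.sample
```

Once /etc/fstab itself carries the new line, either reboot or remount as described in the comment below this answer.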


Comments


Thanks, that fixed it for me. To add, there's no need to reboot; you can just remount the filesystem: mount -o remount devpts.

netcoder ( 2015-08-22 16:57:57 -0500 )

answered 2016-03-09 07:18:27 -0500 by JJJ

The same problem happened to me, and I resolved it by modifying /etc/nova/nova.conf: find the line "ram_allocation_ratio=1.5" and change the "1.5" to "3.0". Then restart the "openstack-nova*" services, and you can launch an instance successfully.
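A sketch of that change, done on a scratch copy so nothing real is modified (nova.conf.sample is illustrative; on a real node edit /etc/nova/nova.conf and restart the openstack-nova-* services):

```shell
# Rehearse raising the RAM overcommit ratio on a scratch copy of nova.conf.
cat > nova.conf.sample <<'EOF'
[DEFAULT]
ram_allocation_ratio=1.5
EOF

# Raise the ratio so the scheduler accepts more instances per unit of RAM.
sed -i 's|^ram_allocation_ratio=.*|ram_allocation_ratio=3.0|' nova.conf.sample

grep '^ram_allocation_ratio' nova.conf.sample
```

Note that this only helps when NoValidHost is caused by the RamFilter rejecting hosts for lack of memory; it overcommits RAM rather than adding capacity.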


Comments

I have the same problem; I'm getting "no valid host was found" in nova-conductor.log. I tried your solution, but it didn't work for me. Can you give some other solution? I have installed OpenStack Mitaka.

Spoorthi Vaidya ( 2019-05-11 00:23:13 -0500 )

answered 2015-05-28 02:53:24 -0500 by dbaxps

To perform a three-node RDO Kilo deployment (for now), just run on each node:

yum -y update
yum install -y https://rdoproject.org/repos/rdo-release.rpm

Only on the controller, run:

yum install -y openstack-packstack

and, just in case (CentOS 7.1 only):

yum install -y epel-release

Sample of answer-file here : http://bderzhavets.blogspot.com/2015/...


answered 2015-06-22 08:58:50 -0500 by trown

Did you intentionally enable the Ironic driver, or did packstack set Nova up that way? In any case, for a packstack install we do not want the Ironic driver enabled. If this happens by default, that is a bug.


Comments

I didn't enable Ironic.

JJJ ( 2016-03-09 06:32:17 -0500 )

answered 2017-06-28 22:31:24 -0500 by thiyag

In my case, packstack allocated very minimal storage to the hypervisor. I verified that extra storage was added and linked to a directory under /home/, but then couldn't spin up instances (even after deleting the old ones). I had file read permission errors, yet the files were also non-existent. I finally found that it was because of SELinux. Disabling SELinux (which was acceptable in my lab) allowed me to use my instance. Posting this here in case someone has this issue.
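If disabling SELinux is acceptable in your environment too, the persistent change is SELINUX=disabled in /etc/selinux/config (setenforce 0 makes only the current boot permissive). Sketched here on a scratch copy (selinux-config.sample is illustrative) so nothing real is modified:

```shell
# Rehearse the SELinux config change on a scratch copy of /etc/selinux/config.
cat > selinux-config.sample <<'EOF'
SELINUX=enforcing
SELINUXTYPE=targeted
EOF

# Disable SELinux persistently (takes effect after a reboot on a real system).
sed -i 's|^SELINUX=.*|SELINUX=disabled|' selinux-config.sample

grep '^SELINUX=' selinux-config.sample
```

Prefer setenforce 0 (permissive mode) first while diagnosing, since it leaves SELinux logging denials without blocking anything.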

Related reading:

https://bugzilla.redhat.com/show_bug.cgi?id=1402561
https://access.redhat.com/articles/1323213
Disabling SELinux: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Security-Enhanced_Linux/sect-Security-Enhanced_Linux-Enabling_and_Disabling_SELinux-Disabling_SELinux.html

