No valid host was found

asked 2017-05-03 05:11:30 -0600

DarkKnight

I have installed OpenStack Ocata with a single controller node and two compute nodes. I had been able to launch instances successfully until yesterday, and I have not made any configuration changes since it last worked.

Whenever I launch an instance, this is the error I get (nova-conductor logs):

2017-05-03 15:29:58.447 8859 WARNING nova.scheduler.utils [req-64ce6566-c161-41e1-85f5-a0e3910a3ed5 - - - - -] Failed to compute_task_build_instances: No valid host was found. There are not enough hosts available.
Traceback (most recent call last):

  File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 218, in inner
    return func(*args, **kwargs)

  File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 98, in select_destinations
    dests = self.driver.select_destinations(ctxt, spec_obj)

  File "/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 79, in select_destinations
    raise exception.NoValidHost(reason=reason)

NoValidHost: No valid host was found. There are not enough hosts available.

2017-05-03 15:29:58.448 8859 WARNING nova.scheduler.utils [req-64ce6566-c161-41e1-85f5-a0e3910a3ed5 - - - - -] [instance: 2d4c1008-3157-45a9-9060-63c8a2034932] Setting instance to ERROR state.

Also, this is the message I see in the scheduler logs:

2017-05-03 15:29:57.678 8968 DEBUG oslo_messaging._drivers.amqpdriver [-] received message msg_id: 85601c9610834d0dac1902a941004e9f reply to reply_b63e7c34266e44da962c3cd6f62e73b2 __call__ /usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py:194
2017-05-03 15:29:58.213 8968 DEBUG nova.filters [req-64ce6566-c161-41e1-85f5-a0e3910a3ed5 - - - - -] Starting with 0 host(s) get_filtered_objects /usr/lib/python2.7/site-packages/nova/filters.py:70
2017-05-03 15:29:58.214 8968 INFO nova.filters [req-64ce6566-c161-41e1-85f5-a0e3910a3ed5 - - - - -] Filter RetryFilter returned 0 hosts
2017-05-03 15:29:58.215 8968 DEBUG nova.filters [req-64ce6566-c161-41e1-85f5-a0e3910a3ed5 - - - - -] Filtering removed all hosts for the request with instance ID '2d4c1008-3157-45a9-9060-63c8a2034932'. Filter results: [('RetryFilter', None)] get_filtered_objects /usr/lib/python2.7/site-packages/nova/filters.py:129
2017-05-03 15:29:58.215 8968 INFO nova.filters [req-64ce6566-c161-41e1-85f5-a0e3910a3ed5 - - - - -] Filtering removed all hosts for the request with instance ID '2d4c1008-3157-45a9-9060-63c8a2034932'. Filter results: ['RetryFilter: (start: 0, end: 0)']
2017-05-03 15:29:58.215 8968 DEBUG nova.scheduler.filter_scheduler [req-64ce6566-c161-41e1-85f5-a0e3910a3ed5 - - - - -] There are 0 hosts available but 1 instances requested to build. select_destinations /usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py:76
2017-05-03 15:29:58.216 8968 DEBUG oslo_messaging.rpc.server [req-64ce6566-c161-41e1-85f5-a0e3910a3ed5 - - - - -] Expected exception during message handling () _process_incoming /usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py:158
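The key line above is "Starting with 0 host(s)": the filter pipeline never received any compute nodes, so RetryFilter trivially returns 0. A small sketch (a hypothetical helper, not part of Nova) that pulls the host counts out of scheduler log lines like the ones pasted above, to show where hosts were eliminated:

```python
import re

# Match the two kinds of nova.filters log lines shown above.
START_RE = re.compile(r"Starting with (\d+) host\(s\)")
FILTER_RE = re.compile(r"Filter (\w+) returned (\d+) hosts")

def summarize(lines):
    """Return (stage, host_count) pairs in log order."""
    counts = []
    for line in lines:
        m = START_RE.search(line)
        if m:
            counts.append(("start", int(m.group(1))))
        m = FILTER_RE.search(line)
        if m:
            counts.append((m.group(1), int(m.group(2))))
    return counts

log = [
    "DEBUG nova.filters [...] Starting with 0 host(s) get_filtered_objects",
    "INFO nova.filters [...] Filter RetryFilter returned 0 hosts",
]
print(summarize(log))  # [('start', 0), ('RetryFilter', 0)]
```

A start count of 0 means no filter removed anything; the host list was already empty before filtering began, which points at the stage that feeds the scheduler rather than at the filters themselves.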

Below is some output that may be relevant:

[root@controller tmp]# nova service-list
/usr/lib/python2.7/site-packages/novaclient/client.py:278: UserWarning: The 'tenant_id' argument is deprecated in Ocata and its use may result in errors in future releases. As 'project_id' is provided, the 'tenant_id' argument will be ignored.
  warnings.warn(msg)
+----+------------------+-------------------------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host                          | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+-------------------------------+----------+---------+-------+----------------------------+-----------------+
| 9  | nova-consoleauth | controller.os.nfv.comviva.com | internal | enabled | up    | 2017-05-03T10:06:56.000000 | -               |
| 10 | nova-scheduler   | controller.os.nfv.comviva.com | internal | enabled | up    | 2017-05-03T10:06:56.000000 | -               |
| 11 | nova-conductor   | controller.os.nfv.comviva.com | internal | enabled | up    | 2017-05-03T10:06:48.000000 | -               |
| 14 | nova-compute     | compute3.os.nfv.comviva.com   | nova     | enabled | up    | 2017-05-03T10:06:55.000000 | -               |
| 15 | nova-compute     | compute2.os.nfv ...
(more)
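For what it's worth, the "State: up" column above is derived from the service heartbeat: a service counts as up when its last report (Updated_at) is within the configured service_down_time, which defaults to 60 seconds. A rough sketch of that check (this is not Nova's actual code, just the idea):

```python
from datetime import datetime, timedelta

def is_up(updated_at_str, now, down_time=timedelta(seconds=60)):
    """A service is 'up' if its last heartbeat is recent enough."""
    updated = datetime.strptime(updated_at_str, "%Y-%m-%dT%H:%M:%S.%f")
    return now - updated <= down_time

# compute3 reported 5 seconds before this reference time, so it is up.
now = datetime(2017, 5, 3, 10, 7, 0)
print(is_up("2017-05-03T10:06:55.000000", now))  # True
```

So a healthy service-list like the one above only proves the compute services are heartbeating; it does not prove the scheduler can actually see their resources.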

Comments

Ocata uses the placement service. Before the compute node list is passed to the filters, the placement service removes nodes that don't have enough memory, CPU and storage capacity. Perhaps this is your problem; check the scheduler log for that (or conductor log? I don't remember).
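One way to check this is to query the placement API's /resource_providers endpoint directly: each nova-compute node should appear as a resource provider, and an empty list would mean the computes are not reporting to placement at all. A sketch, assuming stdlib urllib; the endpoint URL and token are placeholders you would take from `openstack endpoint list` and `openstack token issue` in your own environment:

```python
import json
import urllib.request

# PLACEMENT_URL and the token are assumptions -- substitute values
# from your own deployment.
PLACEMENT_URL = "http://controller:8778"

def build_request(base_url, token):
    """Build an authenticated GET for the resource provider list."""
    req = urllib.request.Request(base_url + "/resource_providers")
    req.add_header("X-Auth-Token", token)
    req.add_header("Accept", "application/json")
    return req

if __name__ == "__main__":
    req = build_request(PLACEMENT_URL, "REPLACE_WITH_TOKEN")
    with urllib.request.urlopen(req) as resp:
        providers = json.load(resp)["resource_providers"]
        # Every compute node should be listed here; if it is empty,
        # nova-compute is not reporting to placement.
        for rp in providers:
            print(rp["name"])
```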

Bernd Bausch ( 2017-05-03 05:28:28 -0600 )

Hmmm, if the problem occurred yesterday, how can you spend a week searching for a solution?

Bernd Bausch ( 2017-05-03 05:30:17 -0600 )

@bernd: I have been facing this issue since last week and it is still unresolved. The output is from yesterday, which is what made you think otherwise. Also, I don't have any relevant error messages in the scheduler or conductor logs apart from the ones I have already pasted in the question.

DarkKnight ( 2017-05-04 00:58:05 -0600 )

Could you post the output of nova hypervisor-show <COMPUTE> for both your compute nodes?

eblock ( 2017-05-04 02:55:27 -0600 )

What was the solution for this issue? I'm hitting the same problem on my newly installed Ocata setup.

Harsh 5GNR ( 2017-08-25 14:52:19 -0600 )