Launching a docker instance in OpenStack RDO Kilo fails with "No valid host was found"

asked 2015-07-06 10:38:43 -0500 by holger-king

updated 2015-07-16 08:58:18 -0500

Dear RDO community,

After having installed OpenStack Kilo successfully, we tried to integrate the Docker container solution following this recommendation: http://bderzhavets.blogspot.de/2015/05/running-nova-docker-on-openstack-rdo.html, on a RHEL 7.1 host with the latest RPM packages installed beforehand.

The docker image we try to launch is: "docker.io/fedora:latest".

Immediately after starting the docker image, we get the following error message in the dashboard when clicking directly on the instance link:

Message
    No valid host was found. There are not enough hosts available.
Code
    500
Details
    File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 671, in build_instances request_spec, filter_properties) File "/usr/lib/python2.7/site-packages/nova/scheduler/utils.py", line 337, in wrapped return func(*args, **kwargs) File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 52, in select_destinations context, request_spec, filter_properties) File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method return getattr(self.instance, __name)(*args, **kwargs) File "/usr/lib/python2.7/site-packages/nova/scheduler/client/query.py", line 34, in select_destinations context, request_spec, filter_properties) File "/usr/lib/python2.7/site-packages/nova/scheduler/rpcapi.py", line 120, in select_destinations request_spec=request_spec, filter_properties=filter_properties) File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 156, in call retry=self.retry) File "/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 90, in _send timeout=timeout, retry=retry) File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 350, in send retry=retry) File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 341, in _send raise result

The only step mentioned in the above HowTo that we did not execute is the build and installation of "python-pip-6.0.8-1.fc23.src.rpm", as it is not available on RHEL 7.1. But even without it, the build and installation of the nova-docker driver passed successfully:

running install
Requirement already satisfied (use --upgrade to upgrade): pbr>=0.6,!=0.7,<1.0 in /usr/lib/python2.7/site-packages
Requirement already satisfied (use --upgrade to upgrade): Babel>=1.3 in /usr/lib/python2.7/site-packages
Requirement already satisfied (use --upgrade to upgrade): six>=1.9.0 in /usr/lib/python2.7/site-packages
Requirement already satisfied (use --upgrade to upgrade): oslo.serialization>=1.4.0,<1.5.0 in /usr/lib/python2.7/site-packages
Requirement already satisfied (use --upgrade to upgrade): oslo.utils>=1.4.0,<1.5.0 in /usr/lib/python2.7/site-packages
Requirement already satisfied (use --upgrade to upgrade): oslo.config>=1.9.3,<1.10.0 in /usr/lib/python2.7/site-packages
Requirement already satisfied (use --upgrade to upgrade): oslo.concurrency>=1.8.0,<1.9.0 in /usr/lib/python2.7/site-packages
Requirement already satisfied (use --upgrade to upgrade): docker-py>=0.5.1 in /usr/lib/python2.7/site-packages
Requirement already satisfied (use --upgrade to upgrade): pytz>=0a in /usr/lib/python2.7/site-packages (from Babel>=1.3)
Requirement already satisfied (use --upgrade to upgrade): requests>=2.5.2 ...
(more)

2 answers


answered 2015-07-06 13:07:34 -0500 by dbaxps

updated 2015-07-09 08:10:03 -0500

UPDATE 07/08/2015

I've just installed RDO Kilo on CentOS 7.1 (1503-01):
[root@ServerCentOS71 ~]# rpm -qa |grep  python-pip
python-pip-7.1.0-1.el7.centos.noarch
[root@ServerCentOS71 ~]# rpm -qa |grep  python-pbr
python-pbr-0.10.8-1.el7.noarch
Build the NovaDocker driver exactly as advised in the HowTo.
However, to get access to nova-docker instances, I had to set SELinux to permissive mode.
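
A minimal way to do that (the sed edit assumes the stock /etc/selinux/config and makes the change persist across reboots):

[root@ServerCentOS71 ~]# setenforce 0        # permissive immediately, until next boot
[root@ServerCentOS71 ~]# sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
[root@ServerCentOS71 ~]# getenforce          # should now report "Permissive"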

Also

MariaDB [nova]> select service_id,hypervisor_type,hypervisor_version,host_ip,host from compute_nodes;
+------------+-----------------+--------------------+--------------+----------------------------+
| service_id | hypervisor_type | hypervisor_version | host_ip      | host                       |
+------------+-----------------+--------------------+--------------+----------------------------+
|          4 | docker          |               1000 | 192.168.1.87 | ServerCentOS71.localdomain |
+------------+-----------------+--------------------+--------------+----------------------------+
1 row in set (0.00 sec)

Sample below (larsks' test):

[root@ServerCentOS71 ~(keystone_admin)]# docker save larsks/thttpd | glance image-create --name larsks/thttpd --is-public true --container-format docker --disk-format raw
[root@ServerCentOS71 ~(keystone_admin)]# curl http://192.168.1.156
<!DOCTYPE html>
<html>
    <head>            
        <title>Your web server is working</title>
    <style type="text/css">
    body {
      text-align: center;
      font-family: Arial,"Helvetica Neue",Helvetica,sans-serif;
    }
    pre {
      border: thin solid black;
      padding: 1em;
      background-color: #c0c0c0;
    }

    #summary {
      max-width: 40em;
      margin: auto;
      text-align: left;
    }
    </style>
    </head>
    <body>
  <div id="header">
  <pre>
  ____                            _         _       _   _                 
 / ___|___  _ __   __ _ _ __ __ _| |_ _   _| | __ _| |_(_) ___  _ __  ___ 
| |   / _ \| '_ \ / _` | '__/ _` | __| | | | |/ _` | __| |/ _ \| '_ \/ __|
| |__| (_) | | | | (_| | | | (_| | |_| |_| | | (_| | |_| | (_) | | | \__ \
 \____\___/|_| |_|\__, |_|  \__,_|\__|\__,_|_|\__,_|\__|_|\___/|_| |_|___/
                  |___/                                                   
  </pre>

  <p><strong>You have a web server.</strong></p>
</div>
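
For context, the thttpd instance answering at 192.168.1.156 would have been booted from the freshly uploaded Glance image along these lines (the flavor and net-id are placeholders, not taken from the original test):

[root@ServerCentOS71 ~(keystone_admin)]# nova boot --image "larsks/thttpd" --flavor m1.tiny --nic net-id=<private-net-id> thttpd-instance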

Another working docker image:

[root@ServerCentOS71 ~(keystone_admin)]#  docker pull rastasheep/ubuntu-sshd:14.04
14.04: Pulling from docker.io/rastasheep/ubuntu-sshd
428b411c28f0: Pull complete 
435050075b3f: Pull complete 
9fd3c8c9af32: Pull complete 
6d4946999d4f: Pull complete 
d079229a6fa1: Pull complete 
247ac73af21b: Pull complete 
78d0a7584fa2: Pull complete 
d73950f2e0a2: Pull complete 
c097bd593519: Pull complete 
3b7a3c6b29f1: Pull complete 
65be69f712e1: Pull complete 
d62fdb66f1e1: Pull complete 
e27692fe739a: Already exists 
Digest: sha256:e0fa62587607c91246cf8e5819d4e420721313bc0b3fcfdfaf07d9c82cf85126
Status: Downloaded newer image for docker.io/rastasheep/ubuntu-sshd:14.04
[root@ServerCentOS71 ~(keystone_admin)]# docker save rastasheep/ubuntu-sshd:14.04 | glance image-create   --is-public=True   --container-format=docker --disk-format=raw --name rastasheep/ubuntu-sshd:14.04
[root@ServerCentOS71 ~(keystone_admin)]# ssh root@192.168.1.157
root@192.168.1.157's password: 
Last login: Wed Jul  8 19:08:27 2015 from 192.168.1.87
root@instance-00000006:~# ps -ef
UID        PID  PPID  C STIME TTY          TIME CMD
root         1     0  0 19:04 ?        00:00:00 /usr/sbin/sshd -D
root        92     1  0 19:30 ?        00:00:00 sshd: root@pts/0    
root        94    92  0 19:30 pts/0    00:00:00 -bash
root       105    94  0 19:30 pts/0    00:00:00 ps -ef

END UPDATE
This error is usually caused by skipping the following step on the Compute Node:

************************************************
Next, create the docker.filters file:
************************************************
$ vi /etc/nova/rootwrap.d/docker.filters

Insert the following lines:

# nova-rootwrap command filters for setting up network in the docker driver
# This file should be owned by (and only-writeable by) the root user
[Filters]
# nova/virt/docker/driver.py: 'ln', '-sf', '/var/run ...
(more)
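
Once the filter file is in place, restart the compute service so that nova-compute picks up the new rootwrap configuration (assuming the usual RDO systemd unit name):

$ systemctl restart openstack-nova-compute
$ systemctl status openstack-nova-compute    # verify the service came back up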

Comments

I do remember larsks' work (http://blog.oddbit.com/2015/01/17/run...) and also a comment to his post.

dbaxps ( 2015-07-06 17:01:41 -0500 )

Done by Martin Gerhard Loschwitz:

Also, please note that this kind of setup will impose the danger of oversubscription, IMHO; every hypervisor will appear twice with all its resources in nova-scheduler.
dbaxps ( 2015-07-06 17:04:18 -0500 )

The rootwrap file "docker.filters" had already been created in "/etc/nova/rootwrap.d" ;)

Enclosed are the requested package versions:

[root@rb-openstack-d ~]# rpm -qa | grep -i "python-pbr"
python-pbr-0.10.8-1.el7.noarch
[root@rb-openstack-d ~]# rpm -qa | grep -i "python-pip"
python-pip-1.5.6-5.el7.noarch
holger-king ( 2015-07-07 02:11:20 -0500 )

Regarding the two hypervisors being active at the same time: that is not the case, as one of them is marked as deleted:

| id | hv_type | hv_version | host_ip      | host| deleted |
|  4 | QEMU    |    1005003 | 10.116.64.10 | xxx |       1 |
|  4 | docker  |       1000 | 10.116.64.10 | xxx |       0 |
holger-king ( 2015-07-07 02:17:01 -0500 )

The virtualization driver set in "nova.conf" is:

compute_driver=novadocker.virt.docker.DockerDriver

Compared to the link you provided (https://stackoverflow.com/questions/20204430/can-the-docker-driver-on-openstack-coexist-with-libvirt-libvirtdriver), it differs from:

compute_driver=docker.DockerDriver
holger-king ( 2015-07-07 02:25:01 -0500 )
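
For reference, the full driver path from the HowTo belongs in the [DEFAULT] section of /etc/nova/nova.conf on the compute node; a minimal sketch of the relevant lines:

# /etc/nova/nova.conf (compute node) -- only the driver line shown
[DEFAULT]
compute_driver = novadocker.virt.docker.DockerDriver

The nova-compute service has to be restarted after changing this setting.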

answered 2015-07-06 12:00:11 -0500 by sgordon

Try setting debug=True in /etc/nova/nova.conf on the host running the scheduler. Then re-attempt the instance launch; this should give you additional debugging information in the log, including which filter is removing your host(s) from the potential targets for the scheduling operation.
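
A minimal sketch of that change (assuming the usual RDO systemd unit name and the default log location):

# /etc/nova/nova.conf on the scheduler host -- relevant line only
[DEFAULT]
debug = True

$ systemctl restart openstack-nova-scheduler
$ grep "returned" /var/log/nova/nova-scheduler.log    # shows how many hosts each filter kept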


Comments

The debug configuration directive had already been activated. So, we can provide the logs if you want. Alternatively, you can tell us which pattern we have to look for ;)

holger-king ( 2015-07-07 02:44:47 -0500 )

[req-... None] Starting with 3 host(s)
[req-... None] Filter RetryFilter returned 3 host(s)
[req-... None] Filter AvailabilityZoneFilter returned 3 host(s)
[req-... None] Filter RamFilter returned 2 host(s)
...
[req-... None] Filtered [(localhost.localdomain, localhost.localdomain) ram:3208 disk:71

sgordon ( 2015-07-10 13:41:08 -0500 )

We already found that log output in "nova-scheduler.log". You can find the excerpt here: https://drive.google.com/file/d/0B6btT4vDRIz_R0dURE1lMFJ6TDQ/view?usp=sharing

holger-king ( 2015-07-10 14:25:21 -0500 )

When taking a look at the log we saw:

Filter ComputeFilter returned 0 hosts

"nova service-list" shows: "nova-compute" service state "DOWN". Even a reboot did not bring it up. Just the https://www.rdoproject.org/forum/discussion/468/nova-compute-shows-down/p1 (following way) brought it up.

holger-king ( 2015-07-11 09:12:33 -0500 )

[root@rb-openstack-d ~(keystone_admin)]# nova service-list

| Id | Binary       | Host           | Zone | Status  | State | Updated_at                 | Disabled Reason |
| 4  | nova-compute | rb-openstack-d | nova | enabled | down  | 2015-07-10T22:18:54.000000 | -               |
holger-king ( 2015-07-13 06:47:59 -0500 )
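
For anyone hitting the same "down" state, two generic checks worth running before resorting to the workaround above (assuming the usual RDO unit name and default log path):

[root@rb-openstack-d ~]# systemctl status openstack-nova-compute
[root@rb-openstack-d ~]# grep -i error /var/log/nova/nova-compute.log | tail -n 20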
