
Cannot launch first instance on Kilo after a fresh install

asked 2015-06-02 09:50:59 -0500 by Gonzo

Hi,

I am new to OpenStack. I followed the Ubuntu Kilo installation guide step by step without any problem: http://docs.openstack.org/kilo/instal...

But when I launch an instance (from Horizon or from the controller console), it always fails with this error:

root@controller:~# cat /var/log/nova/nova-conductor.log
2015-06-02 16:13:35.466 1897 ERROR nova.scheduler.utils [req-2c2b68a3-0f11-4c4a-b14a-0a548fbb340f bf61e9719d39419aaef9173f316c96b6 66822a07a62c41198b3208d3055c4e72 - - -] [instance: d85de798-7be9-4f18-8bca-19daa722020a] Error from last host: node1 (node node1.mydomain.org): [u'Traceback (most recent call last):\n', u'  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2219, in _do_build_and_run_instance\n    filter_properties)\n', u'  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2362, in _build_and_run_instance\n    instance_uuid=instance.uuid, reason=six.text_type(e))\n', u'RescheduledException: Build of instance d85de798-7be9-4f18-8bca-19daa722020a was re-scheduled: Unexpected vif_type=binding_failed\n']
2015-06-02 16:13:35.606 1897 WARNING nova.scheduler.utils [req-2c2b68a3-0f11-4c4a-b14a-0a548fbb340f bf61e9719d39419aaef9173f316c96b6 66822a07a62c41198b3208d3055c4e72 - - -] Failed to compute_task_build_instances: No valid host was found. There are not enough hosts available.
Traceback (most recent call last):

  File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/server.py", line 142, in inner
    return func(*args, **kwargs)

  File "/usr/lib/python2.7/dist-packages/nova/scheduler/manager.py", line 86, in select_destinations
    filter_properties)

  File "/usr/lib/python2.7/dist-packages/nova/scheduler/filter_scheduler.py", line 80, in select_destinations
    raise exception.NoValidHost(reason=reason)

NoValidHost: No valid host was found. There are not enough hosts available.

2015-06-02 16:13:35.616 1897 WARNING nova.scheduler.utils [req-2c2b68a3-0f11-4c4a-b14a-0a548fbb340f bf61e9719d39419aaef9173f316c96b6 66822a07a62c41198b3208d3055c4e72 - - -] [instance: d85de798-7be9-4f18-8bca-19daa722020a] Setting instance to ERROR state

On node1 I can see this:

2015-06-02 16:13:34.271 7982 ERROR nova.compute.manager [req-fc155802-542e-46d0-95b2-c845b1f403e9 - - - - -] [instance: d85de798-7be9-4f18-8bca-19daa722020a] Instance failed to spawn
2015-06-02 16:13:34.271 7982 TRACE nova.compute.manager [instance: d85de798-7be9-4f18-8bca-19daa722020a] Traceback (most recent call last):
2015-06-02 16:13:34.271 7982 TRACE nova.compute.manager [instance: d85de798-7be9-4f18-8bca-19daa722020a]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2442, in _build_resources
2015-06-02 16:13:34.271 7982 TRACE nova.compute.manager [instance: d85de798-7be9-4f18-8bca-19daa722020a]     yield resources
2015-06-02 16:13:34.271 7982 TRACE nova.compute.manager [instance: d85de798-7be9-4f18-8bca-19daa722020a]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2314, in _build_and_run_instance
2015-06-02 16:13:34.271 7982 TRACE nova.compute.manager [instance: d85de798-7be9-4f18-8bca-19daa722020a]     block_device_info=block_device_info)
2015-06-02 16:13:34.271 7982 TRACE nova.compute.manager [instance: d85de798-7be9-4f18-8bca-19daa722020a]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2351, in spawn
2015-06-02 16:13:34.271 7982 TRACE nova.compute.manager [instance: d85de798-7be9-4f18-8bca-19daa722020a]     write_to_disk=True)
2015-06-02 16:13:34.271 7982 TRACE nova.compute.manager [instance: d85de798-7be9-4f18-8bca-19daa722020a]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 4172, in _get_guest_xml
2015-06-02 16:13:34.271 7982 TRACE nova.compute.manager [instance: d85de798-7be9-4f18-8bca-19daa722020a]     context)
2015-06-02 16:13:34.271 7982 TRACE nova.compute.manager [instance: d85de798-7be9-4f18-8bca-19daa722020a]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 4043, in _get_guest_config
2015-06-02 16:13:34.271 7982 TRACE nova.compute.manager [instance: d85de798-7be9-4f18-8bca-19daa722020a]     flavor, virt_type)
2015-06-02 ...

Comments

I can see there is an error saying "Unexpected vif_type=binding_failed". This error is normally caused by a Neutron issue. You might need to check your ML2 config, and also try deleting br-int, adding br-int again, and then rebooting your server. This happened to me before, and those steps solved my issue. A rough sketch of those steps is below.
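
A rough sketch (run on the node whose br-int you recreate; the agent service name is assumed from the Ubuntu Kilo packages):

root@node1:~# ovs-vsctl del-br br-int    # remove the integration bridge
root@node1:~# ovs-vsctl add-br br-int    # recreate it empty
root@node1:~# service neutron-plugin-openvswitch-agent restart

The OVS agent re-creates its patch ports (patch-tun, int-br-ex) on br-int when it restarts.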

senyapsudah (2015-06-07 07:13:04 -0500)

4 answers


answered 2015-11-02 23:43:50 -0500

Hi everyone,

I have the same problem. How can I solve it? Thanks in advance.


answered 2015-06-09 09:13:53 -0500 by Gonzo

Thank you for your responses. If the "Unexpected vif_type=binding_failed" error really is caused by a Neutron issue, I can't find where it is.

If I list the Neutron agents and Nova services on the controller, everything looks fine:

root@controller:~# neutron agent-list
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host       | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
| 22532bd3-506a-4b06-b8d8-d793723d2b19 | Open vSwitch agent | node1      | :-)   | True           | neutron-openvswitch-agent |
| 513698cd-7325-4e49-a82f-34c77987bc5e | DHCP agent         | controller | :-)   | True           | neutron-dhcp-agent        |
| 5ab02c59-bddc-47e4-92a5-8eb02e90227f | L3 agent           | controller | :-)   | True           | neutron-l3-agent          |
| a824fd0c-394d-4ea2-9190-c2fc935c4b67 | Open vSwitch agent | controller | :-)   | True           | neutron-openvswitch-agent |
| e3d33150-1bb9-4446-8dda-ea232839a4be | Metadata agent     | controller | :-)   | True           | neutron-metadata-agent    |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+

root@controller:~# nova service-list
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host       | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-scheduler   | controller | internal | enabled | up    | 2015-06-09T14:12:54.000000 | -               |
| 2  | nova-consoleauth | controller | internal | enabled | up    | 2015-06-09T14:12:53.000000 | -               |
| 3  | nova-conductor   | controller | internal | enabled | up    | 2015-06-09T14:12:54.000000 | -               |
| 5  | nova-cert        | controller | internal | enabled | up    | 2015-06-09T14:12:52.000000 | -               |
| 6  | nova-compute     | node1      | nova     | enabled | up    | 2015-06-09T14:13:01.000000 | -               |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+

Nova and Open vSwitch are up and running on the compute node.

And this is the "ovs-vsctl show" output from my controller and compute node:

root@controller:~# ovs-vsctl show
d6f7af86-699a-4837-8d28-40d5ea484c1d
    Bridge br-int
        fail_mode: secure
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qr-520ac3c9-10"
            Interface "qr-520ac3c9-10"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port "tap05c7f43b-00"
            Interface "tap05c7f43b-00"
                type: internal
        Port "qg-9cded0be-65"
            Interface "qg-9cded0be-65"
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-ex
        Port "eth2"
            Interface "eth2"
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port br-ex
            Interface br-ex
                type: internal
    ovs_version: "2.3.0"

root@node1:/var/log/neutron# ovs-vsctl show
65ea4be8-de83-41f5-94fc-b471873ee35a
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    Bridge br-tun
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-ex
        Port "eth1"
            Interface "eth1"
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "eth2"
            Interface "eth2"
        Port br-ex
            Interface br-ex
                type: internal
    ovs_version: "2.3.0"

Any help will be much appreciated.


Comments

Wait, I have found that my default "/etc/neutron/plugins/ml2/ml2_conf.ini" template did not contain the [ovs] section, so I added it at the end, like this:

[ovs]
local_ip = 192.168.2.100
bridge_mappings = external:br-ex

[agent]
tunnel_types = gre

Is that right?
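
For comparison, the compute-node ml2_conf.ini in the Kilo install guide looks roughly like this (the local_ip placeholder is kept from the guide; bridge_mappings belongs only on the node that actually hosts br-ex):

[ml2]
type_drivers = flat,vlan,gre,vxlan
tenant_network_types = gre
mechanism_drivers = openvswitch

[ml2_type_gre]
tunnel_id_ranges = 1:1000

[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

[ovs]
local_ip = INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS

[agent]
tunnel_types = gre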

Gonzo (2015-06-09 09:28:18 -0500)

answered 2015-06-06 22:27:38 -0500 by Kui.Li

What is the status of the Nova services ("nova service-list") and of the Neutron agents ("neutron agent-list")? If nova-compute and/or neutron-plugin-openvswitch-agent is down on the compute node, creating instances will fail. A quick check is sketched below.
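
A quick check on the compute node (a sketch; Ubuntu Kilo service names assumed):

# check status first, restart whatever is down
root@node1:~# service nova-compute status
root@node1:~# service neutron-plugin-openvswitch-agent status
root@node1:~# service nova-compute restart
root@node1:~# service neutron-plugin-openvswitch-agent restart

Afterwards, "nova service-list" and "neutron agent-list" on the controller should show both as up.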


answered 2015-06-06 17:33:07 -0500 by capsali

What does "ovs-vsctl show" output? It might be a br-int or br-tun problem. Do you have any ports under br-tun on both the controller and the compute node? See the sketch below for a quick way to check.
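
A quick check of the tunnel side (a sketch, assuming the stock OVS agent, which names GRE tunnel ports gre-*):

root@controller:~# ovs-vsctl list-ports br-tun
root@node1:~# ovs-vsctl list-ports br-tun

With working GRE tunnels you should see gre-* ports next to patch-int on both nodes; if only patch-int shows up, the tunnels were never established.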
