NovaException: Unexpected vif_type=binding_failed

Hello,

I'm getting the error "NovaException: Unexpected vif_type=binding_failed" when I launch an instance with only the external network, or with both the external and an internal (tenant) network. If the instance has only an internal (tenant) network, it boots fine.

How can I find out what the problem is?
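
In case it is useful: as far as I understand, the binding result ends up on the Neutron port of the instance, so it can be inspected directly. Something along these lines (PORT_ID is just a placeholder for the port of the failed instance):

    # On the controller: list ports and pick the one attached to the failed instance
    neutron port-list

    # Show the binding details of that port; with this error I expect
    # binding:vif_type to read "binding_failed"
    neutron port-show PORT_ID | grep binding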

Thanks.

/var/log/nova/nova-conductor.log (on the controller+network node):

    2016-01-29 12:56:59.453 1732 INFO oslo.messaging._drivers.impl_rabbit [-] Connected to AMQP server on CONTROLLER_IP:5672
    2016-01-29 12:57:30.583 1733 INFO oslo.messaging._drivers.impl_rabbit [req-033ec793-e8c6-4783-bbde-920552217e40 ] Connecting to AMQP server on CONTROLLER_IP:5672
    2016-01-29 12:57:30.606 1733 INFO oslo.messaging._drivers.impl_rabbit [req-033ec793-e8c6-4783-bbde-920552217e40 ] Connected to AMQP server on CONTROLLER_IP:5672
    2016-01-29 12:57:48.801 1733 ERROR nova.scheduler.utils [req-033ec793-e8c6-4783-bbde-920552217e40 None] [instance: 96113058-afdf-4d14-a746-b71ddfb45802] Error from last host: compute2 (node compute2): [u'Traceback (most recent call last):\n', u'  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2033, in _do_build_and_run_instance\n    filter_properties)\n', u'  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2164, in _build_and_run_instance\n    instance_uuid=instance.uuid, reason=six.text_type(e))\n', u'RescheduledException: Build of instance 96113058-afdf-4d14-a746-b71ddfb45802 was re-scheduled: Unexpected vif_type=binding_failed\n']
    2016-01-29 12:57:48.868 1733 WARNING nova.scheduler.driver [req-033ec793-e8c6-4783-bbde-920552217e40 None] [instance: 96113058-afdf-4d14-a746-b71ddfb45802] NoValidHost exception with message: 'No valid host was found.'
    2016-01-29 12:57:48.868 1733 WARNING nova.scheduler.driver [req-033ec793-e8c6-4783-bbde-920552217e40 None] [instance: 96113058-afdf-4d14-a746-b71ddfb45802] Setting instance to ERROR state.
    2016-01-29 12:59:18.895 1732 INFO oslo.messaging._drivers.impl_rabbit [req-22d6a697-ee03-4fed-9796-7a81ca4b3f07 ] Connecting to AMQP server on CONTROLLER_IP:5672
    2016-01-29 12:59:18.921 1732 INFO oslo.messaging._drivers.impl_rabbit [req-22d6a697-ee03-4fed-9796-7a81ca4b3f07 ] Connected to AMQP server on CONTROLLER_IP:5672
    2016-01-29 12:59:26.897 1733 ERROR nova.scheduler.utils [req-22d6a697-ee03-4fed-9796-7a81ca4b3f07 None] [instance: 2fabe1af-19c9-43ab-a0c7-c40b5d5a08c6] Error from last host: compute2 (node compute2): [u'Traceback (most recent call last):\n', u'  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2033, in _do_build_and_run_instance\n    filter_properties)\n', u'  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2164, in _build_and_run_instance\n    instance_uuid=instance.uuid, reason=six.text_type(e))\n', u'RescheduledException: Build of instance 2fabe1af-19c9-43ab-a0c7-c40b5d5a08c6 was re-scheduled: Unexpected vif_type=binding_failed\n']
    2016-01-29 12:59:26.962 1733 WARNING nova.scheduler.driver [req-22d6a697-ee03-4fed-9796-7a81ca4b3f07 None] [instance: 2fabe1af-19c9-43ab-a0c7-c40b5d5a08c6] NoValidHost exception with message: 'No valid host was found.'
    2016-01-29 12:59:26.964 1733 WARNING nova.scheduler.driver [req-22d6a697-ee03-4fed-9796-7a81ca4b3f07 None] [instance: 2fabe1af-19c9-43ab-a0c7-c40b5d5a08c6] Setting instance to ERROR state.

/var/log/nova/nova-compute.log (on the compute node):

    2016-01-29 12:59:25.911 982 WARNING nova.virt.disk.vfs.guestfs [-] Failed to close augeas aug_close: do_aug_close: you must call 'aug-init' first to initialize Augeas
    2016-01-29 12:59:26.043 982 ERROR nova.compute.manager [-] [instance: 2fabe1af-19c9-43ab-a0c7-c40b5d5a08c6] Instance failed to spawn
    2016-01-29 12:59:26.043 982 TRACE nova.compute.manager [instance: 2fabe1af-19c9-43ab-a0c7-c40b5d5a08c6] Traceback (most recent call last):
    2016-01-29 12:59:26.043 982 TRACE nova.compute.manager [instance: 2fabe1af-19c9-43ab-a0c7-c40b5d5a08c6]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2246, in _build_resources
    2016-01-29 12:59:26.043 982 TRACE nova.compute.manager [instance: 2fabe1af-19c9-43ab-a0c7-c40b5d5a08c6]     yield resources
    2016-01-29 12:59:26.043 982 TRACE nova.compute.manager [instance: 2fabe1af-19c9-43ab-a0c7-c40b5d5a08c6]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2116, in _build_and_run_instance
    2016-01-29 12:59:26.043 982 TRACE nova.compute.manager [instance: 2fabe1af-19c9-43ab-a0c7-c40b5d5a08c6]     block_device_info=block_device_info)
    2016-01-29 12:59:26.043 982 TRACE nova.compute.manager [instance: 2fabe1af-19c9-43ab-a0c7-c40b5d5a08c6]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2620, in spawn
    2016-01-29 12:59:26.043 982 TRACE nova.compute.manager [instance: 2fabe1af-19c9-43ab-a0c7-c40b5d5a08c6]     write_to_disk=True)
    2016-01-29 12:59:26.043 982 TRACE nova.compute.manager [instance: 2fabe1af-19c9-43ab-a0c7-c40b5d5a08c6]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4159, in _get_guest_xml
    2016-01-29 12:59:26.043 982 TRACE nova.compute.manager [instance: 2fabe1af-19c9-43ab-a0c7-c40b5d5a08c6]     context)
    2016-01-29 12:59:26.043 982 TRACE nova.compute.manager [instance: 2fabe1af-19c9-43ab-a0c7-c40b5d5a08c6]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 3937, in _get_guest_config
    2016-01-29 12:59:26.043 982 TRACE nova.compute.manager [instance: 2fabe1af-19c9-43ab-a0c7-c40b5d5a08c6]     flavor, CONF.libvirt.virt_type)
    2016-01-29 12:59:26.043 982 TRACE nova.compute.manager [instance: 2fabe1af-19c9-43ab-a0c7-c40b5d5a08c6]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/vif.py", line 352, in get_config
    2016-01-29 12:59:26.043 982 TRACE nova.compute.manager [instance: 2fabe1af-19c9-43ab-a0c7-c40b5d5a08c6]     _("Unexpected vif_type=%s") % vif_type)
    2016-01-29 12:59:26.043 982 TRACE nova.compute.manager [instance: 2fabe1af-19c9-43ab-a0c7-c40b5d5a08c6] NovaException: Unexpected vif_type=binding_failed
    2016-01-29 12:59:26.043 982 TRACE nova.compute.manager [instance: 2fabe1af-19c9-43ab-a0c7-c40b5d5a08c6]
    2016-01-29 12:59:26.051 982 WARNING nova.virt.libvirt.driver [-] [instance: 2fabe1af-19c9-43ab-a0c7-c40b5d5a08c6] During wait destroy, instance disappeared.
    2016-01-29 13:01:27.107 982 WARNING nova.compute.manager [-] Bandwidth usage not supported by hypervisor.
    2016-01-29 13:03:05.298 982 WARNING nova.compute.manager [-] While synchronizing instance power states, found 2 instances in the database and 0 instances on the hypervisor.
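
As far as I understand, the trace above is only the symptom on the compute side; the decision to bind (or not) is made by the Neutron server, so I assume the real reason is logged there. A rough check, assuming the default RDO log locations:

    # On the controller: is the Open vSwitch agent on compute2 listed and reported as alive?
    neutron agent-list

    # On the controller: the ML2 plugin logs the reason when a binding fails
    grep "Failed to bind port" /var/log/neutron/server.log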

/etc/neutron/plugins/ml2/ml2_conf.ini (on the network node):

[ml2]
type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_flat]
flat_networks = external
[ml2_type_vlan]
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[ml2_type_vxlan]
[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[ovs]
local_ip = 192.168.65.176
enable_tunneling = True
bridge_mappings = external:br-ex
[agent]
tunnel_types = gre
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
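
For what it's worth, whenever I touch this file I restart the agent and double-check which config files the running process was actually started with; roughly (service name as packaged by RDO on CentOS 7, so this is an assumption):

    # On the network node: restart the OVS agent so it rereads the ML2/OVS settings
    systemctl restart neutron-openvswitch-agent

    # Confirm the running agent was started with the intended --config-file arguments
    ps -ef | grep [n]eutron-openvswitch-agent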

/etc/neutron/plugins/ml2/ml2_conf.ini (on the compute node):

[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[ml2_type_vxlan]
[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[ovs]
local_ip = 192.168.65.178
enable_tunneling = True
[agent]
tunnel_types = gre
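
The same kind of check on the compute node, again assuming the RDO service names; after the restart the agent should re-register with the Neutron server and the binding should be retried on the next launch attempt:

    # On compute2: restart the OVS agent and nova-compute, then check their state
    systemctl restart neutron-openvswitch-agent openstack-nova-compute
    systemctl status neutron-openvswitch-agent openstack-nova-compute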

ls -lha /etc/neutron (on the controller+network node and the compute nodes):

[root@controller openvswitch]# ls -lha /etc/neutron/
total 104K
drwxr-xr-x.  3 root root    4,0K ene 29 10:23 .
drwxr-xr-x. 93 root root    8,0K ene 29 12:50 ..
-rw-r-----.  1 root neutron 3,6K ene 27 12:09 dhcp_agent.ini
-rw-r-----.  1 root neutron  109 abr 10  2015 fwaas_driver.ini
-rw-r-----.  1 root neutron 4,1K ene 27 12:09 l3_agent.ini
-rw-r-----.  1 root neutron 1,6K abr 10  2015 lbaas_agent.ini
-rw-r-----.  1 root neutron 2,3K ene 27 12:09 metadata_agent.ini
-rw-r-----.  1 root neutron  24K ene 27 12:09 neutron.conf
-rw-r-----.  1 root neutron  23K may 27  2015 neutron.conf.bak
lrwxrwxrwx.  1 root root      37 ene 27 12:02 plugin.ini -> /etc/neutron/plugins/ml2/ml2_conf.ini
drwxr-xr-x.  4 root root      34 ene 27 12:09 plugins
-rw-r-----.  1 root neutron 5,9K abr 10  2015 policy.json
-rw-r--r--.  1 root root      78 may 27  2015 release
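
The listing above is from the controller; I assume the compute node needs the same plugin.ini symlink so that the agent reads the ML2 config shown earlier. That can be verified with:

    # On compute2: plugin.ini should resolve to the ml2_conf.ini being edited
    ls -l /etc/neutron/plugin.ini
    readlink -f /etc/neutron/plugin.ini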

ovs-vsctl show (on the network node):

[root@controller openvswitch]# ovs-vsctl show
313f08c3-7c3d-4f3b-8341-b50ec5c1be31
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port "tap96f7a4b0-db"
            tag: 1
            Interface "tap96f7a4b0-db"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port "eth2"
            Interface "eth2"
    Bridge br-tun
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "gre-c0a841b1"
            Interface "gre-c0a841b1"
                type: gre
                options: {df_default="true", in_key=flow, local_ip="192.168.65.176", out_key=flow, remote_ip="192.168.65.177"}
        Port "gre-c0a841b2"
            Interface "gre-c0a841b2"
                type: gre
                options: {df_default="true", in_key=flow, local_ip="192.168.65.176", out_key=flow, remote_ip="192.168.65.178"}
        Port br-tun
            Interface br-tun
                type: internal
    ovs_version: "2.3.1"
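
For comparison (output not included here), I would expect the equivalent ovs-vsctl show on compute2 to contain br-int and br-tun, with a GRE port whose remote_ip is the network node's data address:

    # On compute2: br-int and br-tun should exist, and br-tun should carry a
    # gre-... port with remote_ip="192.168.65.176"
    ovs-vsctl show
    ovs-vsctl list-ports br-tun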

Remember that my scenario is a controller+network server with 3 NICs (management, data and external; MGMT and EXT share the same physical network) and 2 compute nodes with 2 NICs (management and data), all running CentOS 7 and OpenStack Juno.
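
Since the GRE tunnels run over the data NICs, a basic reachability check between the local_ip addresses from the configs above is probably worth recording here too:

    # From the network node (192.168.65.176) towards compute2 (192.168.65.178)
    ping -c 3 192.168.65.178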