
binding:VIF_Type : binding_failed

asked 2015-11-18 19:20:03 -0600 by shahram, updated 2015-11-19 12:24:01 -0600

I have a multi-node OpenStack installation. It is a highly available setup: multiple controller nodes and multiple Neutron network nodes using VRRP (L3HA), with Open vSwitch and the ML2 plug-in. All my Neutron agents and Nova endpoints appear to be up and in order. Launching an instance keeps failing on my compute node (I have shut down all compute nodes except one), and I can't find what causes the error, a port binding failure. I have nova-compute in debug mode and have captured the logs; I am including only the section around the instance launch time. I am hoping someone here can point me in the right direction. I have been struggling with this for over a week and have read and checked all related postings, but none of their suggested solutions were applicable to me.
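For context, my HA routing is configured more or less along these lines in neutron.conf on the controllers (a rough sketch from memory, not a verbatim excerpt of my files; the agent counts are assumptions about my setup):

    [DEFAULT]
    # VRRP-based HA routers (L3HA)
    l3_ha = True
    # each router is scheduled onto several L3 agents, one master and the rest standby
    max_l3_agents_per_router = 3
    min_l3_agents_per_router = 2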

My schedulers show no errors. My conductor has an error just saying that the instance cannot be created because of an unexpected vif_type and is to be rescheduled. The neutron-openvswitch-agents on my network nodes have no errors at the time of the launch; they only report that security group members got updated. There are some INFO messages about ports not being present on br-int, but those appear an hour before I even launched the instance. The neutron-openvswitch-agent on the compute node shows similar information to the network nodes.

I have tried both kvm and qemu as the hypervisor, with the same result.

My nova-compute log in debug mode is attached below; I tried to highlight the errors. Everything seems to be in order until the port with the assigned DHCP IP fails to bind to the VIF and I end up with a port status of DOWN. It appears to me that the port is marked as DOWN because of the VIF binding failure, but I am new to OpenStack and don't understand the order of events very well.
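For what it's worth, this is roughly how I have been checking the port state from the controller (a sketch; the port ID is the one that shows up in the log excerpt below):

    # show the failed port; binding:vif_type comes back as binding_failed and status as DOWN
    neutron port-show 1f864632-fc6f-41e9-a951-971bd4c5820e

    # the agents all report as alive here
    neutron agent-list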

Thanks in Advance

Highlighted Error

    2015-11-18 15:16:05.391 4330 DEBUG keystoneclient.session [-] REQ: curl -g -i -X GET http://controller:9696/v2.0/ports.json?tenant_id=49a02b7bf2bb41d0996e5e2675b03d8b&device_id=9cecb6f2-cbfd-4c2c-9bbf-9fcf987cb197 -H "User-Agent: python-neutronclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}e2a397d58952037a7b95efe0dd7ea435d6cd6326" _http_log_request /usr/lib/python2.7/site-packages/keystoneclient/session.py:195
    2015-11-18 15:16:05.439 4330 DEBUG keystoneclient.session [-] RESP: [200] date: Wed, 18 Nov 2015 20:16:05 GMT connection: keep-alive content-type: application/json; charset=UTF-8 content-length: 706 x-openstack-request-id: req-7324c594-b3bc-42f6-9dea-d88fe8957241
    RESP BODY: {"ports": [{"status": "DOWN", "binding:host_id": "compute2.nueos", "allowed_address_pairs": [], "extra_dhcp_opts": [], "device_owner": "compute:nova", "binding:profile": {}, "fixed_ips": [{"subnet_id": "4e98d6cb-5154-4ebc-90d6-0a059e5e7306", "ip_address": "192.168.1.18"}], "id": "1f864632-fc6f-41e9-a951-971bd4c5820e", "security_groups": ["2c396c40-0a07-4112-9ffe-38699fd1a3fb"], "device_id": "9cecb6f2-cbfd-4c2c-9bbf-9fcf987cb197", "name": "", "admin_state_up": true, "network_id": "958d7c44-66ba-4ab0-8061-ede751a6b8f9", "tenant_id": "49a02b7bf2bb41d0996e5e2675b03d8b", "binding:vif_details": {}, "binding:vnic_type": "normal", "binding:vif_type": "binding_failed", "mac_address": "fa:16:3e:59:71:e1"}]}
    _http_log_response /usr/lib/python2.7/site-packages/keystoneclient/session.py:224

    [...]

    2015-11-18 15:16:07.375 4330 DEBUG nova.virt.libvirt.driver [req-35c9a4d6-3649-4eed-8f0d-7397c9357379 - - - - -] [instance: 9cecb6f2-cbfd-4c2c-9bbf-9fcf987cb197] Start _get_guest_xml network_info=[VIF({'profile': {}, 'ovs_interfaceid': None, 'preserve_on_delete': False, 'network ...

Comments

Hi shahram, a vif_type binding failure occurs when some network resource is not available to the instance. What does the nova-scheduler log say? Also, please paste the errors in a code tag so they are easier to read.

smcas (2015-11-18 22:41:15 -0600)

Thanks for the feedback. I put the logs in code tags in the post. The schedulers don't have any errors; the conductor has an error just saying the same thing as the nova-compute log.

shahram (2015-11-19 11:07:40 -0600)

Please also check your type_drivers and your Neutron setup.

smcas (2015-11-19 23:23:38 -0600)
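For reference, the ML2 options being referred to usually look something like the following in ml2_conf.ini (a sketch only; the driver lists and the tunnel IP are assumptions, not values taken from this deployment):

    [ml2]
    type_drivers = flat,vlan,gre,vxlan
    tenant_network_types = gre
    mechanism_drivers = openvswitch

    [ovs]
    # must be the address on the tunnel (data) network, not the management network
    local_ip = TUNNEL_INTERFACE_IP

    [agent]
    tunnel_types = gre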

Thanks smcas. I managed to launch an instance today. The problem was a bad tunnel network configuration! Your last comment was very helpful.

shahram (2015-11-20 12:47:07 -0600)

1 answer


answered 2015-11-20 12:57:10 -0600 by shahram

My problem was a bad tunnel network configuration. The ml2_gre_endpoints table had an entry for an old compute node (which now acts as a backup controller) registered with the wrong interface: its management IP instead of its tunnel IP. I deleted that record from the ml2_gre_endpoints table in the neutron database, checked with ovs-vsctl to make sure that br-tun looked in order for my setup, and launched an instance successfully. Thank you all for your help.
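Roughly the cleanup I did, as a sketch (the stale address and the database credentials are placeholders, and the table layout may differ between releases):

    # list the registered GRE tunnel endpoints; the stale node showed up with its management IP
    mysql -u root -p neutron -e "SELECT * FROM ml2_gre_endpoints;"

    # delete the stale endpoint record (placeholder address)
    mysql -u root -p neutron -e "DELETE FROM ml2_gre_endpoints WHERE ip_address = '192.0.2.99';"

    # then confirm br-tun and its gre-* ports look right on the compute and network nodes
    ovs-vsctl show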
