
VirtualInterfaceCreateException: Virtual Interface creation failed

asked 2014-04-09 22:40:42 -0500 by august, updated 2014-04-09 22:47:22 -0500

Hello, I've followed the Ubuntu Icehouse guide using 14.04 beta on VirtualBox. I've made it through the networking section without issues: I can ping my external gateway and everything seems OK. But when I try to launch an instance, it fails with errors about timing out waiting for the "vif plugging callback." In the code I can see where this exception is thrown, next to a comment that reads "# We never heard from Neutron." I'm curious how I can test what it's trying to do, to see what might be preventing us from hearing back from Neutron. All nodes have passed all the tests in the document.

I can't for the life of me figure out what I've got wrong to cause this. If anyone has any ideas, it would be greatly appreciated.

August

error from compute log:

2014-04-10 09:16:39.796 3372 WARNING nova.virt.libvirt.driver [req-bd6b9994-6d14-46a6-bc8d-d5e357761035 f90878d3b5054a948dba93e9fad7a864 93341443727b4a38ab2d38a7fcbfab97] Timeout waiting for vif plugging callback for instance 2f406c15-d686-44af-bc63-056b86365e6e
2014-04-10 09:16:40.504 3372 INFO nova.virt.libvirt.driver [req-bd6b9994-6d14-46a6-bc8d-d5e357761035 f90878d3b5054a948dba93e9fad7a864 93341443727b4a38ab2d38a7fcbfab97] [instance: 2f406c15-d686-44af-bc63-056b86365e6e] Deleting instance files /var/lib/nova/instances/2f406c15-d686-44af-bc63-056b86365e6e
2014-04-10 09:16:40.506 3372 INFO nova.virt.libvirt.driver [req-bd6b9994-6d14-46a6-bc8d-d5e357761035 f90878d3b5054a948dba93e9fad7a864 93341443727b4a38ab2d38a7fcbfab97] [instance: 2f406c15-d686-44af-bc63-056b86365e6e] Deletion of /var/lib/nova/instances/2f406c15-d686-44af-bc63-056b86365e6e complete
2014-04-10 09:16:40.578 3372 ERROR nova.compute.manager [req-bd6b9994-6d14-46a6-bc8d-d5e357761035 f90878d3b5054a948dba93e9fad7a864 93341443727b4a38ab2d38a7fcbfab97] [instance: 2f406c15-d686-44af-bc63-056b86365e6e] Instance failed to spawn
2014-04-10 09:16:40.578 3372 TRACE nova.compute.manager [instance: 2f406c15-d686-44af-bc63-056b86365e6e] Traceback (most recent call last):
2014-04-10 09:16:40.578 3372 TRACE nova.compute.manager [instance: 2f406c15-d686-44af-bc63-056b86365e6e]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1718, in _spawn
2014-04-10 09:16:40.578 3372 TRACE nova.compute.manager [instance: 2f406c15-d686-44af-bc63-056b86365e6e]     block_device_info)
2014-04-10 09:16:40.578 3372 TRACE nova.compute.manager [instance: 2f406c15-d686-44af-bc63-056b86365e6e]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2251, in spawn
2014-04-10 09:16:40.578 3372 TRACE nova.compute.manager [instance: 2f406c15-d686-44af-bc63-056b86365e6e]     block_device_info)
2014-04-10 09:16:40.578 3372 TRACE nova.compute.manager [instance: 2f406c15-d686-44af-bc63-056b86365e6e]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3654, in _create_domain_and_network
2014-04-10 09:16:40.578 3372 TRACE nova.compute.manager [instance: 2f406c15-d686-44af-bc63-056b86365e6e]     raise exception.VirtualInterfaceCreateException()
2014-04-10 09:16:40.578 3372 TRACE nova.compute.manager [instance: 2f406c15-d686-44af-bc63-056b86365e6e] VirtualInterfaceCreateException: Virtual Interface creation failed
2014-04-10 09:16:40.578 3372 TRACE nova.compute.manager [instance: 2f406c15-d686-44af-bc63-056b86365e6e]
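One way to start testing what Nova is waiting on: pull the instance UUID out of the timeout warning, then grep the neutron logs on the network/controller node for that same UUID to see whether the network-vif-plugged event was ever sent back. A sketch with stand-in paths; /tmp/nova-compute.log below is a sample file reproducing the warning above, not the real log location:

```shell
# Stand-in for the compute node's nova-compute log; the line reproduces
# the timeout warning from above.
cat > /tmp/nova-compute.log <<'EOF'
2014-04-10 09:16:39.796 3372 WARNING nova.virt.libvirt.driver [req-...] Timeout waiting for vif plugging callback for instance 2f406c15-d686-44af-bc63-056b86365e6e
EOF

# Pull the instance UUID out of the timeout warning:
uuid=$(grep -o '[0-9a-f]\{8\}-[0-9a-f]\{4\}-[0-9a-f]\{4\}-[0-9a-f]\{4\}-[0-9a-f]\{12\}' /tmp/nova-compute.log | head -1)
echo "instance: $uuid"

# On the real deployment, grep the neutron logs on the network/controller
# node for the same UUID to see whether the event was ever sent:
#   grep "$uuid" /var/log/neutron/server.log
```

If the UUID never appears in neutron's server.log, the notification was never attempted; if it appears alongside a "Failed to notify nova" error, the neutron-to-nova connection settings are the place to look.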

Comments

More details:

root@compute1:~# ovs-vsctl show
33343126-0ae0-4da1-8be0-a37ea8bcde29
    Bridge br-int
        Port "qvo3ae4658c-79"
            tag: 1
            Interface "qvo3ae4658c-79"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-tun
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    ovs_version: "2.0.1"
august ( 2014-04-09 22:58:43 -0500 )

Same issue here. On the network node, it seems that neutron wants to talk back to nova but connects to localhost instead of the controller IP:

2014-04-10 11:47:53.135 5406 INFO urllib3.connectionpool [-] Starting new HTTP connection (3): 127.0.0.1
2014-04-10 11:47:53.136 5406 ERROR neutron.notifiers.nova [-] Failed to notify nova on events: [{'status': 'completed', 'tag': u'054f9a4c-8065-4d88-a341-ade248cd829d', 'name': 'network-vif-plugged', 'server_uuid': u'5480d6a0-93df-49a2-b7f2-0e446e394479'}]
hrzbrg ( 2014-04-10 07:15:39 -0500 )

Same problem here on RDO/Fedora 20/Icehouse. Started seeing this just yesterday; the same automated deployment ran without any issues two days back.

arunuke ( 2014-04-10 15:43:18 -0500 )

Interesting. On my compute node, ovs-vswitchd.log registers this when I launch an instance (I have vif_plugging_timeout set to 30 to speed up the failure):

2014-04-11T00:11:46.521Z|00022|bridge|INFO|bridge br-int: added interface qvo31b8edb6-19 on port 2
2014-04-11T00:11:58.774Z|00023|ofproto|INFO|br-int: 1 flow_mods 10 s ago (1 deletes)
2014-04-11T00:11:58.774Z|00024|ofproto|INFO|br-tun: 1 flow_mods 10 s ago (1 adds)
2014-04-11T00:12:58.843Z|00025|ofproto|INFO|br-tun: 2 flow_mods 40 s ago (2 deletes)

But nothing appears on the network node.

august ( 2014-04-10 19:14:21 -0500 )

More logs: when I launch an instance, the controller immediately reports this in the neutron server.log:

2014-04-11 10:17:46.091 989 ERROR neutron.notifiers.nova [req-5dd4dd7f-dd36-43c3-be50-2a5297cdeffc None] Failed to notify nova on events: [{'status': 'completed', 'tag': u'bc7bb0da-77a3-4c45-a77d-dcaeb9166171', 'name': 'network-vif-plugged', 'server_uuid': u'751cecf9-5b72-482a-85b1-f55c1f8023f5'}]
2014-04-11 10:17:46.091 989 TRACE neutron.notifiers.nova Traceback (most recent call last):
2014-04-11 10:17:46.091 989 TRACE neutron.notifiers.nova   File "/usr/lib/python2.7/dist-packages/neutron/notifiers/nova.py", line 186, in send_events
2014-04-11 10:17:46.091 989 TRACE neutron.notifiers.nova     batched_events)
2014-04-11 10:17:46.091 989 TRACE neutron.notifiers.nova   File "/usr/lib/python2.7/dist-packages/novaclient/v1_1/contrib/server_external_events.py", line 39, in create
2014-04-11 10:17:46.091 989 TRACE neutron.notifiers.nova     return_raw=True)
2014-04-11 10:17:46.091 989 TRACE neutron.notifiers.nova   File "/usr/lib/python2.7/dist-packages/novaclient/base.py ...
(more)
august ( 2014-04-10 19:20:24 -0500 )

10 answers


answered 2014-05-21 08:17:15 -0500 by imfaisal87

I solved a similar situation by adding the VIF plugging entries to nova.conf under the [DEFAULT] section:

vif_plugging_is_fatal = false

vif_plugging_timeout = 0

Hope this helps
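For reference, the change can be staged and verified like this. A sketch: /tmp/nova.conf stands in for /etc/nova/nova.conf, and because the stand-in [DEFAULT] section is the whole file, a plain append lands the options in the right section; on a real compute node, restart nova-compute afterwards for the change to take effect.

```shell
# Stand-in for /etc/nova/nova.conf:
cat > /tmp/nova.conf <<'EOF'
[DEFAULT]
EOF

# Add the two options under [DEFAULT]:
printf '%s\n' 'vif_plugging_is_fatal = false' 'vif_plugging_timeout = 0' >> /tmp/nova.conf

# Confirm they landed:
grep '^vif_plugging' /tmp/nova.conf
# Then, on the compute node: service nova-compute restart
```

Note that these options only suppress the failure: nova stops treating a missing callback as fatal, it does not make the callback arrive.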


Comments

Note: add these on the compute node.

cshuo ( 2014-11-28 07:49:08 -0500 )

Great, thanks; this helped us too.

Moe ( 2015-02-16 00:03:40 -0500 )

This helped, thanks!

muralir ( 2015-03-05 02:28:44 -0500 )

This doesn't seem like a reasonable solution: you simply ignore all virtual networking errors and make nova think everything's fine when it isn't. The root cause is probably somewhere in neutron.conf: https://bugzilla.redhat.com/show_bug.cgi?id=1087529

rverchikov ( 2015-06-29 14:34:07 -0500 )

Restart the services. On the compute node: service nova-compute restart; service neutron-linuxbridge-agent restart. On the controller: service neutron-l3-agent restart; service neutron-dhcp-agent restart; service neutron-metadata-agent restart; service neutron-linuxbridge-agent restart.

mahesh-sawaiker ( 2015-11-03 03:04:06 -0500 )

answered 2014-04-10 19:43:59 -0500 by august

Not an answer per se, but this bug looks to be tracking the issue:

https://bugs.launchpad.net/neutron/+b...


Comments

My neutron server was misconfigured for talking with nova for events. It showed up when I followed this thread. Thanks for the post.

kfox ( 2015-02-16 16:56:41 -0500 )

Hello, how did you correct the error? I am facing the same one and strongly suspect it has to do with the neutron-nova config.

Sanjay_1987 ( 2015-08-17 03:27:19 -0500 )

answered 2016-05-01 17:34:31 -0500

I was getting the same errors on a 3-node setup created with packstack. I had configured one of the nodes as controller, with compute and network services running on all 3 nodes. Every other instance creation would log "Timeout waiting for vif plugging callback for instance" in nova-compute.log and then throw the VirtualInterfaceCreateException: Virtual Interface creation failed exception immediately after.

Setting vif_plugging_is_fatal = false seemed to work around the problem, but then I was getting random non-VM network errors, like router ports not being properly attached to private networks after creation (status DOWN).

It turned out I was missing the nova interaction settings in /etc/neutron/neutron.conf on the 2 non-controller nodes:

notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://<controller-ip>:8774/v2
nova_region_name = RegionOne
nova_admin_username = nova
nova_admin_tenant_name = services
nova_admin_password = <your-password>
nova_admin_auth_url = http://<controller-ip>:5000/v2.0
send_events_interval = 2

Packstack had configured this only on the first (controller) node. Adding the settings to the other 2 nodes solved the issue immediately.
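A quick scripted check that a node actually carries these settings can look like this. A sketch: /tmp/neutron.conf is a stand-in sample with a placeholder IP (192.0.2.10), and the key list below is illustrative rather than exhaustive; on a real deployment, point it at /etc/neutron/neutron.conf on every node, not just the controller.

```shell
# Stand-in sample of a correctly configured neutron.conf:
cat > /tmp/neutron.conf <<'EOF'
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://192.0.2.10:8774/v2
nova_region_name = RegionOne
EOF

# Report any nova-notification key that is absent:
missing=0
for key in notify_nova_on_port_status_changes notify_nova_on_port_data_changes \
           nova_url nova_region_name; do
    grep -q "^${key}[[:space:]]*=" /tmp/neutron.conf || { echo "missing: $key"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all notification keys present"
```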


answered 2014-10-28 16:06:53 -0500

I ran into the same issue on a 3-node (controller, network, compute) Icehouse install on physical servers. In my case, the neutron error logs displayed:

ERROR neutron.notifiers.nova Failed to notify nova on events: [{'status': 'completed', 'name': 'network-vif-unplugged', 'server_uuid': u'f4a..'}]

This was due to a configuration error in the neutron.conf file on my controller node: my entry for the "nova_url" field was "http://10.0.0.35:8774/v2/v2" instead of "http://10.0.0.35:8774/v2". Took me a couple of days over the weekend to figure this out.
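This kind of typo is easy to catch with a one-off sanity check. A sketch: /tmp/neutron.conf is a stand-in reproducing the bad value above, and the regex assumes the default nova port 8774 with a /v2 suffix.

```shell
# Stand-in config reproducing the duplicated /v2 suffix:
echo 'nova_url = http://10.0.0.35:8774/v2/v2' > /tmp/neutron.conf

# Flag anything that is not exactly :8774/v2 (with optional trailing slash):
if grep -Eq '^nova_url.*:8774/v2/?$' /tmp/neutron.conf; then
    echo "nova_url looks sane"
else
    echo "nova_url is malformed:"
    grep '^nova_url' /tmp/neutron.conf
fi
```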


answered 2014-05-21 16:13:49 -0500

Hi, I have a similar problem. I'm using devstack on Ubuntu 12.04. What should I do? Thanks.

Message: Virtual Interface creation failed
Code: 500
Details:
  File "/opt/stack/nova/nova/compute/manager.py", line 301, in decorated_function
    return function(self, context, *args, **kwargs)
  File "/opt/stack/nova/nova/compute/manager.py", line 2120, in run_instance
    do_run_instance()
  File "/opt/stack/nova/nova/openstack/common/lockutils.py", line 249, in inner
    return f(*args, **kwargs)
  File "/opt/stack/nova/nova/compute/manager.py", line 2119, in do_run_instance
    legacy_bdm_in_spec)
  File "/opt/stack/nova/nova/compute/manager.py", line 1237, in _run_instance
    notify("error", fault=e)  # notify that build failed
  File "/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__
    six.reraise(self.type_, self.value, self.tb)
  File "/opt/stack/nova/nova/compute/manager.py", line 1221, in _run_instance
    instance, image_meta, legacy_bdm_in_spec)
  File "/opt/stack/nova/nova/compute/manager.py", line 1364, in _build_instance
    network_info.wait(do_raise=False)
  File "/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__
    six.reraise(self.type_, self.value, self.tb)
  File "/opt/stack/nova/nova/compute/manager.py", line 1341, in _build_instance
    set_access_ip=set_access_ip)
  File "/opt/stack/nova/nova/compute/manager.py", line 401, in decorated_function
    return function(self, context, *args, **kwargs)
  File "/opt/stack/nova/nova/compute/manager.py", line 1758, in _spawn
    LOG.exception(_('Instance failed to spawn'), instance=instance)
  File "/opt/stack/nova/nova/openstack/common/excutils.py", line 82, in __exit__
    six.reraise(self.type_, self.value, self.tb)
  File "/opt/stack/nova/nova/compute/manager.py", line 1755, in _spawn
    block_device_info)
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 2287, in spawn
    block_device_info)
  File "/opt/stack/nova/nova/virt/libvirt/driver.py", line 3699, in _create_domain_and_network
    raise exception.VirtualInterfaceCreateException()


answered 2014-05-09 03:00:26 -0500 by selte

Hello

I had the same problem ("Timeout waiting for vif plugging callback for instance"); it can be caused by an invalid Neutron configuration. For me, the solution was to fix a typo in /etc/neutron/plugins/ml2/ml2_conf.ini:

I replaced: firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIpTablesFirewallDriver

With: firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

But your problem can be caused by anything that prevents Neutron from working properly. To find the error, you could go to /var/log/neutron on the misbehaving node, and do:

tail -f *

And wait for errors to appear. My error was something like "Module OVSHybridIpTablesFirewallDriver doesn't exist", because of the typo.
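The error-hunting step can be narrowed with a grep filter. A sketch against a stand-in log file; on the misbehaving node you would instead run the tail pipeline in the comment below against the real /var/log/neutron directory.

```shell
# Stand-in for a neutron agent log; the ERROR line mirrors the typo
# failure described above.
cat > /tmp/neutron-agent.log <<'EOF'
2014-05-09 02:55:01 INFO neutron.agent ovs agent started
2014-05-09 02:55:02 ERROR neutron.agent Module OVSHybridIpTablesFirewallDriver doesn't exist
EOF

# On the real node: tail -f /var/log/neutron/* | grep -E 'ERROR|TRACE'
grep -E 'ERROR|TRACE' /tmp/neutron-agent.log
```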


answered 2014-04-10 13:00:13 -0500 by hrzbrg, updated 2014-04-10 13:00:57 -0500

I think this was the problem: https://bugs.launchpad.net/neutron/+bug/1301449

(and I had neutron-server running on controller and network...stupid me :) )


Comments

I've only got it on the controller, so that's not my issue. I have played with vif_plugging_timeout = 10 and vif_plugging_is_fatal = False; they do allow the instance to boot, but the ports are not plugged, so it's useless. I noticed the Ubuntu 14.04 Icehouse packages have ml2 hardcoded into a lot of stuff; maybe that's my problem?

august ( 2014-04-10 19:10:45 -0500 )

answered 2014-05-01 06:15:16 -0500 by vpaul

I'm experiencing the same issue here, but even weirder: sometimes I can spawn several VMs without any issues, sometimes I can spawn only one and only a services restart helps. The behaviour is unpredictable. I'm using the openvswitch plugin with VLANs. I've tried both linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver and the deprecated libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver.

The workaround mentioned above (vif_plugging_timeout = 10, vif_plugging_is_fatal = False) seems to help, and networking is OK: there is an interface attached with an assigned IP.

May  1 05:01:32 cirros kern.info kernel: [    0.958597] e1000: Intel(R) PRO/1000 Network Driver - version 7.3.21-k8-NAPI
May  1 05:01:32 cirros kern.info kernel: [    0.958600] e1000: Copyright (c) 1999-2006 Intel Corporation.

But all of this looks like a bug. I'm on Wheezy, though.


answered 2014-04-21 04:09:56 -0500 by T N S

I too got this error with Icehouse on Ubuntu 14.04:

2014-04-19 19:35:19.470 20217 INFO nova.virt.libvirt.driver [req-b061faf1-32bc-42d1-a1f4-822fcc9aaca5 3e7ba37aa6cf4472b823c8057a29c217 93a7bf9a8cb64c0da27486e722ce07c1] [instance: bf2d05ff-1de4-459b-8b18-3fc2a8ed44e6] Deleting instance files /var/lib/nova/instances/bf2d05ff-1de4-459b-8b18-3fc2a8ed44e6
2014-04-19 19:35:19.472 20217 INFO nova.virt.libvirt.driver [req-b061faf1-32bc-42d1-a1f4-822fcc9aaca5 3e7ba37aa6cf4472b823c8057a29c217 93a7bf9a8cb64c0da27486e722ce07c1] [instance: bf2d05ff-1de4-459b-8b18-3fc2a8ed44e6] Deletion of /var/lib/nova/instances/bf2d05ff-1de4-459b-8b18-3fc2a8ed44e6 complete
2014-04-19 19:35:19.623 20217 ERROR nova.compute.manager [req-b061faf1-32bc-42d1-a1f4-822fcc9aaca5 3e7ba37aa6cf4472b823c8057a29c217 93a7bf9a8cb64c0da27486e722ce07c1] [instance: bf2d05ff-1de4-459b-8b18-3fc2a8ed44e6] Instance failed to spawn
2014-04-19 19:35:19.623 20217 TRACE nova.compute.manager [instance: bf2d05ff-1de4-459b-8b18-3fc2a8ed44e6] Traceback (most recent call last):
2014-04-19 19:35:19.623 20217 TRACE nova.compute.manager [instance: bf2d05ff-1de4-459b-8b18-3fc2a8ed44e6]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1720, in _spawn
2014-04-19 19:35:19.623 20217 TRACE nova.compute.manager [instance: bf2d05ff-1de4-459b-8b18-3fc2a8ed44e6]     block_device_info)
2014-04-19 19:35:19.623 20217 TRACE nova.compute.manager [instance: bf2d05ff-1de4-459b-8b18-3fc2a8ed44e6]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2253, in spawn
2014-04-19 19:35:19.623 20217 TRACE nova.compute.manager [instance: bf2d05ff-1de4-459b-8b18-3fc2a8ed44e6]     block_device_info)
2014-04-19 19:35:19.623 20217 TRACE nova.compute.manager [instance: bf2d05ff-1de4-459b-8b18-3fc2a8ed44e6]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 3663, in _create_domain_and_network
2014-04-19 19:35:19.623 20217 TRACE nova.compute.manager [instance: bf2d05ff-1de4-459b-8b18-3fc2a8ed44e6]     raise exception.VirtualInterfaceCreateException()
2014-04-19 19:35:19.623 20217 TRACE nova.compute.manager [instance: bf2d05ff-1de4-459b-8b18-3fc2a8ed44e6] VirtualInterfaceCreateException: Virtual Interface creation failed
2014-04-19 19:35:19.623 20217 TRACE nova.compute.manager [instance: bf2d05ff-1de4-459b-8b18-3fc2a8ed44e6]

Are there any workarounds for this issue?


answered 2015-11-19 04:38:37 -0500 by sidhant13

I got this error in the nova logs of the compute node while creating a new instance on OpenStack Liberty. It happened because neutron was unable to notify nova that a virtual interface had been created. I made sure I had the following in the [nova] section of neutron.conf on the controller node, so that neutron knows where the nova service is and can notify it of networking events:

[nova]
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = nova
password = nova

Comments

Thank you, it solved my issue!

surabhi ( 2017-05-04 01:19:07 -0500 )


Stats

Asked: 2014-04-09 22:40:42 -0500

Seen: 17,041 times

Last updated: Nov 19 '15