
Quantum: When using OVS, virtual interfaces silently disappear or lose their IPs

asked 2012-11-15 22:06:12 -0500 by martin-loschwitz

Hello all,

I'm running an OpenStack setup here with Quantum + OVS. As I run all services on one node, I need to set use_namespaces=true in my configuration file. With that, I can set up Quantum and Open vSwitch nicely, but as soon as I fire up a VM, all the interfaces created by Quantum and Open vSwitch silently disappear:

35: qbr54d531d7-f3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 8e:7d:9b:e9:6d:87 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::3871:9ff:fef7:16f7/64 scope link
       valid_lft forever preferred_lft forever
36: qvo54d531d7-f3: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether ea:7f:8f:ca:f7:6c brd ff:ff:ff:ff:ff:ff
    inet6 fe80::e87f:8fff:feca:f76c/64 scope link
       valid_lft forever preferred_lft forever
37: qvb54d531d7-f3: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master qbr54d531d7-f3 state UP qlen 1000
    link/ether 8e:7d:9b:e9:6d:87 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::8c7d:9bff:fee9:6d87/64 scope link
       valid_lft forever preferred_lft forever
38: qbrc004d01b-cf: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether a2:08:7e:7a:5e:47 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::c072:a6ff:fea7:b450/64 scope link
       valid_lft forever preferred_lft forever
39: qvoc004d01b-cf: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether f6:28:36:fc:8a:c0 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::f428:36ff:fefc:8ac0/64 scope link
       valid_lft forever preferred_lft forever
40: qvbc004d01b-cf: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master qbrc004d01b-cf state UP qlen 1000
    link/ether a2:08:7e:7a:5e:47 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::a008:7eff:fe7a:5e47/64 scope link
       valid_lft forever preferred_lft forever
41: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master qbr54d531d7-f3 state UNKNOWN qlen 500
    link/ether fe:16:3e:c4:70:a8 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc16:3eff:fec4:70a8/64 scope link
       valid_lft forever preferred_lft forever
42: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master qbrc004d01b-cf state UNKNOWN qlen 500
    link/ether fe:16:3e:8e:52:dc brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc16:3eff:fe8e:52dc/64 scope link
       valid_lft forever preferred_lft forever

With use_namespaces=False in my configuration, this doesn't happen.
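
For reference, the setting in question is in my dhcp and l3 agent configuration files; the paths below are the usual Folsom-era Ubuntu locations and may differ on other distributions:

# /etc/quantum/dhcp_agent.ini and /etc/quantum/l3_agent.ini (assumed paths)
[DEFAULT]
# run dhcp and router ports inside per-network / per-router namespaces;
# this is what the single-node setup described above requires
use_namespaces = True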

My configuration mainly follows Emilien Macchi's howto; it's using GRE.

I hope someone can help me on this one!

Best regards,
Martin G. Loschwitz


16 answers


answered 2012-11-16 00:11:17 -0500 by gongysh

What hypervisor are you using? Xen?


answered 2012-11-15 22:07:59 -0500 by martin-loschwitz

Also: the interfaces may disappear silently, but Quantum does log a lot of messages like this one:

Nov 15 23:07:09|01029|netdev_linux|INFO|ioctl(SIOCGIFHWADDR) on tap1935fdda-34 device failed: No such device


answered 2012-11-16 00:51:52 -0500 by danwent

by "disappear", do you mean they are moved from the root namespace to a different namespace? If so, that is expected. Which actually devices in your example above disappear?


answered 2012-11-16 01:01:38 -0500 by martin-loschwitz

What I mean by "disappear" is that they simply are not present in "ip a" anymore, and it looks like openvswitch can't handle them properly:

root@alice:/var/run/openvswitch# grep "No such device" /var/log/openvswitch/ovs-vswitchd.log | wc -l
3420

A lot of entries like this one:

Nov 16 01:57:27|01154|netdev|WARN|failed to get flags for network device tap1935fdda-34: No such device

And numerous other device names.

Are you on Freenode by any chance? My nickname there is madkiss, so if you prefer to communicate that way, we could do that and I will post a summary in here if we manage to solve the problem.


answered 2012-11-16 00:42:37 -0500 by martin-loschwitz

Dan, that is correct. Posting a "before" and "after" is somewhat difficult, as the devices disappear the same second they are created.
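
One way to catch them anyway is to watch link events live while booting the instance (a generic sketch, not specific to this setup):

# in one terminal: print every link add/delete event as it happens
ip monitor link | tee /tmp/link-events.log
# in another terminal: boot the VM, then search the captured events afterwards
grep -E 'tap|qvo|qvb|qbr|vnet' /tmp/link-events.log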


answered 2012-11-15 22:13:01 -0500 by martin-loschwitz

Here is what "ip netns | xargs -i ip netns exec {} ip a" returns -- there, the devices do have their IPs ...

root@alice:/etc/init# ip netns | xargs -i ip netns exec {} ip a
8: tap1935fdda-34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:46:72:69 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.2/24 brd 10.0.0.255 scope global tap1935fdda-34
    inet6 fe80::f816:3eff:fe46:7269/64 scope link
       valid_lft forever preferred_lft forever
16: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
15: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
47: qr-0dafbc88-a4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:f8:97:36 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.1/24 brd 10.0.0.255 scope global qr-0dafbc88-a4
    inet6 fe80::f816:3eff:fef8:9736/64 scope link
       valid_lft forever preferred_lft forever
48: qg-94813a40-e8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:e6:cd:3d brd ff:ff:ff:ff:ff:ff
    inet 192.168.144.100/24 brd 192.168.144.255 scope global qg-94813a40-e8
    inet6 fe80::f816:3eff:fee6:cd3d/64 scope link
       valid_lft forever preferred_lft forever

What's also interesting is that the ports of my br-int bridge are all down:

root@alice:/etc/init# sudo ovs-ofctl show br-int
OFPT_FEATURES_REPLY (xid=0x1): ver:0x1, dpid:0000966d76133843
n_tables:255, n_buffers:256
features: capabilities:0xc7, actions:0xfff
 1(tap1935fdda-34): addr:88:00:00:00:00:00
     config:     PORT_DOWN
     state:      LINK_DOWN
 3(tap37ab6888-42): addr:76:de:d5:87:27:ce
     config:     PORT_DOWN
     state:      LINK_DOWN
 14(patch-tun): addr:1e:41:08:d4:e5:ea
     config:     0
     state:      0
 15(qr-0dafbc88-a4): addr:89:00:00:00:00:00
     config:     PORT_DOWN
     state:      LINK_DOWN
 LOCAL(br-int): addr:96:6d:76:13:38:43
     config:     PORT_DOWN
     state:      LINK_DOWN
OFPT_GET_CONFIG_REPLY (xid=0x3): frags=normal miss_send_len=0
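
Since those ports show up as PORT_DOWN/LINK_DOWN, it might help to compare what OVS has configured with what the kernel actually exposes (a generic sketch):

# ports recorded for br-int in the OVS database
ovs-vsctl list-ports br-int
# ports the kernel datapath really has attached
ovs-dpctl show
# link devices currently visible in the root namespace
ip -o link show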


answered 2012-11-15 23:54:04 -0500 by danwent

Can you show a "before" and "after" of the devices disappearing?

The list of devices in your initial post contains only the devices associated with VMs, so those would be in the root namespace.

The list of devices in update #2 shows the devices within the router namespace, is that correct?


answered 2012-11-16 00:08:10 -0500 by gongysh

I think in #2, devices 8 and 16 are in the dhcp namespace and the others are in the router namespace.

In #1, everything is in the root namespace.

In a multi-node deployment, all of the interfaces in #1 would be on the compute node and all of the interfaces in #2 would be on the network node.
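
Put differently, on a combined node the split can be inspected directly; this is a sketch of the usual layout, with <network-id> and <router-id> as placeholders for the actual UUIDs:

# root namespace: per-VM plumbing (qbrXXX, qvoXXX, qvbXXX, vnetN)
ip a
# dhcp namespace, created by quantum-dhcp-agent: holds the tapXXX port
ip netns exec qdhcp-<network-id> ip a
# router namespace, created by quantum-l3-agent: holds qr-XXX and qg-XXX
ip netns exec qrouter-<router-id> ip a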


answered 2012-11-16 00:14:32 -0500 by martin-loschwitz

This is a one-node setup only.



answered 2012-11-16 00:14:48 -0500 by martin-loschwitz

KVM.




Stats

Asked: 2012-11-15 22:06:12 -0500

Seen: 65 times

Last updated: Nov 23 '12