martin-loschwitz's profile - activity

2015-07-28 08:29:02 -0500 received badge  Famous Question (source)
2015-07-28 08:29:02 -0500 received badge  Popular Question (source)
2015-07-28 08:29:02 -0500 received badge  Notable Question (source)
2013-02-09 18:40:20 -0500 answered a question New VMs are automatically associated with all available networks

Thanks yong sheng gong, that solved my question.
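For anyone who stumbles over the same behaviour: networks can be pinned explicitly on the command line via nova's --nic parameter. A minimal sketch (the image, flavor, and net-id UUID are placeholders):

# look up the UUID of the network the VM should be attached to
quantum net-list
# attach the new VM to that network only, instead of all available ones
nova boot --image cirros-0.3.0 --flavor m1.tiny \
    --nic net-id=<uuid-of-admin-net> testvm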

2013-01-29 20:19:32 -0500 asked a question New VMs are automatically associated with all available networks

Folks,

I am seeing a strange problem here. Whenever I boot up a virtual machine from the command line via "nova boot", it immediately appears to be associated with all available networks:

+--------------------------------------+------------------+--------+-----------------------------------------+
| ID                                   | Name             | Status | Networks                                |
+--------------------------------------+------------------+--------+-----------------------------------------+
| 6207a2a1-ca04-494d-bdbb-b4e58e4f0155 | fluxcapacitor    | ACTIVE | admin-net=10.5.5.3; ext_net=10.20.7.132 |
| 5b920849-c24f-4eb0-98b1-ed31b33be5b8 | superfrobnicator | ACTIVE | admin-net=10.5.5.4                      |
+--------------------------------------+------------------+--------+-----------------------------------------+

If I boot a VM from the dashboard, I can select which networks I want it to be associated with (and it works). In the above paste, the first VM was booted via the command line and the second via the dashboard with the appropriate networks selected.

ext_net is the network where my floating IPs would be coming from; please note that the newly started VMs are not assigned proper floating IPs. It looks like Quantum creates a port for the VM but does not associate a floating IP with it.
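In case it helps with the diagnosis, this is roughly how I would expect a floating IP to be wired up by hand (the floating IP and port IDs are placeholders; the device_id is the instance UUID from the listing above):

# find the port quantum created for the instance
quantum port-list -- --device_id 6207a2a1-ca04-494d-bdbb-b4e58e4f0155
# allocate a floating IP from ext_net and bind it to that port
quantum floatingip-create ext_net
quantum floatingip-associate <floatingip-id> <port-id>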

Please help!

Best regards,
Martin

2012-12-07 07:54:06 -0500 answered a question OpenVSwitch flows missing on newly created setup

I can confirm that adding a network-type to the command line when creating the networks worked. I will convert this into a bug against the OpenStack documentation.
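For reference, a sketch of what the corrected net-create call looks like (the segmentation ID of 1 is just an example value):

quantum net-create --tenant_id $tenant_id some-net \
    --provider:network_type gre --provider:segmentation_id 1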

2012-12-06 22:49:35 -0500 answered a question OpenVSwitch flows missing on newly created setup

I used a modified version of Emilien Macchi's script, and as far as I can tell, the network in it gets created with

quantum net-create --tenant_id $tenant_id some-net

So in fact, I would expect it to be a local network. I will report back as soon as I can get my hands on the servers again (tomorrow) and have figured out whether this actually fixed the problem.

2012-12-06 21:48:45 -0500 answered a question OpenVSwitch flows missing on newly created setup

Yes, I am pretty sure that the quantum-openvswitch-agents are running in the second setup; I have restarted them on both hosts numerous times.

I have also compared "ovs-vsctl show" on both setups (the working and the broken one), and they both looked identical -- this is from the compute node:

Bridge br-int
    Port "vnet0"
        tag: 1
        Interface "vnet0"
    Port br-int
        Interface br-int
            type: internal
    Port "vnet1"
        tag: 2
        Interface "vnet1"
    Port patch-tun
        Interface patch-tun
            type: patch
            options: {peer=patch-int}
Bridge br-tun
    Port "gre-3"
        Interface "gre-3"
            type: gre
            options: {in_key=flow, out_key=flow, remote_ip="192.168.133.113"}
    Port patch-int
        Interface patch-int
            type: patch
            options: {peer=patch-tun}
    Port br-tun
        Interface br-tun
            type: internal
ovs_version: "1.4.0+build0"

and this is from the network node:

Bridge br-int
    Port "tap9885f7e6-c8"
        tag: 1
        Interface "tap9885f7e6-c8"
            type: internal
    Port patch-tun
        Interface patch-tun
            type: patch
            options: {peer=patch-int}
    Port "qr-5dd26f35-67"
        tag: 1
        Interface "qr-5dd26f35-67"
            type: internal
    Port br-int
        Interface br-int
            type: internal
Bridge br-tun
    Port "gre-2"
        Interface "gre-2"
            type: gre
            options: {in_key=flow, out_key=flow, remote_ip="192.168.133.112"}
    Port br-tun
        Interface br-tun
            type: internal
    Port patch-int
        Interface patch-int
            type: patch
            options: {peer=patch-tun}
Bridge br-ex
    Port br-ex
        Interface br-ex
            type: internal
    Port "qg-6bcaf016-b1"
        Interface "qg-6bcaf016-b1"
            type: internal
    Port "eth2"
        Interface "eth2"
ovs_version: "1.4.0+build0"

Where would the L2 agent get the information on how to create/modify the flows?
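In the meantime, one way to watch the agent at work would be to raise its log level and dump the flow tables while it runs; a rough sketch, assuming the usual Ubuntu file locations:

# set "debug = True" in the [DEFAULT] section of
# /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini (path is an assumption)
service quantum-plugin-openvswitch-agent restart
tail -f /var/log/quantum/openvswitch-agent.log &
# watch the flow tables being (re)populated
watch -n1 'ovs-ofctl dump-flows br-tun'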

2012-12-06 21:14:48 -0500 answered a question OpenVSwitch flows missing on newly created setup

Is this possibly caused by the fact that I did not specify "--provider:network_type gre --provider:segmentation_id <tunnel-id>" when creating the internal network?

2012-12-06 18:26:31 -0500 answered a question OpenVSwitch flows missing on newly created setup

This is what I see in the working setup:

root@charlie:~# ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=6410.167s, table=0, n_packets=549, n_bytes=126580, priority=1 actions=NORMAL

root@charlie:~# ovs-ofctl dump-flows br-tun
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=6401.687s, table=0, n_packets=113, n_bytes=36856, priority=3,tun_id=0x1,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=mod_vlan_vid:1,output:1
 cookie=0x0, duration=6402.01s, table=0, n_packets=218, n_bytes=44126, priority=4,in_port=1,dl_vlan=1 actions=set_tunnel:0x1,NORMAL
 cookie=0x0, duration=6401.38s, table=0, n_packets=214, n_bytes=46174, priority=3,tun_id=0x1,dl_dst=fa:16:3e:26:d7:1d actions=mod_vlan_vid:1,NORMAL
 cookie=0x0, duration=6400.522s, table=0, n_packets=0, n_bytes=0, priority=3,tun_id=0x1,dl_dst=fa:16:3e:bf:11:9c actions=mod_vlan_vid:1,NORMAL
 cookie=0x0, duration=6408.66s, table=0, n_packets=4, n_bytes=308, priority=1 actions=drop

On the broken setup, all I see is

root@charlie:~# ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=6410.167s, table=0, n_packets=549, n_bytes=126580, priority=1 actions=NORMAL

root@charlie:~# ovs-ofctl dump-flows br-tun
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=6408.66s, table=0, n_packets=4, n_bytes=308, priority=1 actions=drop

2012-12-06 18:21:09 -0500 asked a question OpenVSwitch flows missing on newly created setup

I am seeing a strange effect in an OpenStack setup here. I have a working installation where I can do an "ovs-ofctl dump-flows br-int" or "br-tun" and see a lot of defined flows.

On a different setup, I realized that my compute nodes can't talk to the network node. On the working setup I saw a lot of ovs-ofctl commands being executed by the quantum-plugin-openvswitch-agent, and these are completely missing from the setup where things are broken.

I wonder when these flows get created by the quantum-plugin-openvswitch-agent, and why they exist in the first setup but not in the second (both use exactly the same configuration, but the setups were created in a different order).

So what's wrong here?

2012-11-23 13:27:42 -0500 answered a question Quantum: When using OVS, virtual interfaces silently disappear or lose their IPs

Turns out I was just too stupid for this. The interfaces did not "disappear", they were just moved into different network namespaces, exactly the way it is supposed to happen with use_namespaces=true.
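For anyone hitting the same confusion: the interfaces (and their IPs) can be found again inside the namespaces:

# list the namespaces created by the dhcp and l3 agents
ip netns
# show the interfaces, with their IPs, inside every namespace
ip netns | xargs -i ip netns exec {} ip a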

2012-11-16 01:22:21 -0500 answered a question Quantum: When using OVS, virtual interfaces silently disappear or lose their IPs

Okay, so let's bring some order into this. Here is what I can see before I start anything related to Quantum (openvswitch is running, though):

root@alice:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet 169.254.169.254/32 scope link lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:10:ab:99:1c brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.111/24 brd 192.168.122.255 scope global eth0
    inet6 fe80::5054:10ff:feab:991c/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:10:cd:a9:74 brd ff:ff:ff:ff:ff:ff
    inet 192.168.133.111/24 brd 192.168.133.255 scope global eth1
    inet6 fe80::5054:10ff:fecd:a974/64 scope link
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:10:ef:a9:74 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::5054:10ff:feef:a974/64 scope link
       valid_lft forever preferred_lft forever
6: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 9e:67:7a:08:10:4d brd ff:ff:ff:ff:ff:ff
7: br-ex: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 6a:59:e1:58:45:40 brd ff:ff:ff:ff:ff:ff
12: br-tun: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 5e:6e:e0:5d:f7:41 brd ff:ff:ff:ff:ff:ff

And after starting Quantum, I see this:

root@alice:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet 169.254.169.254/32 scope link lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:10:ab:99:1c brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.111/24 brd 192.168.122.255 scope global eth0
    inet6 fe80::5054:10ff:feab:991c/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:10:cd:a9:74 brd ff:ff:ff:ff:ff:ff
    inet 192.168.133.111/24 brd 192.168.133.255 scope global eth1
    inet6 fe80::5054:10ff:fecd:a974/64 scope link
       valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast ... (more)

2012-11-16 01:08:29 -0500 answered a question Quantum: When using OVS, virtual interfaces silently disappear or lose their IPs

Here are some messages that look suspicious -- I see these when firing up the quantum OVS agent:

Nov 16 02:07:32 alice ovs-vsctl: 00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=2 add-port br-int patch-tun
Nov 16 02:07:32 alice ovs-vsctl: 00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=2 set Interface patch-tun type=patch
Nov 16 02:07:32 alice ovs-vswitchd: 00035|netdev_vport|ERR|patch-tun: patch type requires valid 'peer' argument
Nov 16 02:07:32 alice ovs-vsctl: 00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=2 set Interface patch-tun options:peer=patch-int
Nov 16 02:07:32 alice ovs-vsctl: 00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=2 add-port br-tun patch-int
Nov 16 02:07:32 alice ovs-vsctl: 00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=2 set Interface patch-int type=patch
Nov 16 02:07:32 alice ovs-vswitchd: 00056|netdev_vport|ERR|patch-int: patch type requires valid 'peer' argument
Nov 16 02:07:32 alice ovs-vsctl: 00001|vsctl|INFO|Called as /usr/bin/ovs-vsctl --timeout=2 set Interface patch-int options:peer=patch-tun

2012-11-16 01:03:49 -0500 answered a question Quantum: When using OVS, virtual interfaces silently disappear or lose their IPs

This might be helpful, too (ntp is unrelated, but it immediately starts listening on a new interface when one appears and stops doing so when the interface disappears):

Nov 15 22:58:39 alice ntpd[3148]: Listen normally on 25 qbr54d531d7-f3 fe80::3871:9ff:fef7:16f7 UDP 123
Nov 15 22:58:39 alice ntpd[3148]: Listen normally on 26 qbrc004d01b-cf fe80::c072:a6ff:fea7:b450 UDP 123
Nov 15 22:58:39 alice ntpd[3148]: Listen normally on 27 qvbc004d01b-cf fe80::a008:7eff:fe7a:5e47 UDP 123
Nov 15 22:58:39 alice ntpd[3148]: Listen normally on 28 qvb54d531d7-f3 fe80::8c7d:9bff:fee9:6d87 UDP 123
Nov 15 22:58:39 alice ntpd[3148]: Listen normally on 29 qvo54d531d7-f3 fe80::e87f:8fff:feca:f76c UDP 123
Nov 15 22:58:39 alice ntpd[3148]: Listen normally on 30 qvoc004d01b-cf fe80::f428:36ff:fefc:8ac0 UDP 123
Nov 15 22:58:39 alice ntpd[3148]: peers refreshed
Nov 15 22:58:39 alice ntpd[3148]: new interface(s) found: waking up resolver
Nov 15 22:58:51 alice ntpd[3148]: Listen normally on 31 vnet1 fe80::fc16:3eff:fe8e:52dc UDP 123
Nov 15 22:58:51 alice ntpd[3148]: Listen normally on 32 vnet0 fe80::fc16:3eff:fec4:70a8 UDP 123
Nov 15 22:58:51 alice ntpd[3148]: peers refreshed
Nov 15 22:58:51 alice ntpd[3148]: new interface(s) found: waking up resolver
Nov 15 23:09:44 alice ntpd[3148]: Deleting interface #32 vnet0, fe80::fc16:3eff:fec4:70a8#123, interface stats: received=0, sent=0, dropped=0, active_time=653 secs
Nov 15 23:09:44 alice ntpd[3148]: Deleting interface #31 vnet1, fe80::fc16:3eff:fe8e:52dc#123, interface stats: received=0, sent=0, dropped=0, active_time=653 secs
Nov 15 23:09:44 alice ntpd[3148]: peers refreshed
Nov 15 23:09:49 alice ntpd[3148]: Deleting interface #30 qvoc004d01b-cf, fe80::f428:36ff:fefc:8ac0#123, interface stats: received=0, sent=0, dropped=0, active_time=670 secs
Nov 15 23:09:49 alice ntpd[3148]: Deleting interface #29 qvo54d531d7-f3, fe80::e87f:8fff:feca:f76c#123, interface stats: received=0, sent=0, dropped=0, active_time=670 secs
Nov 15 23:09:49 alice ntpd[3148]: Deleting interface #28 qvb54d531d7-f3, fe80::8c7d:9bff:fee9:6d87#123, interface stats: received=0, sent=0, dropped=0, active_time=670 secs
Nov 15 23:09:49 alice ntpd[3148]: Deleting interface #27 qvbc004d01b-cf, fe80::a008:7eff:fe7a:5e47#123, interface stats: received=0, sent=0, dropped=0, active_time=670 secs
Nov 15 23:09:49 alice ntpd[3148]: Deleting interface #26 qbrc004d01b-cf, fe80::c072:a6ff:fea7:b450#123, interface stats: received=0, sent=0, dropped=0, active_time=670 secs
Nov 15 23:09:49 alice ntpd[3148]: Deleting interface #25 qbr54d531d7-f3, fe80::3871:9ff:fef7:16f7#123, interface stats: received=0, sent=0, dropped=0, active_time=670 secs

2012-11-16 01:01:38 -0500 answered a question Quantum: When using OVS, virtual interfaces silently disappear or lose their IPs

What I mean by "disappear" is that they are simply no longer present in "ip a", and it looks like Open vSwitch can't handle them properly:

root@alice:/var/run/openvswitch# grep "No such device" /var/log/openvswitch/ovs-vswitchd.log | wc -l
3420

A lot of entries like this one:

Nov 16 01:57:27|01154|netdev|WARN|failed to get flags for network device tap1935fdda-34: No such device

And numerous other device names.

Are you on Freenode by any chance? My nickname there is madkiss, so if you prefer to communicate that way, we could do that and I will post a summary in here if we manage to solve the problem.

2012-11-16 00:42:37 -0500 answered a question Quantum: When using OVS, virtual interfaces silently disappear or lose their IPs

Dan, that is correct. Posting before and after is somewhat difficult, as the devices disappear the same second they are created.

2012-11-16 00:14:48 -0500 answered a question Quantum: When using OVS, virtual interfaces silently disappear or lose their IPs

kvm

On 16.11.2012 01:11, "yong sheng gong" <question214317@answers.launchpad.net> wrote:

yong sheng gong requested more information: What hypervisor are u using? Xen?

2012-11-16 00:14:32 -0500 answered a question Quantum: When using OVS, virtual interfaces silently disappear or lose their IPs

This is a one-node setup only.

On 16.11.2012 01:11, "yong sheng gong" <question214317@answers.launchpad.net> wrote:


yong sheng gong proposed the following answer: I think in #2, devices 8 and 16 are in the dhcp namespace, the others in the router namespace.

in #1, all are in root namespace.

If we deploy in a multiple-node environment, all interfaces in #1 should be on the compute node and all interfaces in #2 should be on the network node.

2012-11-15 22:13:01 -0500 answered a question Quantum: When using OVS, virtual interfaces silently disappear or lose their IPs

Here is what "ip netns | xargs -i ip netns exec {} ip a" returns -- there, the devices do have their IPs ...

root@alice:/etc/init# ip netns | xargs -i ip netns exec {} ip a
8: tap1935fdda-34: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:46:72:69 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.2/24 brd 10.0.0.255 scope global tap1935fdda-34
    inet6 fe80::f816:3eff:fe46:7269/64 scope link
       valid_lft forever preferred_lft forever
16: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
15: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
47: qr-0dafbc88-a4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:f8:97:36 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.1/24 brd 10.0.0.255 scope global qr-0dafbc88-a4
    inet6 fe80::f816:3eff:fef8:9736/64 scope link
       valid_lft forever preferred_lft forever
48: qg-94813a40-e8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:e6:cd:3d brd ff:ff:ff:ff:ff:ff
    inet 192.168.144.100/24 brd 192.168.144.255 scope global qg-94813a40-e8
    inet6 fe80::f816:3eff:fee6:cd3d/64 scope link
       valid_lft forever preferred_lft forever

What's also interesting is that the ports of my br-int bridge are all down:

root@alice:/etc/init# sudo ovs-ofctl show br-int
OFPT_FEATURES_REPLY (xid=0x1): ver:0x1, dpid:0000966d76133843
n_tables:255, n_buffers:256
features: capabilities:0xc7, actions:0xfff
 1(tap1935fdda-34): addr:88:00:00:00:00:00
     config:     PORT_DOWN
     state:      LINK_DOWN
 3(tap37ab6888-42): addr:76:de:d5:87:27:ce
     config:     PORT_DOWN
     state:      LINK_DOWN
 14(patch-tun): addr:1e:41:08:d4:e5:ea
     config:     0
     state:      0
 15(qr-0dafbc88-a4): addr:89:00:00:00:00:00
     config:     PORT_DOWN
     state:      LINK_DOWN
 LOCAL(br-int): addr:96:6d:76:13:38:43
     config:     PORT_DOWN
     state:      LINK_DOWN
OFPT_GET_CONFIG_REPLY (xid=0x3): frags=normal miss_send_len=0

2012-11-15 22:07:59 -0500 answered a question Quantum: When using OVS, virtual interfaces silently disappear or lose their IPs

Also, interfaces apparently disappear silently; the Open vSwitch log shows a lot of messages like this one:

Nov 15 23:07:09|01029|netdev_linux|INFO|ioctl(SIOCGIFHWADDR) on tap1935fdda-34 device failed: No such device

2012-11-15 22:06:12 -0500 asked a question Quantum: When using OVS, virtual interfaces silently disappear or lose their IPs

Hello all,

I'm running an OpenStack setup here with Quantum+OVS. As I run all services on one node, I need use_namespaces=true in my configuration files. With that, I can set up Quantum and Open vSwitch nicely, but as soon as I fire up a VM, all the interfaces created by Quantum and Open vSwitch silently disappear:

35: qbr54d531d7-f3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 8e:7d:9b:e9:6d:87 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::3871:9ff:fef7:16f7/64 scope link
       valid_lft forever preferred_lft forever
36: qvo54d531d7-f3: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether ea:7f:8f:ca:f7:6c brd ff:ff:ff:ff:ff:ff
    inet6 fe80::e87f:8fff:feca:f76c/64 scope link
       valid_lft forever preferred_lft forever
37: qvb54d531d7-f3: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master qbr54d531d7-f3 state UP qlen 1000
    link/ether 8e:7d:9b:e9:6d:87 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::8c7d:9bff:fee9:6d87/64 scope link
       valid_lft forever preferred_lft forever
38: qbrc004d01b-cf: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether a2:08:7e:7a:5e:47 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::c072:a6ff:fea7:b450/64 scope link
       valid_lft forever preferred_lft forever
39: qvoc004d01b-cf: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether f6:28:36:fc:8a:c0 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::f428:36ff:fefc:8ac0/64 scope link
       valid_lft forever preferred_lft forever
40: qvbc004d01b-cf: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master qbrc004d01b-cf state UP qlen 1000
    link/ether a2:08:7e:7a:5e:47 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::a008:7eff:fe7a:5e47/64 scope link
       valid_lft forever preferred_lft forever
41: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master qbr54d531d7-f3 state UNKNOWN qlen 500
    link/ether fe:16:3e:c4:70:a8 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc16:3eff:fec4:70a8/64 scope link
       valid_lft forever preferred_lft forever
42: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master qbrc004d01b-cf state UNKNOWN qlen 500
    link/ether fe:16:3e:8e:52:dc brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc16:3eff:fe8e:52dc/64 scope link
       valid_lft forever preferred_lft forever

With use_namespaces=False in my configuration, this doesn't happen.
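For reference, this is where the switch lives in my setup; a sketch assuming the usual Ubuntu file locations, which may differ on other distributions:

# in /etc/quantum/l3_agent.ini and /etc/quantum/dhcp_agent.ini
[DEFAULT]
use_namespaces = True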

My configuration mainly follows Emilien Macchi's howto; it's using GRE.

I hope someone can help me with this one!

Best regards,
Martin G. Loschwitz