
OpenVSwitch flows missing on newly created setup

asked 2012-12-06 18:21:09 -0500 by martin-loschwitz

I am seeing a strange effect in an OpenStack setup. I have a working installation where I can run "ovs-ofctl dump-flows br-int" or "ovs-ofctl dump-flows br-tun" and see a lot of defined flows.

On a different setup, I noticed that my compute nodes can't talk to the network node. On the working setup I see a lot of ovs-ofctl commands being executed by the quantum-plugin-openvswitch-agent on the nodes, but these are completely missing from the setup where things are broken.

I wonder when these flows are created by the quantum-plugin-openvswitch-agent, and why they exist in the first setup but not in the second (both use exactly the same configuration, but the setups were created in a different order).

So what's wrong here?


8 answers


answered 2012-12-06 18:26:31 -0500 by martin-loschwitz

This is what I see in the working setup:

root@charlie:~# ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=6410.167s, table=0, n_packets=549, n_bytes=126580, priority=1 actions=NORMAL

root@charlie:~# ovs-ofctl dump-flows br-tun
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=6401.687s, table=0, n_packets=113, n_bytes=36856, priority=3,tun_id=0x1,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=mod_vlan_vid:1,output:1
 cookie=0x0, duration=6402.01s, table=0, n_packets=218, n_bytes=44126, priority=4,in_port=1,dl_vlan=1 actions=set_tunnel:0x1,NORMAL
 cookie=0x0, duration=6401.38s, table=0, n_packets=214, n_bytes=46174, priority=3,tun_id=0x1,dl_dst=fa:16:3e:26:d7:1d actions=mod_vlan_vid:1,NORMAL
 cookie=0x0, duration=6400.522s, table=0, n_packets=0, n_bytes=0, priority=3,tun_id=0x1,dl_dst=fa:16:3e:bf:11:9c actions=mod_vlan_vid:1,NORMAL
 cookie=0x0, duration=6408.66s, table=0, n_packets=4, n_bytes=308, priority=1 actions=drop

On the broken setup, all I see is:

root@charlie:~# ovs-ofctl dump-flows br-int
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=6410.167s, table=0, n_packets=549, n_bytes=126580, priority=1 actions=NORMAL

root@charlie:~# ovs-ofctl dump-flows br-tun
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=6408.66s, table=0, n_packets=4, n_bytes=308, priority=1 actions=drop
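A quick way to compare two setups is to count flow entries on br-tun: a healthy tunnel bridge carries per-tenant flows, not just the priority=1 drop rule. A sketch using saved dump output (the file names and inlined sample lines are illustrative, abbreviated from the dumps above):

```shell
# Sample flow dumps saved from each setup (abbreviated from the output above)
cat > /tmp/working.txt <<'EOF'
 cookie=0x0, table=0, priority=3,tun_id=0x1 actions=mod_vlan_vid:1,output:1
 cookie=0x0, table=0, priority=4,in_port=1,dl_vlan=1 actions=set_tunnel:0x1,NORMAL
 cookie=0x0, table=0, priority=1 actions=drop
EOF
cat > /tmp/broken.txt <<'EOF'
 cookie=0x0, table=0, priority=1 actions=drop
EOF

# Each flow entry starts with "cookie="; count them per setup.
grep -c 'cookie=' /tmp/working.txt   # 3
grep -c 'cookie=' /tmp/broken.txt    # 1
```

On a live system you would pipe "ovs-ofctl dump-flows br-tun" into the files instead of the here-docs.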


answered 2012-12-06 21:14:48 -0500 by martin-loschwitz

Is this possibly caused by the fact that I did not specify "--provider:network_type gre --provider:segmentation_id <tunnel-id>" when creating the internal network?


answered 2012-12-06 21:44:16 -0500 by danwent

No, that's unlikely to be the cause.

Are you sure the quantum-openvswitch-agents are running in the second setup? Are you sure there are VMs up and running on the hosts? Can you run "ovs-vsctl show" on the non-working setup?


answered 2012-12-06 21:48:45 -0500 by martin-loschwitz

Yes, I am pretty sure that the quantum-openvswitch-agents are running in the second setup; I have restarted them on both hosts numerous times.

I have also compared "ovs-vsctl show" on both setups (the working and the broken one), and they both looked identical -- this is from the compute node:

Bridge br-int
    Port "vnet0"
        tag: 1
        Interface "vnet0"
    Port br-int
        Interface br-int
            type: internal
    Port "vnet1"
        tag: 2
        Interface "vnet1"
    Port patch-tun
        Interface patch-tun
            type: patch
            options: {peer=patch-int}
Bridge br-tun
    Port "gre-3"
        Interface "gre-3"
            type: gre
            options: {in_key=flow, out_key=flow, remote_ip="192.168.133.113"}
    Port patch-int
        Interface patch-int
            type: patch
            options: {peer=patch-tun}
    Port br-tun
        Interface br-tun
            type: internal
ovs_version: "1.4.0+build0"

and this is from the network node:

Bridge br-int
    Port "tap9885f7e6-c8"
        tag: 1
        Interface "tap9885f7e6-c8"
            type: internal
    Port patch-tun
        Interface patch-tun
            type: patch
            options: {peer=patch-int}
    Port "qr-5dd26f35-67"
        tag: 1
        Interface "qr-5dd26f35-67"
            type: internal
    Port br-int
        Interface br-int
            type: internal
Bridge br-tun
    Port "gre-2"
        Interface "gre-2"
            type: gre
            options: {in_key=flow, out_key=flow, remote_ip="192.168.133.112"}
    Port br-tun
        Interface br-tun
            type: internal
    Port patch-int
        Interface patch-int
            type: patch
            options: {peer=patch-tun}
Bridge br-ex
    Port br-ex
        Interface br-ex
            type: internal
    Port "qg-6bcaf016-b1"
        Interface "qg-6bcaf016-b1"
            type: internal
    Port "eth2"
        Interface "eth2"
ovs_version: "1.4.0+build0"

Where would the L2 agent get the information on how to create/modify the flows?


answered 2012-12-06 22:49:35 -0500 by martin-loschwitz

I used a modified version of Emilien Macchi's script, and as far as I can tell, the network in it gets created with

quantum net-create --tenant_id $tenant_id some-net

So in fact, I would expect it to be a local network. I will report back as soon as I can get my hands on the servers again (tomorrow) and figure out whether this actually fixed the problem.


answered 2012-12-06 22:32:05 -0500 by rkukura

Try running "network show <net>" with admin credentials (OS_USERNAME=admin works in devstack). If the provider:network_type is local, that would explain the lack of flows. Local networks are handled entirely within br-int, and don't require any flow rules. If so, and these are tenant rather than provider networks, you need to set tenant_network_type to something other than the default of local.
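For reference, with the Open vSwitch plugin the tenant network type is set in the plugin configuration. A sketch of the relevant section (the file path and tunnel ID range are typical values, not taken from this thread):

```ini
; /etc/quantum/plugins/openvswitch/ovs_quantum_plugin.ini (typical location)
[OVS]
; Default is "local", which creates no flows on br-tun.
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
```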

-Bob


answered 2012-12-06 22:42:43 -0500 by danwent

Ah, my initial response assumed you had followed the directions at http://docs.openstack.org/trunk/openstack-network/admin/content/openvswitch_plugin.html if you were using tunneling, as those networks would automatically be created with type GRE rather than local.


answered 2012-12-07 07:54:06 -0500 by martin-loschwitz

I can confirm that adding a network type to the command line when creating the networks fixed the problem. I will convert this into a bug against the OpenStack documentation.
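For the record, a sketch of a net-create invocation that sets the network type explicitly (the segmentation ID and the $tenant_id variable are illustrative, following the script mentioned earlier in the thread):

```shell
quantum net-create --tenant_id $tenant_id some-net \
  --provider:network_type gre --provider:segmentation_id 1
```

Setting provider attributes requires admin credentials, and the segmentation ID must fall within the tunnel ID range configured in the plugin.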



Stats

Seen: 516 times

Last updated: Dec 07 '12