
Neutron: VXLAN in Havana

Has anyone been able to run VXLAN networking successfully with either the OVS plugin or ML2, and can you help me understand what pitfalls I might run into in doing so?

I have a stable/havana cluster that works great with GRE running in both OVS and ML2 (I've since switched to ML2; chasing issues in a deprecated plugin just seems like a fool's endeavour :-)

My setup is devstack-based, and I basically follow the directions on the ML2 page here: https://wiki.openstack.org/wiki/Neutron/ML2#Configure_devstack_for_ML2_with_Tunnel_Networks
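
For reference, a sketch of the devstack settings that page calls for (variable names as I understand the Havana-era devstack/ML2 wiki; double-check them against the page itself):

```ini
# Sketch of Havana-era devstack localrc settings for ML2 with VXLAN tunnels
# (variable names per the ML2 wiki page; treat as illustrative, not verbatim).
Q_PLUGIN=ml2
Q_ML2_TENANT_NETWORK_TYPE=vxlan
```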

I see the usual br-tun, br-int, and associated vxlan port on both the controller and compute node. For example, on the compute node:

sudo ovs-vsctl show
a426636d-4a2d-4a25-bc5b-8945409e1cab
    Bridge br-tun
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-192.168.3.3"
            Interface "vxlan-192.168.3.3"
                type: vxlan
                options: {in_key=flow, local_ip="192.168.3.2", out_key=flow, remote_ip="192.168.3.3"}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-int
        Port "qvob86969c3-c8"
            tag: 1
            Interface "qvob86969c3-c8"
        Port "qvo9da1c5b9-44"
            tag: 1
            Interface "qvo9da1c5b9-44"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
        Port "qvo9c11361a-7f"
            tag: 1
            Interface "qvo9c11361a-7f"
        Port "qvo483db35f-ca"
            tag: 1
            Interface "qvo483db35f-ca"
    ovs_version: "1.10.2"

This mirrors what I would see for GRE (which works), except of course that the interface type on the tunnel port would be "gre". As you can see, I'm using OVS 1.10.2, which has VXLAN support (the bigger picture is that I'm on Ubuntu 13.10 saucy, which supplies that version of OVS).

I can fire up VMs, and in the console log I see VMs on the compute node try and fail to get DHCP; during the process I also see VXLAN-encapsulated packets arriving on the controller from the compute node. After poking around on Google for discussions of VXLAN and openvswitch/ml2/neutron, I thought there might be an MTU issue, but inspecting interfaces (both OVS and physical) on the compute and controller nodes I see no indication of packet drops (at least, no reports of packet loss from ifconfig or ovs-dpctl) - and I'm guessing that if MTU were an issue, I'd be seeing drops (right?).
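
For what it's worth, here's the MTU arithmetic I was worried about (a sketch; the 50-byte figure assumes an IPv4 underlay with no VLAN tag):

```shell
# VXLAN overhead on an IPv4 underlay: outer IP (20) + UDP (8) + VXLAN header (8)
# + inner Ethernet (14) = 50 bytes, so a 1500-byte underlay MTU leaves 1450
# for the guest's IP MTU. GRE is leaner: roughly 38-42 bytes with a key.
underlay_mtu=1500
vxlan_overhead=$((20 + 8 + 8 + 14))
guest_mtu=$((underlay_mtu - vxlan_overhead))
echo "guest MTU should be at most $guest_mtu"
# To probe the path with DF set from inside a VM (payload = guest MTU - 20 IP - 8 ICMP):
#   ping -M do -s $((guest_mtu - 28)) <some-ip-on-the-tenant-network>
```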

I'm still getting my feet wet with VXLAN and still learning about Linux networking, but for the better part of a year I've had little trouble with GRE in folsom, grizzly, and (now) havana, so I'm thinking I'm just missing something that may be obvious to others regarding VXLAN - clearly just telling ML2 that I want to use VXLAN isn't sufficient.
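
For completeness, here's my understanding of what the resulting ML2/agent config should end up containing (a sketch built from the Havana option names, not a paste of my actual files):

```ini
# Sketch of Havana ML2 + OVS agent settings for VXLAN (option names from the
# Havana docs; the local_ip matches this cluster, the VNI range is illustrative).
[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch

[ml2_type_vxlan]
vni_ranges = 1001:2000

[ovs]
enable_tunneling = True
local_ip = 192.168.3.2

[agent]
tunnel_types = vxlan
```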


ovs-vsctl from the controller:

5ee4d85c-f0c9-4ccc-be1a-a4ea685c1c8e
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port "qvoc7ea9e28-d8"
            tag: 1
            Interface "qvoc7ea9e28-d8"
        Port "qvoe6f6e5fa-31"
            tag: 1
            Interface "qvoe6f6e5fa-31"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qvo24371d66-7a"
            tag: 1
            Interface "qvo24371d66-7a"
        Port "qvoe9f82b47-b3"
            tag: 1
            Interface "qvoe9f82b47-b3"
        Port "tapbbdf3a9d-c9"
            tag: 1
            Interface "tapbbdf3a9d-c9"
                type: internal
    Bridge br-tun
        Port "vxlan-192.168.3.2"
            Interface "vxlan-192.168.3.2"
                type: vxlan
                options: {in_key=flow, local_ip="192.168.3.3", out_key=flow, remote_ip="192.168.3.2"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    ovs_version: "1.10.2"


ovs-ofctl dump-flows on the controller:

sudo ovs-ofctl dump-flows br-tun
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=60338.193s, table=0, n_packets=66303, n_bytes=13772460, idle_age=1, priority=1,in_port=1 actions=resubmit(,1)
 cookie=0x0, duration=60207.727s, table=0, n_packets=0, n_bytes=0, idle_age=60207, priority=1,in_port=2 actions=resubmit(,3)
 cookie=0x0, duration=60338.014s, table=0, n_packets=3, n_bytes=230, idle_age=60330, priority=0 actions=drop
 cookie=0x0, duration=60337.65s, table=1, n_packets=66303, n_bytes=13772460, idle_age=1, priority=0,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,21)
 cookie=0x0, duration=60337.835s, table=1, n_packets=0, n_bytes=0, idle_age=60337, priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,20)
 cookie=0x0, duration=60337.471s, table=2, n_packets=0, n_bytes=0, idle_age=60337, priority=0 actions=drop
 cookie=0x0, duration=60334.937s, table=3, n_packets=0, n_bytes=0, idle_age=60334, priority=1,tun_id=0x3e9 actions=mod_vlan_vid:1,resubmit(,10)
 cookie=0x0, duration=60337.298s, table=3, n_packets=0, n_bytes=0, idle_age=60337, priority=0 actions=drop
 cookie=0x0, duration=60337.122s, table=10, n_packets=0, n_bytes=0, idle_age=60337, priority=1 actions=learn(table=20,hard_timeout=300,priority=1,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1
 cookie=0x0, duration=60336.919s, table=20, n_packets=0, n_bytes=0, idle_age=60336, priority=0 actions=resubmit(,21)
 cookie=0x0, duration=60244.984s, table=21, n_packets=66163, n_bytes=13744651, idle_age=1, hard_age=60207, priority=1,dl_vlan=1 actions=strip_vlan,set_tunnel:0x3e9,output:2
 cookie=0x0, duration=60336.73s, table=21, n_packets=137, n_bytes=27563, idle_age=60189, priority=0 actions=drop
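
One thing that jumps out when I stare at this dump: the table 0 flow matching in_port=2 (the vxlan port) has n_packets=0, i.e. nothing coming back from the tunnel ever enters br-tun. A quick way to check that (shown here against a pasted sample line; on a live node, pipe `sudo ovs-ofctl dump-flows br-tun` through the same grep instead):

```shell
# Count packets that have entered br-tun from the tunnel port (in_port=2).
# The sample line below is copied from the dump above.
cat > /tmp/flows.txt <<'EOF'
 cookie=0x0, duration=60207.727s, table=0, n_packets=0, n_bytes=0, idle_age=60207, priority=1,in_port=2 actions=resubmit(,3)
EOF
grep 'in_port=2' /tmp/flows.txt | grep -o 'n_packets=[0-9]*'   # n_packets=0
```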


Here's the output of the same command, except with GRE enabled:

sudo ovs-ofctl dump-flows br-tun
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=675.153s, table=0, n_packets=848, n_bytes=133205, idle_age=1, priority=1,in_port=1 actions=resubmit(,1)
 cookie=0x0, duration=203.759s, table=0, n_packets=632, n_bytes=88512, idle_age=1, priority=1,in_port=2 actions=resubmit(,2)
 cookie=0x0, duration=674.963s, table=0, n_packets=4, n_bytes=320, idle_age=667, priority=0 actions=drop
 cookie=0x0, duration=674.375s, table=1, n_packets=802, n_bytes=127673, idle_age=0, priority=0,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,21)
 cookie=0x0, duration=674.764s, table=1, n_packets=45, n_bytes=5454, idle_age=30, priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,20)
 cookie=0x0, duration=672.077s, table=2, n_packets=632, n_bytes=88512, idle_age=1, priority=1,tun_id=0x1 actions=mod_vlan_vid:1,resubmit(,10)
 cookie=0x0, duration=674.057s, table=2, n_packets=0, n_bytes=0, idle_age=674, priority=0 actions=drop
 cookie=0x0, duration=673.871s, table=3, n_packets=0, n_bytes=0, idle_age=673, priority=0 actions=drop
 cookie=0x0, duration=673.667s, table=10, n_packets=632, n_bytes=88512, idle_age=1, priority=1 actions=learn(table=20,hard_timeout=300,priority=1,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1
 cookie=0x0, duration=673.484s, table=20, n_packets=0, n_bytes=0, idle_age=673, priority=0 actions=resubmit(,21)
 cookie=0x0, duration=158.159s, table=20, n_packets=15, n_bytes=1818, hard_timeout=300, idle_age=55, hard_age=54, priority=1,vlan_tci=0x0001/0x0fff,dl_dst=fa:16:3e:7c:52:8f actions=load:0->NXM_OF_VLAN_TCI[],load:0x1->NXM_NX_TUN_ID[],output:2
 cookie=0x0, duration=139.433s, table=20, n_packets=0, n_bytes=0, hard_timeout=300, idle_age=139, hard_age=1, priority=1,vlan_tci=0x0001/0x0fff,dl_dst=de:de:bf:46:e9:08 actions=load:0->NXM_OF_VLAN_TCI[],load:0x1->NXM_NX_TUN_ID[],output:2
 cookie=0x0, duration=150.77s, table=20, n_packets=15, n_bytes=1818, hard_timeout=300, idle_age=48, hard_age=47, priority=1,vlan_tci=0x0001/0x0fff,dl_dst=fa:16:3e:bb:f7:d6 actions=load:0->NXM_OF_VLAN_TCI[],load:0x1->NXM_NX_TUN_ID[],output:2
 cookie=0x0, duration=153.789s, table=20, n_packets=0, n_bytes=0, hard_timeout=300, idle_age=153, hard_age=2, priority=1,vlan_tci=0x0001/0x0fff,dl_dst=56:c7:d4:76:3f:b4 actions=load:0->NXM_OF_VLAN_TCI[],load:0x1->NXM_NX_TUN_ID[],output:2
 cookie=0x0, duration=163.54s, table=20, n_packets=0, n_bytes=0, hard_timeout=300, idle_age=163, hard_age=0, priority=1,vlan_tci=0x0001/0x0fff,dl_dst=6a:80:8a:d4:d3:88 actions=load:0->NXM_OF_VLAN_TCI[],load:0x1->NXM_NX_TUN_ID[],output:2
 cookie=0x0, duration=134.198s, table=20, n_packets=15, n_bytes=1818, hard_timeout=300, idle_age=30, hard_age=29, priority=1,vlan_tci=0x0001/0x0fff,dl_dst=fa:16:3e:14:43:f9 actions=load:0->NXM_OF_VLAN_TCI[],load:0x1->NXM_NX_TUN_ID[],output:2
 cookie=0x0, duration=203.575s, table=21, n_packets=637, n_bytes=93636, idle_age=0, priority=1,dl_vlan=1 actions=strip_vlan,set_tunnel:0x1,output:2
 cookie=0x0, duration=673.279s, table=21, n_packets=164, n_bytes=33947, idle_age=125, priority=0 actions=drop
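
The contrast between the two dumps is what I find telling (numbers copied from the table 0, priority=1,in_port=2 flows above): under GRE, traffic is decapsulated from the tunnel and table 20 fills with learned remote MACs, while under VXLAN the tunnel-ingress flow has never matched even though encapsulated packets are arriving on the wire.

```shell
# Tunnel-ingress packet counts copied from the two dump-flows outputs above.
gre_rx=632     # GRE run: frames decapsulated from the tunnel port
vxlan_rx=0     # VXLAN run: nothing ever re-enters br-tun from the tunnel
if [ "$vxlan_rx" -eq 0 ] && [ "$gre_rx" -gt 0 ]; then
    echo "outbound-only: VXLAN frames leave, but none are accepted back"
fi
```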

 cookie=0x0, duration=163.54s, table=20, n_packets=0, n_bytes=0, hard_timeout=300, idle_age=163, hard_age=0, priority=1,vlan_tci=0x0001/0x0fff,dl_dst=6a:80:8a:d4:d3:88 actions=load:0->NXM_OF_VLAN_TCI[],load:0x1->NXM_NX_TUN_ID[],output:2
 cookie=0x0, duration=134.198s, table=20, n_packets=15, n_bytes=1818, hard_timeout=300, idle_age=30, hard_age=29, priority=1,vlan_tci=0x0001/0x0fff,dl_dst=fa:16:3e:14:43:f9 actions=load:0->NXM_OF_VLAN_TCI[],load:0x1->NXM_NX_TUN_ID[],output:2
 cookie=0x0, duration=203.575s, table=21, n_packets=637, n_bytes=93636, idle_age=0, priority=1,dl_vlan=1 actions=strip_vlan,set_tunnel:0x1,output:2
 cookie=0x0, duration=673.279s, table=21, n_packets=164, n_bytes=33947, idle_age=125, priority=0 actions=drop


Neutron: VXLAN in Havana

Has anyone been able to successfully run either OVS or ML2 with VXLAN networking and can help me understand what pitfalls I might run into in doing so?

I have a stable/havana cluster that works great with GRE running under both OVS and ML2 (I've since switched to ML2; chasing issues in a deprecated plugin just seems like a fool's endeavour :-)

I'm devstack-based and basically follow the directions on the ML2 wiki (https://wiki.openstack.org/wiki/Neutron/ML2#Configure_devstack_for_ML2_with_Tunnel_Networks)

I see the usual br-tun, br-int, and associated vxlan port on both controller and compute node, e.g.,

sudo ovs-vsctl show
a426636d-4a2d-4a25-bc5b-8945409e1cab
    Bridge br-tun
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-192.168.3.3"
            Interface "vxlan-192.168.3.3"
                type: vxlan
                options: {in_key=flow, local_ip="192.168.3.2", out_key=flow, remote_ip="192.168.3.3"}
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-int
        Port "qvob86969c3-c8"
            tag: 1
            Interface "qvob86969c3-c8"
        Port "qvo9da1c5b9-44"
            tag: 1
            Interface "qvo9da1c5b9-44"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
        Port "qvo9c11361a-7f"
            tag: 1
            Interface "qvo9c11361a-7f"
        Port "qvo483db35f-ca"
            tag: 1
            Interface "qvo483db35f-ca"
    ovs_version: "1.10.2"

This mirrors what I would see for GRE (which works), except of course the interface type on the tunnel port would be "gre". As you can see, I'm using OVS 1.10.2, which has VXLAN support (the bigger picture is that I'm on Ubuntu 13.10 saucy, which supplies that version of OVS).

I can fire up VMs, watch them try (and fail) to get DHCP via the console log, and I also see VXLAN-encapsulated packets arriving on the controller from the compute node during the process. After poking around on Google for discussions of VXLAN and openvswitch/ml2/neutron, I thought there might be an MTU issue, but inspecting the interfaces (both OVS and physical) I see no indication of packet drops (at least, no reports of packet loss using ifconfig or ovs-dpctl) on either the compute or controller node. I'm guessing that if MTU were the issue, I'd be seeing packet drops (right?).
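For reference, here's the MTU arithmetic I'm working from; the 50-byte figure is the standard VXLAN-over-IPv4 encapsulation overhead (RFC 7348), and the ping at the end is a hypothetical way to probe the underlay path, not something I've confirmed yet (192.168.3.3 is the peer from my setup):

```shell
# VXLAN-over-IPv4 encapsulation overhead (per the RFC 7348 frame format):
#   outer IPv4 (20) + outer UDP (8) + VXLAN header (8) + inner Ethernet (14)
OVERHEAD=$((20 + 8 + 8 + 14))
UNDERLAY_MTU=1500                       # my physical NICs are at the default 1500
GUEST_MTU=$((UNDERLAY_MTU - OVERHEAD))  # largest guest MTU that avoids fragmentation
echo "overhead=${OVERHEAD} guest_mtu=${GUEST_MTU}"
# prints: overhead=50 guest_mtu=1450

# To actually probe the underlay path MTU between the nodes, something like
# this (DF bit set; 28 = IP + ICMP header bytes on top of the payload):
#   ping -M do -c 3 -s $((UNDERLAY_MTU - 28)) 192.168.3.3
```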

I'm still getting my feet wet with VXLAN, and I'm also still learning about Linux networking, but for the better part of a year I've had little trouble with GRE in folsom, grizzly, and (now) havana, so I think I'm just missing something that may be obvious to others regarding VXLAN; clearly just telling ML2 that I want to use VXLAN isn't sufficient.
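For context, here's roughly what "telling ML2 I want VXLAN" amounts to in my understanding. Section and option names follow the ML2 wiki, but the values below are illustrative assumptions, not my literal config (though the vni_ranges shown is consistent with the tun_id=0x3e9, i.e. 1001, in the flow dumps below):

```ini
; ml2_conf.ini / agent config -- illustrative values, not my literal files
[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch

[ml2_type_vxlan]
vni_ranges = 1001:2000

[ovs]
local_ip = 192.168.3.2
enable_tunneling = True

[agent]
tunnel_types = vxlan
```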

ovs-vsctl from the controller:

5ee4d85c-f0c9-4ccc-be1a-a4ea685c1c8e
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port "qvoc7ea9e28-d8"
            tag: 1
            Interface "qvoc7ea9e28-d8"
        Port "qvoe6f6e5fa-31"
            tag: 1
            Interface "qvoe6f6e5fa-31"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qvo24371d66-7a"
            tag: 1
            Interface "qvo24371d66-7a"
        Port "qvoe9f82b47-b3"
            tag: 1
            Interface "qvoe9f82b47-b3"
        Port "tapbbdf3a9d-c9"
            tag: 1
            Interface "tapbbdf3a9d-c9"
                type: internal
    Bridge br-tun
        Port "vxlan-192.168.3.2"
            Interface "vxlan-192.168.3.2"
                type: vxlan
                options: {in_key=flow, local_ip="192.168.3.3", out_key=flow, remote_ip="192.168.3.2"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    ovs_version: "1.10.2"

ovs-ofctl dump-flows on the controller:

sudo ovs-ofctl dump-flows br-tun
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=201.225s, table=0, n_packets=238, n_bytes=46686, idle_age=0, priority=1,in_port=1 actions=resubmit(,1)
 cookie=0x0, duration=87.937s, table=0, n_packets=0, n_bytes=0, idle_age=87, priority=1,in_port=2 actions=resubmit(,3)
 cookie=0x0, duration=200.686s, table=0, n_packets=3, n_bytes=230, idle_age=195, priority=0 actions=drop
 cookie=0x0, duration=199.533s, table=1, n_packets=238, n_bytes=46686, idle_age=0, priority=0,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,21)
 cookie=0x0, duration=200.042s, table=1, n_packets=0, n_bytes=0, idle_age=200, priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,20)
 cookie=0x0, duration=199.007s, table=2, n_packets=0, n_bytes=0, idle_age=199, priority=0 actions=drop
 cookie=0x0, duration=193.509s, table=3, n_packets=0, n_bytes=0, idle_age=193, priority=1,tun_id=0x3e9 actions=mod_vlan_vid:1,resubmit(,10)
 cookie=0x0, duration=198.61s, table=3, n_packets=0, n_bytes=0, idle_age=198, priority=0 actions=drop
 cookie=0x0, duration=198.257s, table=10, n_packets=0, n_bytes=0, idle_age=198, priority=1 actions=learn(table=20,hard_timeout=300,priority=1,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1
 cookie=0x0, duration=197.713s, table=20, n_packets=0, n_bytes=0, idle_age=197, priority=0 actions=resubmit(,21)
 cookie=0x0, duration=87.221s, table=21, n_packets=86, n_bytes=15351, idle_age=0, priority=1,dl_vlan=1 actions=strip_vlan,set_tunnel:0x3e9,output:2
 cookie=0x0, duration=197.233s, table=21, n_packets=149, n_bytes=31085, idle_age=13, priority=0 actions=drop

Here's the output of the same command, except with GRE enabled:

sudo ovs-ofctl dump-flows br-tun
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=675.153s, table=0, n_packets=848, n_bytes=133205, idle_age=1, priority=1,in_port=1 actions=resubmit(,1)
 cookie=0x0, duration=203.759s, table=0, n_packets=632, n_bytes=88512, idle_age=1, priority=1,in_port=2 actions=resubmit(,2)
 cookie=0x0, duration=674.963s, table=0, n_packets=4, n_bytes=320, idle_age=667, priority=0 actions=drop
 cookie=0x0, duration=674.375s, table=1, n_packets=802, n_bytes=127673, idle_age=0, priority=0,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,21)
 cookie=0x0, duration=674.764s, table=1, n_packets=45, n_bytes=5454, idle_age=30, priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,20)
 cookie=0x0, duration=672.077s, table=2, n_packets=632, n_bytes=88512, idle_age=1, priority=1,tun_id=0x1 actions=mod_vlan_vid:1,resubmit(,10)
 cookie=0x0, duration=674.057s, table=2, n_packets=0, n_bytes=0, idle_age=674, priority=0 actions=drop
 cookie=0x0, duration=673.871s, table=3, n_packets=0, n_bytes=0, idle_age=673, priority=0 actions=drop
 cookie=0x0, duration=673.667s, table=10, n_packets=632, n_bytes=88512, idle_age=1, priority=1 actions=learn(table=20,hard_timeout=300,priority=1,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:1
 cookie=0x0, duration=673.484s, table=20, n_packets=0, n_bytes=0, idle_age=673, priority=0 actions=resubmit(,21)
 cookie=0x0, duration=158.159s, table=20, n_packets=15, n_bytes=1818, hard_timeout=300, idle_age=55, hard_age=54, priority=1,vlan_tci=0x0001/0x0fff,dl_dst=fa:16:3e:7c:52:8f actions=load:0->NXM_OF_VLAN_TCI[],load:0x1->NXM_NX_TUN_ID[],output:2
 cookie=0x0, duration=139.433s, table=20, n_packets=0, n_bytes=0, hard_timeout=300, idle_age=139, hard_age=1, priority=1,vlan_tci=0x0001/0x0fff,dl_dst=de:de:bf:46:e9:08 actions=load:0->NXM_OF_VLAN_TCI[],load:0x1->NXM_NX_TUN_ID[],output:2
 cookie=0x0, duration=150.77s, table=20, n_packets=15, n_bytes=1818, hard_timeout=300, idle_age=48, hard_age=47, priority=1,vlan_tci=0x0001/0x0fff,dl_dst=fa:16:3e:bb:f7:d6 actions=load:0->NXM_OF_VLAN_TCI[],load:0x1->NXM_NX_TUN_ID[],output:2
 cookie=0x0, duration=153.789s, table=20, n_packets=0, n_bytes=0, hard_timeout=300, idle_age=153, hard_age=2, priority=1,vlan_tci=0x0001/0x0fff,dl_dst=56:c7:d4:76:3f:b4 actions=load:0->NXM_OF_VLAN_TCI[],load:0x1->NXM_NX_TUN_ID[],output:2
 cookie=0x0, duration=163.54s, table=20, n_packets=0, n_bytes=0, hard_timeout=300, idle_age=163, hard_age=0, priority=1,vlan_tci=0x0001/0x0fff,dl_dst=6a:80:8a:d4:d3:88 actions=load:0->NXM_OF_VLAN_TCI[],load:0x1->NXM_NX_TUN_ID[],output:2
 cookie=0x0, duration=134.198s, table=20, n_packets=15, n_bytes=1818, hard_timeout=300, idle_age=30, hard_age=29, priority=1,vlan_tci=0x0001/0x0fff,dl_dst=fa:16:3e:14:43:f9 actions=load:0->NXM_OF_VLAN_TCI[],load:0x1->NXM_NX_TUN_ID[],output:2
 cookie=0x0, duration=203.575s, table=21, n_packets=637, n_bytes=93636, idle_age=0, priority=1,dl_vlan=1 actions=strip_vlan,set_tunnel:0x1,output:2
 cookie=0x0, duration=673.279s, table=21, n_packets=164, n_bytes=33947, idle_age=125, priority=0 actions=drop

I'm also seeing the following in tcpdump on the controller sniffing the cluster net:

14:31:30.284221 IP 192.168.3.3.57221 > 192.168.3.2.4789: UDP, length 115
14:31:30.284381 IP 192.168.3.2 > 192.168.3.3: ICMP 192.168.3.2 udp port 4789 unreachable, length 151

As far as netstat goes:

solmaz@controller:~$ sudo netstat -an | grep 4789
tcp        0      0 127.0.0.1:3306          127.0.0.1:47894         ESTABLISHED
tcp        0      0 127.0.0.1:3306          127.0.0.1:47898         ESTABLISHED
tcp        0      0 127.0.0.1:47894         127.0.0.1:3306          ESTABLISHED
tcp        0      0 127.0.0.1:47898         127.0.0.1:3306          ESTABLISHED
udp        0      0 0.0.0.0:4789            0.0.0.0:*

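One thing I noticed re-reading that netstat output: the plain `grep 4789` also matches the TCP MySQL connections on ephemeral ports 47894/47898, which are unrelated. Anchoring the pattern on the end of the port field isolates the actual UDP 4789 socket; the sample lines below are pasted from the output above, and the live command at the end is the obvious variant (with -p to show the owning process):

```shell
# Sample lines copied from the netstat output above.
netstat_sample='tcp        0      0 127.0.0.1:3306          127.0.0.1:47894         ESTABLISHED
udp        0      0 0.0.0.0:4789            0.0.0.0:*'

# ":4789" followed by whitespace matches only the VXLAN UDP socket,
# not the 47894/47898 MySQL connections.
echo "$netstat_sample" | grep -E ':4789[[:space:]]'

# On a live node, -p additionally shows which process owns the socket:
#   sudo netstat -anp | grep -E ':4789[[:space:]]'
```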