Neutron: br-tun not working on compute node
I have a basic setup of one controller node, one network node and one compute node, all on different physical machines. Spinning up machines works, but they never get an IP. I've tracked it down to Neutron issues on the compute node. OpenStack has created a br-int bridge and a br-tun bridge, and from the little I understand, traffic should flow from br-int to br-tun and then on to the network node.
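As far as I understand, on a working compute node the two bridges are joined by a pair of peer patch ports; assuming the default Neutron names patch-tun and patch-int, that link can be inspected with:

ovs-vsctl get Interface patch-tun options    # on br-int, should show peer=patch-int
ovs-vsctl get Interface patch-int options    # on br-tun, should show peer=patch-tun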
Using tcpdump and snooper interfaces, I figured out that I had traffic on br-int and on br-int's patch interface, but not on br-tun's patch interface or on br-tun itself. So this patch interface thing obviously isn't working. I got rid of the patch interfaces and created a veth pair to use instead. Now I have veth0 on br-int and veth1 on br-tun, and traffic from the VM is visible on br-int, veth0 and veth1, but still not on br-tun!
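For reference, the replacement went roughly like this (a sketch from memory; the patch port names assume the Neutron defaults):

ovs-vsctl del-port br-int patch-tun
ovs-vsctl del-port br-tun patch-int
ip link add veth0 type veth peer name veth1
ip link set veth0 up
ip link set veth1 up
ovs-vsctl add-port br-int veth0
ovs-vsctl add-port br-tun veth1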
I've been trying to get OpenStack working for three months, stuck on this issue of Neutron not working on the compute node, so any help at all is much appreciated. Output of ovs-vsctl show:
[root@devstack-compute01 neutron]# ovs-vsctl show
40ab6bdc-ec3e-4599-b520-7a0fb6f2e3f7
    Bridge br-tun
        Port "vxlan-c0a86d61"
            Interface "vxlan-c0a86d61"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="192.168.109.99", out_key=flow, remote_ip="192.168.109.97"}
        Port "veth1"
            Interface "veth1"
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-int
        fail_mode: secure
        Port "qvoaaf249c0-f2"
            tag: 1
            Interface "qvoaaf249c0-f2"
        Port "qvof8d4d7ed-27"
            tag: 2
            Interface "qvof8d4d7ed-27"
        Port "veth0"
            Interface "veth0"
        Port "qvo23fec136-e9"
            tag: 3
            Interface "qvo23fec136-e9"
        Port "qvo04a4d4da-46"
            tag: 1
            Interface "qvo04a4d4da-46"
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.3.0"
/etc/neutron/plugins/ml2/ml2_conf.ini on the controller node:
[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
[ml2_type_vxlan]
vni_ranges = 65537:69999
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
[ovs]
tenant_network_type = vxlan
enable_tunneling = True
tunnel_type = vxlan
tunnel_id_ranges = 65537:69999
[agent]
tunnel_types = vxlan
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini on the network node:
[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
[ml2_type_vxlan]
vni_ranges = 65537:69999
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
[ovs]
tenant_network_type = vxlan
enable_tunneling = True
tunnel_type = vxlan
tunnel_id_ranges = 65537:69999
local_ip = 192.168.109.97
[agent]
tunnel_types = vxlan
/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini on the compute node:
[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
[ml2_type_vxlan]
vni_ranges = 65537:69999
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
[ovs]
tenant_network_type = vxlan
enable_tunneling = True
tunnel_type = vxlan
tunnel_id_ranges = 65537:69999
local_ip = 192.168.109.99
[agent]
tunnel_types = vxlan
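For completeness, changes to these files only take effect after a restart of the corresponding services, e.g. (service names assume the CentOS 7 / RDO packaging):

systemctl restart neutron-server                 # controller node
systemctl restart neutron-openvswitch-agent     # network and compute nodes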
Output of ovs-ofctl show for br-tun:
[root@devstack-compute01 ~]# ovs-ofctl show br-tun
OFPT_FEATURES_REPLY (xid=0x2): dpid:000022eb7930fa4a
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE
 2(vxlan-c0a86d61): addr:36:64:c7:cd:77:06
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 4(veth1): addr:1e:fe:44:a4:1f:76
     config:     0
     state:      0
     current:    10GB-FD COPPER
     speed: 10000 Mbps now, 0 Mbps max
 LOCAL(br-tun): addr:22:eb:79:30 ...
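I can also post the br-tun flow table and the agent status/logs if that helps, e.g. from (log path assumes the default CentOS packaging):

ovs-ofctl dump-flows br-tun
systemctl status neutron-openvswitch-agent
grep -i error /var/log/neutron/openvswitch-agent.log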
Please post /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini from the network node and compute nodes, and /etc/neutron/plugins/ml2/ml2_conf.ini from the controller node (as an update to your question).
Which OS are you running? The OVS version acceptable for RHEL/CentOS 7.1, F21 and F22 is 2.3.1.
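You can double-check the installed version with e.g.:

ovs-vsctl --version
rpm -q openvswitch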
I've updated the question with those configuration files. I'm running CentOS 7, and according to yum my OVS version is openvswitch-2.3.0-1.x86_64.