
Neutron: br-tun not working on compute node

asked 2015-06-02 11:28:53 -0600

johanhedberg

updated 2015-06-02 15:16:25 -0600

I have a basic setup of one controller node, one network node and one compute node, all on separate physical machines. Spinning up instances works, but they don't get an IP address. I've tracked the problem down to Neutron on the compute node. OpenStack has created a br-int bridge and a br-tun bridge, and from the little I understand, traffic should go from br-int to br-tun and then on to the network node.

Using tcpdump and snooper interfaces I found that traffic reached br-int and the patch interface on br-int, but not the patch interface on br-tun or br-tun itself. So the patch interfaces obviously weren't working. I got rid of them and created a veth pair instead. Now I have veth0 on br-int and veth1 on br-tun, and traffic from the VM is visible on br-int, veth0 and veth1, but still not on br-tun!
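For reference, I replaced the patch ports with a veth pair roughly like this (a sketch of the commands; the patch port names patch-tun/patch-int are the agent defaults and may differ in your setup):

```shell
# Remove the OVS patch ports (assumes the default names patch-tun/patch-int)
ovs-vsctl del-port br-int patch-tun
ovs-vsctl del-port br-tun patch-int

# Create a veth pair and bring both ends up
ip link add veth0 type veth peer name veth1
ip link set veth0 up
ip link set veth1 up

# Attach one end to each bridge
ovs-vsctl add-port br-int veth0
ovs-vsctl add-port br-tun veth1
```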

I've been trying to get OpenStack working for three months, stuck on this issue of Neutron not working on the compute node, so any help at all is much appreciated. Output of ovs-vsctl show:

[root@devstack-compute01 neutron]# ovs-vsctl show
40ab6bdc-ec3e-4599-b520-7a0fb6f2e3f7
    Bridge br-tun
        Port "vxlan-c0a86d61"
            Interface "vxlan-c0a86d61"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="192.168.109.99", out_key=flow, remote_ip="192.168.109.97"}
        Port "veth1"
            Interface "veth1"
        Port br-tun
            Interface br-tun
                type: internal
    Bridge br-int
        fail_mode: secure
        Port "qvoaaf249c0-f2"
            tag: 1
            Interface "qvoaaf249c0-f2"
        Port "qvof8d4d7ed-27"
            tag: 2
            Interface "qvof8d4d7ed-27"
        Port "veth0"
            Interface "veth0"
        Port "qvo23fec136-e9"
            tag: 3
            Interface "qvo23fec136-e9"
        Port "qvo04a4d4da-46"
            tag: 1
            Interface "qvo04a4d4da-46"
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.3.0"
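Incidentally, the OVS agent names VXLAN ports after the tunnel's remote IP in hex, which gives a quick sanity check that the tunnel points at the right peer. Decoding the suffix of vxlan-c0a86d61:

```shell
# Decode the hex suffix of the VXLAN port name back into a dotted-quad IP:
# c0 a8 6d 61 -> 192.168.109.97 (the remote_ip shown in the port options)
printf '%d.%d.%d.%d\n' 0xc0 0xa8 0x6d 0x61
```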

/etc/neutron/plugins/ml2/ml2_conf.ini on the controller node:

[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
[ml2_type_vxlan]
vni_ranges = 65537:69999
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
[ovs]
tenant_network_type = vxlan
enable_tunneling = True
tunnel_type = vxlan
tunnel_id_ranges = 65537:69999
[agent]
tunnel_types = vxlan

/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini on the network node:

[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
[ml2_type_vxlan]
vni_ranges = 65537:69999
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
[ovs]
tenant_network_type = vxlan
enable_tunneling = True
tunnel_type = vxlan
tunnel_id_ranges = 65537:69999
local_ip = 192.168.109.97
[agent]
tunnel_types = vxlan

/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini on the compute node:

[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
[ml2_type_vxlan]
vni_ranges = 65537:69999
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
[ovs]
tenant_network_type = vxlan
enable_tunneling = True
tunnel_type = vxlan
tunnel_id_ranges = 65537:69999
local_ip = 192.168.109.99
[agent]
tunnel_types = vxlan
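Worth noting: the flows on br-tun are programmed by neutron-openvswitch-agent, and without them table 0 ends in a default priority=0 drop rule, so after any config change I restart the agent and re-check the flows (a sketch, assuming the RDO/CentOS 7 service name):

```shell
# Restart the OVS agent so it reprograms the br-tun flow tables
systemctl restart neutron-openvswitch-agent

# Verify that flows beyond the default priority=0 drop have been installed
ovs-ofctl dump-flows br-tun
```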

Output of ovs-ofctl show for br-tun:

[root@devstack-compute01 ~]# ovs-ofctl show br-tun
OFPT_FEATURES_REPLY (xid=0x2): dpid:000022eb7930fa4a
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE
 2(vxlan-c0a86d61): addr:36:64:c7:cd:77:06
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 4(veth1): addr:1e:fe:44:a4:1f:76
     config:     0
     state:      0
     current:    10GB-FD COPPER
     speed: 10000 Mbps now, 0 Mbps max
 LOCAL(br-tun): addr:22:eb:79:30 ...

Comments

Please post /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini from the network node and the compute nodes, and /etc/neutron/plugins/ml2/ml2_conf.ini from the controller node (as an UPDATE to your question).

dbaxps ( 2015-06-02 12:59:38 -0600 )

Which OS are you running? The OVS version acceptable for RHEL/CentOS 7.1, F21 and F22 is 2.3.1.

dbaxps ( 2015-06-02 13:02:38 -0600 )

I've updated the question with those configuration files. I run CentOS 7, and according to yum my OVS version is openvswitch-2.3.0-1.x86_64.

johanhedberg ( 2015-06-02 14:40:18 -0600 )

2 answers


answered 2015-06-02 13:39:02 -0600

dbaxps

updated 2015-06-02 14:57:22 -0600

I would suggest you look at https://bugzilla.redhat.com/show_bug....

My guess is that running

yum -y update

would, in the meantime, upgrade OVS to 2.3.1.

Correct versions of the requested files can be viewed here: http://bderzhavets.blogspot.com/2015/...

RDO Kilo three-node deployment via packstack on Fedora 22.
Compute node:

[root@ip-192-169-142-137 ~(keystone_admin)]# ovs-ofctl show br-tun
OFPT_FEATURES_REPLY (xid=0x2): dpid:000056aa9437754b
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE
 3(patch-int): addr:5a:e9:e6:c7:fe:63
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 4(vxlan-0a000093): addr:de:cc:cd:30:b6:26
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 LOCAL(br-tun): addr:56:aa:94:37:75:4b
     config:     PORT_DOWN
     state:      LINK_DOWN
     speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
[root@ip-192-169-142-137 ~(keystone_admin)]# ovs-ofctl dump-flows br-tun
NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=16465.917s, table=0, n_packets=0, n_bytes=0, idle_age=16465, priority=0 actions=drop
 cookie=0x0, duration=16465.990s, table=0, n_packets=45391, n_bytes=4555477, idle_age=189, priority=1,in_port=3 actions=resubmit(,2)
 cookie=0x0, duration=16244.804s, table=0, n_packets=178500, n_bytes=229047582, idle_age=189, priority=1,in_port=4 actions=resubmit(,4)
 cookie=0x0, duration=16465.852s, table=2, n_packets=45336, n_bytes=4551357, idle_age=189, priority=0,dl_dst=00:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,20)
 cookie=0x0, duration=16465.782s, table=2, n_packets=55, n_bytes=4120, idle_age=16028, priority=0,dl_dst=01:00:00:00:00:00/01:00:00:00:00:00 actions=resubmit(,22)
 cookie=0x0, duration=16465.692s, table=3, n_packets=0, n_bytes=0, idle_age=16465, priority=0 actions=drop
 cookie=0x0, duration=16465.600s, table=4, n_packets=0, n_bytes=0, idle_age=16465, priority=0 actions=drop
 cookie=0x0, duration=16246.988s, table=4, n_packets=178504, n_bytes=229047934, idle_age=189, priority=1,tun_id=0x3ea actions=mod_vlan_vid:1,resubmit(,10)
 cookie=0x0, duration=16465.429s, table=10, n_packets=178504, n_bytes=229047934, idle_age=189, priority=1 actions=learn(table=20,hard_timeout=300,priority=1,NXM_OF_VLAN_TCI[0..11],NXM_OF_ETH_DST[]=NXM_OF_ETH_SRC[],load:0->NXM_OF_VLAN_TCI[],load:NXM_NX_TUN_ID[]->NXM_NX_TUN_ID[],output:NXM_OF_IN_PORT[]),output:3
 cookie=0x0, duration=16465.350s, table=20, n_packets=7, n_bytes=564, idle_age=421, priority=0 actions=resubmit(,22)
 cookie=0x0, duration=421.505s, table=20, n_packets=115, n_bytes=19935, hard_timeout=300, idle_age=189, hard_age=189, priority=1,vlan_tci=0x0001/0x0fff,dl_dst=fa:16:3e:64:0e:c2 actions=load:0->NXM_OF_VLAN_TCI[],load:0x3ea->NXM_NX_TUN_ID[],output:4
 cookie=0x0, duration=16465.296s, table=22, n_packets=40, n_bytes=3180, idle_age=16289, priority=0 actions=drop
 cookie=0x0, duration=16247.055s, table=22, n_packets=22, n_bytes=1504, idle_age=421, hard_age=16244, dl_vlan=1 actions=strip_vlan,set_tunnel:0x3ea,output:4
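Note how table 0 above only resubmits packets arriving on the patch port (in_port=3) and the tunnel port (in_port=4); anything else hits the priority=0 drop rule, which is what happens to a hand-made veth port with a different ofport number. A packet's path through these tables can be traced with ofproto/trace (a sketch; the in_port number and MAC addresses are placeholders, take the real port number from ovs-ofctl show br-tun):

```shell
# Trace how a packet entering br-tun on OpenFlow port 4 traverses the flow
# tables; the output ends with the final action (e.g. drop, or output to
# the tunnel port with a tun_id set)
ovs-appctl ofproto/trace br-tun \
    in_port=4,dl_src=fa:16:3e:00:00:01,dl_dst=ff:ff:ff:ff:ff:ff
```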

Network node:

[root@ip-192-169-142-147 openvswitch(keystone_admin)]# ovs-ofctl show br-tun
OFPT_FEATURES_REPLY (xid=0x2): dpid:000042b34c39534d
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN ...

Comments

What am I supposed to look for here? I don't really understand the output of these commands.

johanhedberg ( 2015-06-02 14:42:22 -0600 )

To understand how a VXLAN (or GRE) tunnel, together with the br-tun bridge, can be troubleshot, please view https://www.hastexo.com/system/files/...
The command ovs-ofctl dump-flows br-tun is the standard OVS approach.

dbaxps ( 2015-06-02 14:48:37 -0600 )

I don't know if I've understood this correctly, but the flows and the VXLAN/GRE/VLAN machinery are only used for getting traffic from the compute node to the network node? My problem is that the traffic is dropped before it even reaches br-tun, right?

johanhedberg ( 2015-06-02 15:06:20 -0600 )

I've added the output of those commands to my question. I don't see anything wrong there, but I'm still unsure what they are supposed to look like.

johanhedberg ( 2015-06-02 15:17:22 -0600 )

Please:

1. Upgrade OVS to 2.3.1
2. Fix ovs_neutron_plugin.ini on the network and compute nodes

as shown in the link in my answer, i.e. revert to the normal OVS configuration on both nodes, then restart neutron-openvswitch-agent on both nodes.

dbaxps ( 2015-06-02 15:24:41 -0600 )

answered 2015-06-04 08:06:53 -0600

Hello there,

I went through this issue; here is what worked for me.

As the installation guide clearly states, you need to maintain two interfaces, e.g. eth0 and eth1: eth0 for the management network and eth1 for the tunnel network. I hope you did the same thing. If I type:

root@controller:/etc/modprobe.d# ovs-vsctl show
599ec640-7eda-4d4c-a387-75e97bf135c5
    Bridge br-int
        Port "qvo4ba01e0a-34"
            tag: 2
            Interface "qvo4ba01e0a-34"
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "qvo21f7f45e-51"
            tag: 1
            Interface "qvo21f7f45e-51"
        Port br-int
            Interface br-int
                type: internal
        Port "qvo1c630b2f-c6"
            tag: 1
            Interface "qvo1c630b2f-c6"
    Bridge br-tun
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
        Port "gre-14140003"
            Interface "gre-14140003"
                type: gre
                options: {in_key=flow, local_ip="20.20.0.2", out_key=flow, remote_ip="20.20.0.3"}
    ovs_version: "2.0.1"

Clearly the tunneling path is a physical path through the physical switch I used, so instead of a veth pair, use the physical NIC for br-tun.
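Whichever interface carries the tunnel, it is worth confirming that the two endpoints can actually reach each other over it; a minimal check, using the endpoint IPs from my ovs-vsctl output above (20.20.0.2 local, 20.20.0.3 remote) and assuming eth1 is the tunnel NIC:

```shell
# Confirm the local GRE endpoint can reach the remote one over the tunnel network
ping -c 3 -I 20.20.0.2 20.20.0.3

# Then watch for the encapsulated traffic on the physical NIC (47 = GRE)
tcpdump -ni eth1 ip proto 47
```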


