Does Neutron (br-int) accept Nova instances sending VLAN-tagged traffic?

asked 2015-01-17 19:08:38 -0600

Vivek

My use case involves sending VLAN-tagged traffic between instances on the same hypervisor and across hypervisors. How does Neutron handle VLAN-tagged traffic from instances? Do I need a special setting for this to work?


4 answers


answered 2015-01-19 20:39:08 -0600

l2edzl3oy

updated 2015-01-20 03:36:34 -0600

There is a good explanation on this website. See Figure 7.2:


First, assume we are using Open vSwitch. Internally, within the compute node, tenant flows are separated by internally assigned VLAN IDs, configured by the Neutron L2 agent. On br-int, the ports that face the VMs are tagged with these VLAN IDs (again, configured by the L2 agent). For example (my example is slightly different from Figure 7.2), suppose VM1 and VM2 belong to Tenant A, and VM3 and VM4 belong to Tenant B. One possible configuration by the L2 agent is to assign VM1 and VM2 VLAN ID 1 (for Tenant A), and VM3 and VM4 VLAN ID 2 (for Tenant B). Traffic going from br-int into the VMs is tagged with those respective VLAN IDs, and traffic coming from the VMs to br-int is assumed to be tagged with them. This covers internal traffic between the VMs and br-int.
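As a concrete sketch of what the L2 agent sets up, you can see these internal tags on the br-int ports with ovs-vsctl (the tap port names below are made up for this example; on a real node they are derived from the Neutron port UUIDs):

```shell
# ovs-vsctl show        (output trimmed to the relevant ports)
    Bridge br-int
        Port "tap-vm1"          # VM1, Tenant A
            tag: 1
            Interface "tap-vm1"
        Port "tap-vm3"          # VM3, Tenant B
            tag: 2
            Interface "tap-vm3"
```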

For traffic coming from and going out to the actual physical network where VM traffic flows (i.e. the physical switches that carry traffic among the compute nodes where the VMs run), the OpenStack administrator can specify the network_type used for this network (local, flat, vlan, vxlan, or gre - there are five as of the current OpenStack Juno release). This is configured in the ML2 plugin. I assume VLAN is used (Figure 7.2 also shows a VLAN example). In the ML2 plugin, you can assign VLAN ranges (e.g. 101-110) for tenant traffic on the physical network. Traffic flowing within the physical network is tagged with IDs from these configured VLAN ranges (and NOT the internally assigned VLAN IDs from the previous paragraph, which apply only within the compute node). So, for example, Tenant A could be assigned VLAN ID 101, while Tenant B could be assigned VLAN ID 102.
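A minimal ML2 configuration for this setup might look like the sketch below. The physnet1 label and the 101-110 range are assumptions for this example; the option names are the standard ML2/OVS ones from the Juno era:

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini (sketch, not a complete config)
[ml2]
type_drivers = vlan
tenant_network_types = vlan
mechanism_drivers = openvswitch

[ml2_type_vlan]
# physical_network_label:vlan_min:vlan_max
network_vlan_ranges = physnet1:101:110

[ovs]
# map the physical network label to the bridge attached to the physical NIC
bridge_mappings = physnet1:br-eth1
```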

Now the most important part is the modification of VLAN IDs as traffic leaves (or enters) the compute node. OVS rules are configured by the L2 agent on br-int (and br-eth1, as in Figure 7.2) to handle this modification. So, in our example, when traffic from Tenant A's VMs 1 and 2 arrives at br-int, it has VLAN ID 1. When it goes from br-int out into the physical network, the VLAN ID is modified from 1 to 101. And when Tenant A traffic enters the compute node from the physical network, it first carries VLAN ID 101, which is then modified to 1 as it passes through br-int, so that it correctly reaches VMs 1 and/or 2.
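In OpenFlow terms, the rewrite for this example would look roughly like the flow entries below. The agent installs these automatically; the in_port numbers here are assumptions, and the entries are trimmed to the mod_vlan_vid actions that matter:

```shell
# On br-eth1: internal VLAN 1 -> provider VLAN 101 on the way out
 priority=4,in_port=1,dl_vlan=1 actions=mod_vlan_vid:101,NORMAL
# On br-int: provider VLAN 101 -> internal VLAN 1 on the way in
 priority=3,in_port=1,dl_vlan=101 actions=mod_vlan_vid:1,NORMAL
```

You can verify what the agent actually installed on your own nodes with "ovs-ofctl dump-flows br-int" and "ovs-ofctl dump-flows br-eth1".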

You generally would not want to touch the internally assigned VLAN IDs (the L2 agent handles these automatically). But if you want to specify settings, you would do it for the physical network. For this example, we used VLAN segmentation and VLAN ranges 101-110 on the physical network ...


answered 2015-01-17 22:03:58 -0600

SamYaple

OVS is going to strip that VLAN tag before applying its own internal tag. Unfortunately, those OVS flow rules are hard-coded in the Neutron code.


answered 2015-01-20 00:21:07 -0600

dbaxps

You may also take a look at the section "About network isolation"; it contains ovs-ofctl dump-flows reports as well.

In the example above, we see that the namespace interface "tap26c9b807-7c" has VLAN tag 1 assigned to it. If we examine OVS, we see flows that modify VLAN tag 1 to VLAN tag 1000 when a packet goes to the VM network on eth2, and vice versa. We can see this using the dump-flows command on OVS. For packets going to the VM network, we see the modification done on br-eth2:

#  ovs-ofctl dump-flows br-eth2

NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=18669.401s, table=0, n_packets=857, n_bytes=163350, idle_age=25, priority=4,in_port=2,dl_vlan=1 actions=mod_vlan_vid:1000,NORMAL
 cookie=0x0, duration=165108.226s, table=0, n_packets=14, n_bytes=1000, idle_age=5343, hard_age=65534, priority=2,in_port=2 actions=drop
 cookie=0x0, duration=165109.813s, table=0, n_packets=1671, n_bytes=213304, idle_age=25, hard_age=65534, priority=1 actions=NORMAL

For packets coming from the interface to the namespace we see the following modification:

#  ovs-ofctl dump-flows br-int

NXST_FLOW reply (xid=0x4):
 cookie=0x0, duration=18690.876s, table=0, n_packets=1610, n_bytes=210752, idle_age=1, priority=3,in_port=1,dl_vlan=1000 actions=mod_vlan_vid:1,NORMAL
 cookie=0x0, duration=165130.01s, table=0, n_packets=75, n_bytes=3686, idle_age=4212, hard_age=65534, priority=2,in_port=1 actions=drop
 cookie=0x0, duration=165131.96s, table=0, n_packets=863, n_bytes=160727, idle_age=1, hard_age=65534, priority=1 actions=NORMAL

answered 2015-01-18 15:30:57 -0600

darragh-oreilly

Yeah, the OVS agent can't do it. I think the Linux bridge agent and the ML2 mechanism driver with type flat should work (maybe even type vlan, if your switches support QinQ).
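A sketch of what that could look like with the Linux bridge mechanism driver and flat networking - the physnet1 label and eth1 interface are assumptions for this example. With a flat network there is no local VLAN rewriting, so tags set by the guest pass through to the wire:

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini (sketch, not a complete config)
[ml2]
type_drivers = flat
tenant_network_types = flat
mechanism_drivers = linuxbridge

[ml2_type_flat]
flat_networks = physnet1

[linux_bridge]
# map the physical network label to the host NIC the bridge attaches to
physical_interface_mappings = physnet1:eth1
```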



Seen: 1,653 times

Last updated: Jan 20 '15