There is a good explanation on this website:

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/4/html/Configuration_Reference_Guide/section_networking-scenarios.html

See Figure 7.2:

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/4/html/Configuration_Reference_Guide/images/common/under-the-hood-scenario-1-ovs-compute.png

First, assume that we are using Open vSwitch (OVS). Internally, within the compute node, tenant flows are separated by locally assigned VLAN IDs, which are managed by the Neutron L2 agent. On br-int, the ports that lead towards the VMs are tagged with these VLAN IDs (again, configured by the L2 agent). For example (my example differs slightly from Figure 7.2), suppose VM1 and VM2 belong to Tenant A, and VM3 and VM4 belong to Tenant B. One possible assignment by the L2 agent is VLAN ID 1 for VM1 and VM2 (Tenant A), and VLAN ID 2 for VM3 and VM4 (Tenant B). Traffic going from br-int towards the VMs is handled according to those respective VLAN IDs, and traffic coming from the VMs into br-int is tagged with them. This covers the internal traffic between the VMs and br-int.
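
You can see these internal tags directly on the compute node with ovs-vsctl. The output below is a trimmed illustration for this example; the port names are shortened placeholders (real ones are derived from the Neutron port UUIDs, e.g. qvoabc123de-45):

    $ sudo ovs-vsctl show
        Bridge br-int
            Port "qvo-vm1"
                tag: 1
                Interface "qvo-vm1"
            Port "qvo-vm3"
                tag: 2
                Interface "qvo-vm3"
            Port "int-br-eth1"
                Interface "int-br-eth1"

Here qvo-vm1 (Tenant A) carries tag 1 and qvo-vm3 (Tenant B) carries tag 2, while int-br-eth1 is the untagged link towards the physical bridge.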

For traffic coming from and going out to the actual physical network where VM traffic flows (i.e. the physical switches that carry traffic between the compute nodes running the VMs), the OpenStack administrator specifies the network_type used for this network (local, flat, vlan, vxlan, or gre; there are five as of the current Juno release). This is configured in the ML2 plugin (see https://openstack.redhat.com/ML2_plugin). I assume here that VLAN was used (Figure 7.2 also shows a VLAN example). In the ML2 plugin you can assign VLAN ranges (e.g. 101-110) for tenant traffic on that physical network. Traffic flowing across the physical network is tagged with IDs from these configured ranges, and NOT with the internally assigned VLAN IDs from the previous paragraph, which exist only inside the compute node. So, for example, Tenant A could be assigned VLAN ID 101, while Tenant B could be assigned VLAN ID 102.
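
As a rough sketch, the relevant ML2 and OVS agent settings for this VLAN example would look something like the following. The physical network label physnet1 and the bridge name br-eth1 are just example names, and the exact file and section layout can vary by distribution, so treat this as an outline rather than a drop-in config:

    # /etc/neutron/plugins/ml2/ml2_conf.ini (illustrative)
    [ml2]
    type_drivers = vlan
    tenant_network_types = vlan
    mechanism_drivers = openvswitch

    [ml2_type_vlan]
    # tenant networks receive segmentation IDs from this range on physnet1
    network_vlan_ranges = physnet1:101:110

    # OVS agent side: map the physical network label to the physical bridge
    [ovs]
    bridge_mappings = physnet1:br-eth1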

Now the most important part is the rewriting of VLAN IDs as traffic leaves (or enters) the compute node. OpenFlow rules, installed by the L2 agent on br-int and br-eth1 (as in Figure 7.2), handle this rewriting. In our example, when traffic from Tenant A's VMs 1 and 2 arrives at br-int, it carries VLAN ID 1; when it leaves br-int towards the physical network, the VLAN ID is rewritten from 1 to 101. Conversely, when Tenant A traffic enters the compute node from the physical network, it arrives with VLAN ID 101, which is rewritten to 1 as it passes through br-int so that it correctly reaches VM1 and/or VM2.
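
You can inspect these rewrite rules with ovs-ofctl. The output below is a hand-trimmed illustration of the kind of flows the L2 agent typically installs for this example; port numbers, priorities, cookies and counters will differ on a real node:

    # outbound: local VLAN 1 -> provider VLAN 101, on the physical bridge
    $ sudo ovs-ofctl dump-flows br-eth1
    ... in_port=2,dl_vlan=1 actions=mod_vlan_vid:101,NORMAL

    # inbound: provider VLAN 101 -> local VLAN 1, on the integration bridge
    $ sudo ovs-ofctl dump-flows br-int
    ... in_port=1,dl_vlan=101 actions=mod_vlan_vid:1,NORMAL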

You generally would not want to touch the internally assigned VLAN IDs (the L2 agent handles these automatically). If you do want to specify settings, you do it for the physical network. In this example we used VLAN segmentation with the range 101-110 on the physical network. Suppose that this is not scalable enough for your use case (the VLAN ID space allows only about 4094 usable IDs). You would then reconfigure the ML2 plugin to use GRE or VXLAN, each with its own form of segmentation ID. Internally, VLAN IDs are still used for traffic between the VMs and br-int, but instead of rewriting between VLAN ID 1 and VLAN ID 101 (just one example), the mapping is now between VLAN ID 1 and a particular VXLAN Network Identifier (or a GRE tunnel key, if you use GRE instead).
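
A minimal sketch of the corresponding ML2/OVS settings for the VXLAN case might look like this. The VNI range and the local_ip are made-up example values; local_ip must be the compute node's own tunnel endpoint address:

    # /etc/neutron/plugins/ml2/ml2_conf.ini (illustrative)
    [ml2]
    type_drivers = vxlan
    tenant_network_types = vxlan
    mechanism_drivers = openvswitch

    [ml2_type_vxlan]
    # tenant networks receive VNIs from this range instead of VLAN IDs
    vni_ranges = 10001:20000

    # OVS agent side: build VXLAN tunnels from this endpoint
    [ovs]
    local_ip = 192.0.2.11

    [agent]
    tunnel_types = vxlan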

Hope this explanation helps, and correct me if I have made any errors.
