Revision history

In case GRE or VXLAN networks are used, 4096 is not a limit. Open vSwitch supports tunneling with both the GRE and VXLAN encapsulation protocols.
View https://ask.openstack.org/en/question/31760/in-gre-why-the-vlan-tag-is-stripped-out-before-sending-it-in-tunnel-rather-than-sending-the-vlan-tagged-packet/#31766

In Neutron, if we are using the ML2 plugin with the GRE type driver, then the tenant-specific VLAN tag is normally stripped in br-tun and the corresponding GRE key/ID is applied before the data is sent into the tunnel. The mapping between GRE key and VLAN ID is different on each OpenStack node.

1. VXLAN. The primary goal of VXLAN is to extend the virtual LAN (VLAN) address space by adding a 24-bit segment ID, increasing the number of available IDs to 16 million.

2. GRE. Across compute nodes the GRE tunnel ID is used: each tenant network is provisioned both a GRE tunnel ID and a locally significant VLAN tag. Incoming traffic carrying a given GRE tunnel ID is converted to the correct local VLAN tag; the frame is then forwarded to br-int already VLAN-tagged, where the appropriate checks can be made. The total is therefore Number_of_GRE_IDs × Number_of_VLAN_IDs (say 4096). So even though the 4096 VLAN limit itself is real, the number of tenant networks can be greater than 4096.
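The per-node mapping described above can be sketched in a few lines. This is only an illustrative model, not Neutron code; the node names, GRE IDs, and allocation order below are made up for the example:

```python
# Illustrative sketch: a globally unique GRE tunnel ID maps to a
# *different*, locally significant VLAN tag on each node, because
# each node allocates its local VLAN tags independently.

class Node:
    def __init__(self, name):
        self.name = name
        self.vlan_by_gre = {}   # GRE tunnel ID -> local VLAN tag
        self.next_vlan = 1      # local VLAN tags allocated per node

    def local_vlan_for(self, gre_id):
        # Allocate a local VLAN tag the first time this tunnel ID is seen.
        if gre_id not in self.vlan_by_gre:
            self.vlan_by_gre[gre_id] = self.next_vlan
            self.next_vlan += 1
        return self.vlan_by_gre[gre_id]

compute1, compute2 = Node("compute1"), Node("compute2")

# compute1 sees tenant networks (GRE IDs) in one order, compute2 in another.
for gre_id in (101, 102, 103):
    compute1.local_vlan_for(gre_id)
for gre_id in (103, 101):
    compute2.local_vlan_for(gre_id)

# The same GRE ID gets different local VLAN tags on different nodes.
print(compute1.vlan_by_gre[101], compute2.vlan_by_gre[101])  # 1 2
```

The tunnel ID is what identifies the tenant network on the wire; the VLAN tag only has meaning inside one node's Open vSwitch bridges.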

View also:

1. http://assafmuller.com/2013/10/14/gre-tunnels-in-openstack-neutron/
2. http://whatis.techtarget.com/definition/VXLAN

In summary, with VXLAN:

1. The 4094-VLAN limit for ensuring tenant privacy is addressed by the VXLAN 24-bit VNI construct, which enables 16 million isolated tenant networks. VMs need to be on the same VNI to communicate with each other. This delivers the isolation demanded in a multi-tenant architecture by keeping associated VMs within the same VNI. The 24-bit segment ID in each frame differentiates individual logical networks, so millions of isolated Layer 2 VXLAN networks can co-exist on a common Layer 3 infrastructure.

2. A multi-tenant cloud infrastructure is now capable of delivering "elastic" capacity by enabling additional application VMs to be rapidly provisioned in a different L3 network, while communicating as if they were on a common L2 subnet.

3. Overlay networking overcomes the limits of STP and creates very large network domains where VMs can be moved anywhere. This also lets IT teams reduce over-provisioning to a much lower percentage, which can save a lot of money. For example, by deploying one extra server per 50 servers, over-provisioning is reduced to two percent (from an estimated 10 percent before). As a result, data centers can save as much as eight percent of their entire IT infrastructure budget with VXLAN overlay networking.

4. Overlay networking can make hybrid cloud deployments simpler to deploy because it leverages the ubiquity of IP for data flows over the WAN.

5. VMs are uniquely identified by the combination of their MAC address and VNI. It is therefore acceptable for VMs to have duplicate MAC addresses, as long as they are in different tenant networks. This simplifies the administration of multi-tenant customer networks for the cloud service provider.

6. Finally, VXLAN is an evolutionary solution, already supported by switches and driven by software changes. It requires no "forklift" hardware upgrades, which eases and hastens adoption of the technology.

For GRE, view http://docs.openstack.org/security-guide/content/ch032_networking-best-practices.html

Network tunneling encapsulates each tenant/network combination with a unique "tunnel-id" that is used to identify the network traffic belonging to that combination. The tenant's L2 network connectivity is independent of physical locality or underlying network design. By encapsulating traffic inside IP packets, that traffic can cross Layer-3 boundaries, removing the need for preconfigured VLANs and VLAN trunking. Tunneling also adds a layer of obfuscation to network data traffic, reducing the visibility of individual tenant traffic from a monitoring point of view.
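The address-space arithmetic behind points 1 and 5 above can be checked in a few lines. The forwarding table here is a toy model for illustration, not a real VTEP implementation, and the MAC/VTEP values are made up:

```python
# VLAN IDs are 12 bits, VXLAN VNIs are 24 bits.
vlan_ids = 2 ** 12    # 4096 values (4094 usable; 0 and 4095 are reserved)
vxlan_vnis = 2 ** 24  # 16,777,216 -> the "16 million" segments

# Point 5: a VM is identified by the (VNI, MAC) pair, so duplicate MACs
# are fine as long as they live in different tenant networks (VNIs).
# Toy forwarding table keyed by (VNI, MAC); entries do not collide.
forwarding = {
    (1001, "fa:16:3e:00:00:01"): "vtep-10.0.0.2",  # tenant A
    (2002, "fa:16:3e:00:00:01"): "vtep-10.0.0.3",  # tenant B, same MAC
}

print(vlan_ids, vxlan_vnis, len(forwarding))  # 4096 16777216 2
```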

Also notice (see http://bderzhavets.blogspot.com/2014/11/ovs-ofctl-dump-flows-br-tun-vxlan.html):

In the case of GRE (or VXLAN) tenant L2 networks, the VLAN tags you see in the output of "ovs-vsctl show" and of "ovs-ofctl dump-flows br-tun" (mod_vlan_vid) are only locally significant. These VLAN tags are not real L2 tags added to the frames leaving on the physical interface. They are only used by Open vSwitch to separate traffic on br-int, so that the tap interfaces corresponding to different Neutron subnets do not see each other's traffic.
