
Can we create only 4094 networks using OVS?

asked 2014-08-14 03:53:35 -0500

Priya

updated 2014-08-19 01:04:12 -0500

When a network is created, each tap interface on br-int is associated with a VLAN on OVS. This suggests a one-to-one mapping between VLAN IDs and GRE/VXLAN tunnel IDs.

In detail: when I create a network, a tap interface is created on br-int in the neutron node with a VLAN associated with it. So if I create 100 networks, 100 tap interfaces will be created, each associated with its own VLAN ID. If this is how it works, can we create only 4096 networks, since on creating the 4097th network the VLAN IDs would be exhausted?

Please explain what happens specifically on the neutron node, because this won't be an issue on a compute node.



If tenant_network_types = vxlan (or gre), then the tap interface created on br-int doesn't get a VLAN ID that uniquely identifies your private subnet. There is no one-to-one mapping between VLAN IDs and GRE/VXLAN tunnel IDs.

dbaxps ( 2014-11-26 02:58:27 -0500 )

VLAN tags are not unique across hosts. Since VLAN has a limit of 4096, you can have only that many networks on each node, and you will hit other limitations before reaching 4096 networks. Also, a VLAN tag only needs to be unique per host; it is not sent through the tunnels. I have a detailed answer, but it's awaiting approval.

anantha ( 2014-12-08 04:18:37 -0500 )

Why isn't it an issue on the compute node? It will be an issue. As mentioned, in a real production environment you will hit other limits before hitting the VLAN limit of 4096.

anantha ( 2014-12-09 06:05:06 -0500 )

8 answers


answered 2014-11-25 17:50:46 -0500

Nitin Gupta


I have been wondering about the same question you asked, but I have not found the answer yet. If you have an answer, please update this post.

I have been searching for the answer for the last few days but could not find a satisfactory one.

Thanks, Nitin


answered 2014-08-16 13:18:36 -0500

dbaxps

updated 2014-11-26 15:30:15 -0500

In the case of GRE or VXLAN networks, 4096 is not the limit. Open vSwitch supports tunneling with the VXLAN and GRE encapsulation protocols.

In summary, with VXLAN:

1. The 4094-VLAN limit is addressed by the VXLAN 24-bit VNI construct, which enables 16 million isolated tenant networks. VMs need to be on the same VNI to communicate with each other; keeping associated VMs within the same VNI delivers the isolation demanded in a multi-tenant architecture. The VXLAN 24-bit segment ID in each frame differentiates individual logical networks, so millions of isolated Layer 2 VXLAN networks can coexist on a common Layer 3 infrastructure.
2. A multi-tenant cloud infrastructure is now capable of delivering "elastic" capacity service by enabling additional application VMs to be rapidly provisioned in a different L3 network, communicating as if they were on a common L2 subnet.
3. Overlay networking overcomes the limits of STP and creates very large network domains where VMs can be moved anywhere. It also enables IT teams to reduce over-provisioning to a much lower percentage, which can save a lot of money: for example, with one extra server deployed per 50 servers, over-provisioning is reduced to two percent (from an estimated 10 percent). As a result, data centers can save as much as eight percent of their entire IT infrastructure budget with VXLAN overlay networking.
4. Overlay networking can make hybrid cloud deployments simpler to deploy because it leverages the ubiquity of IP for data flows over the WAN.
5. VMs are uniquely identified by the combination of their MAC address and VNI, so duplicate MAC addresses are acceptable as long as they are in different tenant networks. This simplifies administration of multi-tenant customer networks for the cloud service provider.
6. Finally, VXLAN is an evolutionary solution, already supported by switches and driven by software changes rather than "forklift" hardware upgrades, easing and hastening adoption of the technology.
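The scale difference between the two ID spaces can be checked with simple arithmetic (a quick sketch in Python; the 4094 figure comes from the two 802.1Q-reserved VLAN IDs, 0 and 4095):

```python
# 802.1Q VLAN ID field: 12 bits; IDs 0 and 4095 are reserved,
# leaving 4094 usable tags per L2 domain (and per node here).
usable_vlans = 2**12 - 2

# VXLAN VNI field: 24 bits, giving roughly 16 million segment IDs.
vxlan_vnis = 2**24

print(usable_vlans)  # 4094
print(vxlan_vnis)    # 16777216
```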

2. GRE:

Network tunneling encapsulates each tenant/network combination with a unique "tunnel-id" that is used to identify the network traffic belonging to that combination. The tenant's L2 network connectivity is independent of physical locality or underlying network design. By encapsulating traffic inside IP packets, that traffic can cross Layer-3 boundaries, removing the need for preconfigured VLANs and VLAN trunking. Tunneling adds a layer of obfuscation to network data traffic, reducing the visibility of individual tenant traffic from a monitoring point of view.

Also notice:

In the case of GRE (VXLAN) tenant L2 networks, the VLAN tags you see in the output of "ovs-vsctl show" and "ovs-ofctl dump-flows br-tun" (mod_vlan_vid) are only locally significant. These VLAN tags are not real L2 tags added to the frames leaving on the physical interface; they are only used by Open vSwitch to separate traffic ...


I am not sure my question has been answered. Let me make it quite simple.

When I create a network, a tap interface is created on br-int in the neutron node with a VLAN associated with it. So if I create 100 networks, 100 tap interfaces will be created, each associated with its own VLAN ID. If this is how it works, can we create only 4096 networks, since on creating the 4097th network the VLAN IDs would be exhausted?

Priya ( 2014-08-18 03:57:40 -0500 )

GRE: across compute nodes we use the GRE tunnel ID; each tenant network is provisioned both a GRE tunnel ID and a locally significant VLAN tag.

dbaxps ( 2014-08-18 05:31:06 -0500 )

Priya, I think I got your point. It would imply that on ONE compute host, all VMs running on it can't be connected to more than ~4k networks. But in total (across all compute hosts) you can have more: the br-int tags are "local" to the compute host (but better if a more knowledgeable person answers).

T u l ( 2014-08-18 06:07:44 -0500 )

T u l, thanks for your answer, but I found it too specific to the compute node; I need an answer specific to the neutron node. With a single controller (single database), multiple neutron nodes and multiple compute nodes, the VLANs could easily be exhausted, since on each network creation the TAP interface attached to br-int gets associated with a VLAN. Hence we could have only ~4095 networks in a particular region. If there are 5000 tenants in this region, the networks would be exhausted and ~905 tenants won't have networks created for them.

Priya ( 2014-08-18 07:50:58 -0500 )

Unfortunately the only thing I can (and will) do now is to vote up the question...

T u l ( 2014-08-18 08:07:07 -0500 )

answered 2014-08-16 02:49:21 -0500

T u l

I think this limitation is there if you use VLAN separation. If you use GRE or VXLAN it should not be the case anymore.


answered 2014-11-26 03:59:36 -0500

Priya


I still have not had my point clarified.

In the neutron node, the table-2 flows of br-tun are as follows:

 cookie=0x0, duration=41203.999s, table=2, n_packets=96352, n_bytes=12771283, idle_age=0, priority=1,tun_id=0x1 actions=mod_vlan_vid:2,resubmit(,10)
 cookie=0x0, duration=41204.321s, table=2, n_packets=16933, n_bytes=3090137, idle_age=112, priority=1,tun_id=0x2 actions=mod_vlan_vid:3,resubmit(,10)
 cookie=0x0, duration=41205.045s, table=2, n_packets=4, n_bytes=468, idle_age=10406, priority=1,tun_id=0x3 actions=mod_vlan_vid:4,resubmit(,10)

Please just let me know what actions=<val> will be if tun_id=0x1002.

Even the site you referred to states that there is a one-to-one mapping of tunnel ID and VLAN ID. The quote under the Segregation section is as follows:

"Across compute nodes we use the GRE tunnel ID. As discussed previously, each tenant network is provisioned both a GRE tunnel ID and a locally significant VLAN tag. That means that incoming traffic with a GRE tunnel ID is converted to the correct local VLAN tag as can be seen in table 2. The message is then forwarded to br-int already VLAN tagged and the appropriate check can be made."



You believe that mod_vlan_vid is (y+1) if tun_id=0x(y), per your report. Then see the same report here; it looks to be random.

dbaxps ( 2014-11-26 04:31:52 -0500 )

You expect me to answer actions=mod_vlan_vid:1003, but I am not at all sure that on the Fedora 20 OVS release the tun_id => mod_vlan_vid behaviour will match yours.

dbaxps ( 2014-11-26 04:35:54 -0500 )

I think you are aware that there is a neutron node in an OpenStack setup. If it is present in your setup, please execute the command "ovs-ofctl dump-flows br-tun". You will see flows with "mod_vlan_vid:<value>". What does this mean? What is the maximum value of mod_vlan_vid?

Priya ( 2014-11-26 05:15:01 -0500 )

If we have more than 4093 networks on one neutron node, how does neutron work then?

Priya ( 2014-11-26 05:56:12 -0500 )

With an ML2 & OVS & VXLAN setup on the network node you may have 16 million isolated tenant networks. It's already mentioned in my answer.

dbaxps ( 2014-11-26 06:05:45 -0500 )

answered 2014-12-01 01:56:28 -0500

vthapar

1. Using VXLAN/GRE you can have more than 4094 networks, but you are still limited to 4094 per node. This is because the VNI or GRE tunnel ID is internally converted to a local-scope VLAN ID.
2. This limit means that on one compute node you cannot have more than 4094 unique networks.
3. It also means you can't have more than 4094 unique networks on a single network node [I believe this is what you call the neutron node in the question].
4. But you CAN have multiple compute and network nodes.

Also, I am assuming that by neutron node you mean the node running the L3 agent. Let me know if I am wrong about that. Hope this answers your question.
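The per-node behaviour described in points 1-4 can be illustrated with a toy model (a hypothetical Python sketch, not Neutron's actual allocator): each node keeps its own VNI-to-local-VLAN map, so the same VNI can receive different local tags on different nodes, and each node independently runs out at 4094.

```python
class NodeVlanMap:
    """Toy per-node allocator: maps a global tunnel ID (VNI) to a
    locally significant VLAN tag. Hypothetical illustration only."""
    MAX_LOCAL_VLANS = 4094  # usable 802.1Q tags per node

    def __init__(self):
        self.vni_to_vid = {}
        self.next_vid = 1

    def local_vid(self, vni):
        # Reuse the mapping if this node has already seen the VNI.
        if vni in self.vni_to_vid:
            return self.vni_to_vid[vni]
        if self.next_vid > self.MAX_LOCAL_VLANS:
            raise RuntimeError("local VLAN pool exhausted on this node")
        self.vni_to_vid[vni] = self.next_vid
        self.next_vid += 1
        return self.vni_to_vid[vni]

node_a, node_b = NodeVlanMap(), NodeVlanMap()
node_b.local_vid(99)             # node B allocates a tag for another VNI first
vid_a = node_a.local_vid(10001)  # VNI 10001 -> local VID 1 on node A
vid_b = node_b.local_vid(10001)  # VNI 10001 -> local VID 2 on node B
print(vid_a, vid_b)  # 1 2
```

The mapping is consistent within one node but arbitrary across nodes, which is exactly why the 4094 ceiling applies per node rather than to the whole deployment.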



dbaxps ( 2014-12-01 02:03:56 -0500 )

I believe the original question is not how a VNI or GRE ID greater than 4094 is supported [explained by local scope], but how one node will support more than 4094 networks when each network maps to a VLAN ID. Locally significant VLANs mean that VNI 10001 can map to VID 1 on one node and VID 101 on another node.

vthapar ( 2014-12-01 03:49:54 -0500 )


The tag in the case of GRE (VXLAN) tenant L2 subnets is not a real VLAN ID (a 12-bit segment extending the Ethernet frame); it is just an internal OVS number used to split traffic on br-int.

dbaxps ( 2014-12-01 04:12:29 -0500 )

Yes vthapar, you got my question right. I saw your answers; I have a doubt based on your answer.

Priya ( 2014-12-02 00:58:27 -0500 )

I have not worked with two or more network (neutron) nodes. Based on the OpenStack design, a tunnel is created on each network creation. If I have two neutron nodes and create a network, won't a tunnel be created from both network nodes to the associated compute nodes with the same tun_id?

Priya ( 2014-12-02 01:01:43 -0500 )

answered 2014-12-08 04:22:54 -0500

anantha

You cannot have more than 4096 VLANs on a single node, as explained by @vthapar.

Neutron uses OVS VLANs and tunnels to offer tenant isolation.

A VLAN ID is a layer-2 tag with a size of 12 bits, hence 4096 values, and it is not unique across hosts.

Tunnels (GRE, VXLAN), which operate at layer 3, use 24-bit IDs, hence 2^24 values (a large number) that are unique across hosts, so you can have that many tenant networks. OVS then sends L2 frames over L3.

Each tenant network on a compute node has a unique VLAN tag, and it will differ from what you see on the network node for the same tenant. So if you create multiple network nodes, the VLAN tag for a given tenant on one network node might be different on another network node, or it might even be the same, since the tag is only applicable to that node.

In my environment, my compute node uses VLAN 3 to isolate tenant A; the corresponding VLAN tag is VLAN 2 on network node 1 and also VLAN 2 on the second network node.

Since these tags are stripped off before traffic is sent to the physical network, it does not matter which tag is used locally. The purpose is to isolate each tenant's networks created on the node.
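The stripping behaviour can be modelled the same way (again a hypothetical sketch, not actual OVS code; the table contents are made up): the local tag exists only inside a node. On egress it is replaced by the tunnel ID, and on ingress the receiving node translates the tunnel ID back to its own local tag.

```python
# Hypothetical model of the tag translation described above.
# Per-node table: global tunnel ID -> locally significant VLAN tag.
node1_table = {0x1: 2}   # on node 1, tun_id 0x1 maps to local VID 2
node2_table = {0x1: 5}   # the same network may use a different VID elsewhere

def encapsulate(frame, local_vid, table):
    """Egress: strip the local tag and look up the tunnel ID for it."""
    vid_to_tun = {vid: tun for tun, vid in table.items()}
    return {"tun_id": vid_to_tun[local_vid], "payload": frame}  # no VLAN on the wire

def decapsulate(packet, table):
    """Ingress: translate tun_id back to this node's local tag."""
    return table[packet["tun_id"]], packet["payload"]

pkt = encapsulate("ethernet-frame", 2, node1_table)
vid, frame = decapsulate(pkt, node2_table)
print(pkt["tun_id"], vid)  # 1 5
```

The packet crossing the wire carries only the tunnel ID, which is why the local tags never conflict between nodes.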


answered 2014-12-08 04:39:55 -0500

Prateek K


From what I can see, you are trying to understand why internal tags are set in br-int for a particular VLAN?

If I am right, the answer is that whenever a VLAN network is created, say with segmentation ID 1002, an internal tag, say 2, is allocated for that network. The design is such that data arriving from the data network is tagged; say it is data for a VM tagged as 1002.

When it reaches br-int, there are flow rules according to which the VLAN tag 1002 is removed and the untagged data is sent to the port that is internally tagged as 2.

I hope this clears up your confusion.



answered 2014-12-08 04:40:38 -0500

kevin

updated 2014-12-08 04:41:38 -0500

VLAN tags are not unique across hosts; only the tunnel ID needs to be unique.

You can have VLAN 2 on network node 1 and on network node 2 for the same network, but you cannot reuse the same tunnel ID.



I guess he is correct.

anantha ( 2014-12-09 02:24:14 -0500 )



Seen: 2,893 times

Last updated: Dec 08 '14