# What is the difference between tenant_network_types and tunnel_type?

Initially I thought the ML2 tenant_network_type variable defines the type of internal segregation (VLAN, GRE, VXLAN) for networks belonging to different tenants using the same compute node (i.e., the network between the VMs and br-int). Similarly, I thought that tunnel_type specifies the type of tunnel (if any) between the compute node and the network node that carries traffic between VM instances running on different compute nodes. This would be the network between two br-int devices located on separate nodes, where the tunnel ID provides external segregation.

However, in most examples I've seen tenant_network_types and tunnel_type are set to the same value which leads me to believe I am perhaps misunderstanding how they should be used.

a) Can something else than VLAN be used for internal segregation on a compute node (on the integration bridge) and what are the configuration variables controlling that?

b) Can external segregation (i.e., the tunnel type between compute node and network node) be defined independently of internal segregation, and what are the configuration variables controlling that?



First, I'll try to address b):

Can external segregation (i.e., the tunnel type between compute node and network node) be defined independently of internal segregation, and what are the configuration variables controlling that?


Per daragh-oreilly:
If tenant_network_types=gre, then when a tenant creates a network they will get a GRE-backed network. But the provider/admin can still create VLAN-backed networks by specifying the provider attributes when creating the network with the CLI:

neutron net-create net1 --tenant-id $TENANT_ID --provider:network_type vlan --provider:physical_network vlan_net1 --provider:segmentation_id 100


The last command will succeed if ml2_conf.ini has type_drivers = gre,vlan.
So ml2_conf.ini may contain

type_drivers = gre,vlan
tenant_network_types = gre,vlan
tunnel_types = gre
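Put together, a sketch of the relevant ml2_conf.ini sections might look like the following (the section and option names are the standard ML2/OVS-agent ones; the ranges and the physical network label vlan_net1 are example values, not recommendations):

```ini
[ml2]
# Which segment types the server can manage at all.
type_drivers = gre,vlan
# What a plain tenant "net-create" gets, tried in order.
tenant_network_types = gre,vlan
mechanism_drivers = openvswitch

[ml2_type_gre]
# Pool of GRE tunnel IDs handed to tenant networks.
tunnel_id_ranges = 1:1000

[ml2_type_vlan]
# Physical network label and VLAN range usable for provider/tenant VLANs.
network_vlan_ranges = vlan_net1:100:199

[agent]
# Tunnel encapsulations the OVS agent will build between nodes.
tunnel_types = gre
```

With this configuration a tenant's net-create yields a GRE network (first entry in tenant_network_types), while the admin's provider-attribute command above can still create a VLAN network on vlan_net1.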


You stated

Initially I thought the ML2 tenant_network_type variable defines the type of internal segregation (VLAN, GRE, VXLAN) for networks belonging to different tenants using the same compute node (i.e., the network between the VMs and br-int)


I think that

tenant_network_types actually defines the type of communication between compute and network nodes for all tenants on the system. The answer to b) shows that even if tenant_network_types = vxlan (or gre) in ml2_conf.ini, a tenant network of VLAN type can still be created by the admin.


Second, for a): by default the following is supposed to happen (I am unaware of a different scheme).

To my knowledge, VXLAN and GRE transmissions leaving a compute node have their local VLAN tag stripped and the tenant's VXLAN (GRE) ID assigned. On arrival at the network node, the VXLAN (GRE) ID is stripped and a local VLAN tag is assigned, and vice versa. These VLAN tags are only locally significant; they separate traffic on the local br-int. Across the whole landscape, it is the VXLAN (GRE) ID obtained by the tenant that identifies its private network.
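The tag rewriting described above can be sketched as a toy model. This is purely illustrative (class and method names are my own invention): in a real deployment the Open vSwitch agent installs OpenFlow rules on br-tun to do this translation, not Python code.

```python
class NodeTagMap:
    """Toy model of one node's local-VLAN <-> tunnel-ID translation."""

    def __init__(self):
        self._local_by_segment = {}  # segmentation (tunnel) ID -> local VLAN tag
        self._segment_by_local = {}  # local VLAN tag -> segmentation ID
        self._next_tag = 1           # local tags only need to be unique per node

    def local_tag(self, segment_id):
        """Allocate (or reuse) the node-local VLAN tag for a tunnel ID."""
        if segment_id not in self._local_by_segment:
            tag = self._next_tag
            self._next_tag += 1
            self._local_by_segment[segment_id] = tag
            self._segment_by_local[tag] = segment_id
        return self._local_by_segment[segment_id]

    def outbound(self, local_tag):
        """Frame leaving the node: strip the local tag, add the tunnel ID."""
        return self._segment_by_local[local_tag]

    def inbound(self, segment_id):
        """Frame arriving at the node: strip the tunnel ID, add a local tag."""
        return self.local_tag(segment_id)


compute = NodeTagMap()
network = NodeTagMap()

# A tenant network with (hypothetical) VXLAN ID 5001: the VLAN tag is
# meaningful only on its own node, while the segmentation ID is the same
# across the whole landscape.
tag_on_compute = compute.local_tag(5001)
vni_on_wire = compute.outbound(tag_on_compute)   # 5001 travels in the tunnel
tag_on_network = network.inbound(vni_on_wire)    # may differ from tag_on_compute
```

The point of the sketch is that the two nodes may pick different local tags for the same network; only the tunnel ID on the wire is globally meaningful.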


Is it fair to assume (based on your last comment) that a compute node cannot host more than about 4094 networks? I mean, each compute node has a single integration bridge (br-int), and because VLAN IDs are used for segregation, we can have at most 4096 of them (minus the reserved IDs 0 and 4095).

( 2016-01-15 09:26:35 -0600 )

This question has a long history on this site.

( 2016-01-15 10:05:34 -0600 )

Do I understand correctly that when GRE/VXLAN tenant networks are used, the "local" VLAN IDs are not really VLAN IDs anymore? I mean, these IDs are not inserted into the Ethernet frames, and they can be larger than 12-bit VLAN IDs. OVS (br-int) ports with the same ID can exchange traffic.

( 2016-01-15 10:39:35 -0600 )

"Local" VLAN IDs are not really VLAN IDs anymore? Yes, in the sense that they never appear on the physical wire.
But I am not sure about their size. However, you might need enough compute and network nodes on the deployed landscape to accommodate 10,000 VXLAN IDs. Every tenant has its own qdhcp namespace.

( 2016-01-15 13:48:07 -0600 )

Then 1 VLAN tag will be allocated per network. If its size is still 12 bits, you might need 3 network nodes.

( 2016-01-15 13:54:35 -0600 )