I installed the OVS plugin with both VXLAN and GRE. The two work similarly; switching between GRE and VXLAN is done entirely in /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini. The important part is the [agent] section, because it is missing from the OpenStack documentation.

[ovs]
tunnel_type = vxlan  # or gre
tenant_network_type = vxlan  # or gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 10.0.13.18

[agent]
tunnel_types = gre  # or vxlan

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
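Since switching between GRE and VXLAN only touches the three tunnel keys above, the edit can be scripted. A minimal sketch using Python's configparser (the switch_tunnel function and the key list are illustrative, based on the config shown here, not an official Neutron tool):

```python
import configparser

# Keys that differ between a GRE and a VXLAN setup, per the config above.
TUNNEL_KEYS = [
    ("ovs", "tunnel_type"),
    ("ovs", "tenant_network_type"),
    ("agent", "tunnel_types"),
]

def switch_tunnel(path, proto):
    """Rewrite the OVS plugin ini so all tunnel keys use proto ('gre' or 'vxlan')."""
    if proto not in ("gre", "vxlan"):
        raise ValueError("proto must be 'gre' or 'vxlan'")
    cfg = configparser.ConfigParser()
    cfg.read(path)
    for section, key in TUNNEL_KEYS:
        if cfg.has_section(section):
            cfg.set(section, key, proto)
    with open(path, "w") as f:
        cfg.write(f)
```

You would run it as switch_tunnel('/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini', 'vxlan') and then restart the OVS agent so the change takes effect.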

My question is how to set up the same configuration with the ML2 plugin. See my question: https://ask.openstack.org/en/question/6695/ml2-neutron-plugin-installation-and-configuration/


Today we successfully ran the ML2 plugin with VXLAN on Ubuntu. No additional package needs to be installed, because ML2 is integrated into neutron-server. We have 1 controller, 1 network, and 2 compute nodes; for the ML2 plugin, only the controller node, where neutron-server runs, needs to be configured.

1) Edit /etc/default/neutron-server

# defaults for neutron-server
# path to config file corresponding to the core_plugin specified in
# neutron.conf
#NEUTRON_PLUGIN_CONFIG="/etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini"
NEUTRON_PLUGIN_CONFIG="/etc/neutron/plugins/ml2/ml2_conf.ini"
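/etc/default/neutron-server is a plain KEY="value" shell-defaults file, so the change amounts to replacing one line. A sketch of how that edit could be automated (the set_plugin_config helper is illustrative, not part of any package):

```python
def set_plugin_config(defaults_path, ini_path):
    """Point NEUTRON_PLUGIN_CONFIG at ini_path, commenting out any old setting."""
    lines = []
    with open(defaults_path) as f:
        for line in f:
            if line.startswith("NEUTRON_PLUGIN_CONFIG="):
                lines.append("#" + line)  # keep the previous value as a comment
            else:
                lines.append(line)
    lines.append('NEUTRON_PLUGIN_CONFIG="%s"\n' % ini_path)
    with open(defaults_path, "w") as f:
        f.writelines(lines)
```

Remember to restart neutron-server afterwards so the new plugin config is loaded.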

2) Set service_plugins in /etc/neutron/neutron.conf

# Advanced service modules
# service_plugins =
service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin,neutron.services.firewall.fwaas_plugin.FirewallPlugin

3) Create the file /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
# (ListOpt) List of network type driver entrypoints to be loaded from
# the neutron.ml2.type_drivers namespace.
#
# type_drivers = local,flat,vlan,gre,vxlan
# Example: type_drivers = flat,vlan,gre,vxlan

type_drivers = vxlan,local

# (ListOpt) Ordered list of network_types to allocate as tenant
# networks. The default value 'local' is useful for single-box testing
# but provides no connectivity between hosts.
#
# tenant_network_types = local
# Example: tenant_network_types = vlan,gre,vxlan
tenant_network_types = vxlan

# (ListOpt) Ordered list of networking mechanism driver entrypoints
# to be loaded from the neutron.ml2.mechanism_drivers namespace.
# mechanism_drivers =
# Example: mechanism_drivers = arista
# Example: mechanism_drivers = cisco,logger
mechanism_drivers = openvswitch

[ml2_type_flat]
# (ListOpt) List of physical_network names with which flat networks
# can be created. Use * to allow flat networks with arbitrary
# physical_network names.
#
# flat_networks =
# Example:flat_networks = physnet1,physnet2
# Example:flat_networks = *

[ml2_type_vlan]
# (ListOpt) List of <physical_network>[:<vlan_min>:<vlan_max>] tuples
# specifying physical_network names usable for VLAN provider and
# tenant networks, as well as ranges of VLAN tags on each
# physical_network available for allocation as tenant networks.
#
# network_vlan_ranges =
# Example: network_vlan_ranges = physnet1:1000:2999,physnet2

[ml2_type_gre]
# (ListOpt) Comma-separated list of <tun_min>:<tun_max> tuples enumerating
# ranges of GRE tunnel IDs that are available for tenant network allocation.
# tunnel_id_ranges =
#tunnel_id_ranges = 1:1000

[ml2_type_vxlan]
# (ListOpt) Comma-separated list of <vni_min>:<vni_max> tuples enumerating
# ranges of VXLAN VNI IDs that are available for tenant network allocation.
#
# vni_ranges =
#vni_ranges=400:500

vni_ranges=1:1000

# (StrOpt) Multicast group for the VXLAN interface. When configured, will
# enable sending all broadcast traffic to this multicast group. When left
# unconfigured, will disable multicast VXLAN mode.
#
# vxlan_group =
# Example: vxlan_group = 239.1.1.1

After these settings, everything should work!
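The tunnel_id_ranges and vni_ranges options share the same comma-separated <min>:<max> tuple format described in the comments above. A small sketch of how such a range list could be parsed and validated (the parse_ranges helper is illustrative, not Neutron's actual implementation):

```python
def parse_ranges(spec):
    """Parse a comma-separated list of <min>:<max> tuples, e.g. '1:1000'.

    Returns a list of (min, max) integer pairs, rejecting inverted ranges.
    """
    ranges = []
    for part in spec.split(","):
        lo, _, hi = part.strip().partition(":")
        lo, hi = int(lo), int(hi)
        if lo > hi:
            raise ValueError("range minimum exceeds maximum: %s" % part)
        ranges.append((lo, hi))
    return ranges
```

For example, parse_ranges("1:1000") yields a single pair covering the VNI range configured above, while a malformed value like "10:5" is rejected up front.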

