Neutron/OVS VLAN-tagging of DHCP requests?
Hi guys,
I'm having trouble figuring out why a Cirros test VM on my compute node is not obtaining a DHCP address from my controller node, over a Neutron/OVS/VLAN arrangement. So, the scenario in more detail...
2x Dell PowerEdge R420s running stock Ubuntu 13.10 Saucy (OpenStack Havana), plus 2x NetGear GS724 switches.
The primary switch carries the 192.168.1.0/24 traffic, with the gateway router at 192.168.1.1. Both Dells are connected to it via their primary (em1) interface.
The secondary switch carries the 10.0.0.0/24 traffic, with no gateway. Both Dells are connected to it via their secondary (em2) interface.
First DELL is called 'controller1'. Second DELL is called 'compute1'. I'll focus on the compute node's configuration for now, as I'm having trouble tracing where the DHCP request packet goes within this domain.
First, the underlying /etc/network/interfaces network config...
# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto em1
iface em1 inet static
    address 0.0.0.0

# The primary network interface
auto br-ex
iface br-ex inet static
    address 192.168.1.11
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1

auto em2
iface em2 inet static
    address 0.0.0.0

auto br-int
iface br-int inet static
    address 10.0.0.11
    netmask 255.255.255.0
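(Nothing OpenStack-specific here, but for completeness, a quick way to confirm the bridges came up with those addresses after ifup/reboot:)

ip addr show br-ex     # expect 192.168.1.11/24
ip addr show br-int    # expect 10.0.0.11/24
ip route               # default via 192.168.1.1 dev br-ex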
OVS bridges configured as per 'ovs-vsctl show'...
b8840243-e3a1-43de-badb-cfbe0ce1405c
    Bridge br-int
        Port int-br-ex
            Interface int-br-ex
        Port "em2"
            Interface "em2"
        Port phy-br-int
            Interface phy-br-int
        Port br-int
            Interface br-int
                type: internal
        Port "tapc015b393-e7"
            tag: 3
            Interface "tapc015b393-e7"
        Port int-br-int
            Interface int-br-int
    Bridge br-ex
        Port phy-br-ex
            Interface phy-br-ex
        Port "em1"
            Interface "em1"
        Port br-ex
            Interface br-ex
                type: internal
    ovs_version: "1.10.2"
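(The "tag: 3" on the tap port can also be queried directly with a stock OVS command, which is how I've been double-checking it:)

ovs-vsctl get Port tapc015b393-e7 tag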
The tap interface belongs to my 'test1' Cirros guest, and I can see DHCP request packets on it if I tcpdump it as the VM boots.
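To see whether the request makes it out towards the physical NIC, I've been watching em2 with tcpdump; the -e flag prints the link-level header, so any 802.1Q tag and its VLAN ID are visible (this assumes em2 really is the data-network interface, as per my config above):

tcpdump -e -n -i em2 'vlan and (port 67 or port 68)'
tcpdump -e -n -i em2 'port 67 or port 68'    # untagged, in case the tag is never applied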
I have the OpenVSwitch (OVS) plugin configured as follows:
[ovs]
tenant_network_type = vlan
network_vlan_ranges = physnet2:1000:2999
local_ip = 10.0.0.11
bridge_mappings = physnet2:br-int

[agent]

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

[database]
connection = (snipped)
FTR, the controller node is the same, with the local_ip being 10.0.0.11
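(In case it matters: one way I know of to confirm the agent actually picked up bridge_mappings is to ask Neutron for the agent's reported configuration, assuming admin credentials are loaded; the 'configurations' field of agent-show should echo the mappings back:)

neutron agent-list
neutron agent-show <agent-id>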
On Neutron, there is a virtual 'testnet':
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | c7497b77-a716-4ad6-8d44-3fa9b2dcfaf0 |
| name                      | testnet                              |
| provider:network_type     | vlan                                 |
| provider:physical_network | physnet2                             |
| provider:segmentation_id  | 1001                                 |
| router:external           | False                                |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   | 8821440b-839b-4e91-85ca-1f2651ed6896 |
| tenant_id                 | 83b3ca4a4ec94070904d5112aeb1baab     |
+---------------------------+--------------------------------------+
And a virtual 'testsubnet':
+------------------+--------------------------------------------------+
| Field            | Value                                            |
+------------------+--------------------------------------------------+
| allocation_pools | {"start": "192.168.9.2", "end": "192.168.9.254"} |
| cidr             | 192.168.9.0/24                                   |
| dns_nameservers  | 8.8.8.8                                          |
| enable_dhcp      | True                                             |
| gateway_ip       | 192.168.9.1                                      |
| host_routes      |                                                  |
| id               | 8821440b-839b-4e91-85ca-1f2651ed6896             |
| ip_version       | 4                                                |
| name             | testsubnet                                       |
| network_id       | c7497b77-a716-4ad6-8d44-3fa9b2dcfaf0             |
| tenant_id        | 83b3ca4a4ec94070904d5112aeb1baab                 |
+------------------+--------------------------------------------------+
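Since enable_dhcp is True, my understanding is that the dhcp agent (on the controller, in my setup) should have created a qdhcp namespace for testnet with dnsmasq holding a 192.168.9.x address; this is how I've been checking that side (assuming Havana's qdhcp-<network-id> naming):

ip netns list
ip netns exec qdhcp-c7497b77-a716-4ad6-8d44-3fa9b2dcfaf0 ip addr
ps aux | grep dnsmasq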
My confusion arises from not being quite clear about where the VLAN ID gets set, and onto what exactly, so I can't trace the packet beyond the tap interface.
It seems I am not quite grokking the relationship between the 'provider:segmentation_id' and the OVS 'tag' on the port. The segmentation_id seems to have been correctly taken from ...
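As far as I can tell from poking around, the segmentation_id (1001) is the VLAN that should appear on the physical wire, while the 'tag: 3' is a locally significant VLAN the agent assigns per network on br-int, and the agent is supposed to install flows that rewrite one to the other on the way in and out. If that's right, the rewrite should be visible as mod_vlan_vid flows on br-int and on whichever bridge is mapped to physnet2:

ovs-ofctl dump-flows br-int | grep mod_vlan_vid
ovs-ofctl dump-flows br-ex | grep mod_vlan_vid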
I have the same problem and the same question. Our setup is a little different: a XenServer virtualization host, a controller node (keystone, glance, nova-api, etc.), a compute node (nova-compute), and a network node (neutron). I want to manually configure the VLAN tag on the CirrOS test VM for testing...
...but I can't find any docs on CirrOS network configuration. I think I could make some progress with that documentation in hand. I'm also looking into other images to test with that natively support VLANs.
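For what it's worth, if the guest kernel and its busybox build support 802.1q (I haven't verified this on CirrOS), the usual iproute2 commands ought to work from the console; the interface name and VLAN ID below are just this thread's values:

sudo ip link add link eth0 name eth0.1001 type vlan id 1001
sudo ip link set eth0.1001 up
sudo udhcpc -i eth0.1001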
Ultimately, our situation is this bug: https://bugs.launchpad.net/neutron/+bug/1268955
Your bridge_mappings is using br-int; it should use the bridge that maps to the physical network, br-ex I believe in your case. Also, br-int should not be configured with an IP address, and why does it have em2 on it? I'm not keen on configuring br-ex with an IP either. Which guide told you to do all this?
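To make that concrete, the usual Havana layout looks roughly like this (br-em2 is just an illustrative name; the point is that physnet2 maps to whichever bridge actually holds the physical data NIC, not to br-int):

ovs-vsctl add-br br-em2
ovs-vsctl del-port br-int em2
ovs-vsctl add-port br-em2 em2

# and in the OVS plugin config on both nodes:
# [ovs]
# bridge_mappings = physnet2:br-em2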
Thanks, darrah-oreilly. You helped nudge me in the right direction there.