How does a compute node get an IP from dnsmasq with the Neutron Linux bridge plugin?
Hi,
I am trying to set up a multi-node stack (controller and Neutron in one node, compute in another node). I am using the Linux bridge plugin for my Neutron setup.
Currently I am using devstack to stack my nodes. Below are my local.conf files:
# controller + neutron
[[local|localrc]]
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
ADMIN_PASSWORD=supersecret
MYSQL_PASSWORD=iheartdatabases
RABBIT_PASSWORD=flopsymopsy
SERVICE_PASSWORD=iheartksl
SERVICE_TOKEN=xyzpdqlazydog
Q_PLUGIN=ml2
Q_AGENT=linuxbridge
RABBIT_HOST=10.0.1.2
HOST_IP=10.0.1.2
disable_service n-cpu
FLOATING_RANGE=10.0.2.0/24
VLAN_INTERFACE=eth1
FLAT_INTERFACE=eth1
GUEST_INTERFACE=eth1
PUBLIC_INTERFACE=eth2
FIXED_RANGE=172.16.1.0/24
NETWORK_GATEWAY=172.16.1.1
FIXED_NETWORK_SIZE=256
[[post-config|/$Q_PLUGIN_CONF_FILE]]
[ml2]
tenant_network_types = vlan
type_drivers = local,flat,vlan,vxlan
mechanism_drivers = linuxbridge
[ml2_type_vlan]
network_vlan_ranges = vlan_net1:1000:1999
[linux_bridge]
# in VirtualBox these NICs need promiscuous mode enabled
physical_interface_mappings = vlan_net1:eth1
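For reference, promiscuous mode can be set from the VirtualBox side with something like the following (assuming the VM is named "controller" and eth1 is its second adapter; the same applies to the compute VM):

# VM must be powered off; allow-all passes all frames through to the guest
VBoxManage modifyvm "controller" --nicpromisc2 allow-all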
# compute
[[local|localrc]]
ADMIN_PASSWORD=supersecret
MYSQL_PASSWORD=iheartdatabases
RABBIT_PASSWORD=flopsymopsy
SERVICE_PASSWORD=iheartksl
SERVICE_TOKEN=xyzpdqlazydog
ENABLED_SERVICES=n-cpu,rabbit,neutron,q-agt
SERVICE_HOST=10.0.1.2
RABBIT_HOST=$SERVICE_HOST
Q_HOST=$SERVICE_HOST
HOST_IP=10.0.1.3
Q_PLUGIN=ml2
Q_AGENT=linuxbridge
VNCSERVER_LISTEN=0.0.0.0
VNCSERVER_PROXYCLIENT_ADDRESS=$HOST_IP
[[post-config|/$Q_PLUGIN_CONF_FILE]]
[ml2]
tenant_network_types = vlan
type_drivers = local,flat,vlan,vxlan
mechanism_drivers = linuxbridge
[ml2_type_vlan]
network_vlan_ranges = vlan_net1:1000:1999
[linux_bridge]
# in VirtualBox these NICs need promiscuous mode enabled
physical_interface_mappings = vlan_net1:eth1
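After stacking, a quick sanity check is to confirm that a Linux bridge agent registered for both hosts (run on the controller; output will vary):

neutron agent-list

and on each node verify the bridges the agent created:

brctl show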
And here is the network layout of the nodes (VMs), where eth1 is for the internal network and eth2 is for the external network (floating IPs, etc.).
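Roughly (addresses from the local.conf files above):

        eth2 (external / floating 10.0.2.0/24)
          |
+-----------------------+    eth1 (internal)    +------------------+
| controller + neutron  |-----------------------|     compute      |
|       10.0.1.2        |                       |     10.0.1.3     |
+-----------------------+                       +------------------+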
Everything seems to stack up fine: I can see the network namespaces created correctly and all the services are up.
However, when I try to boot an instance, it fails to lease an IP.
I think the problem is that the compute node can't talk to the qdhcp namespace where dnsmasq runs, so the instance can't get a lease.
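On the controller I verified the namespace and dnsmasq with something like this (the network ID is a placeholder):

ip netns list
sudo ip netns exec qdhcp-<network-id> ip addr
ps aux | grep dnsmasq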
Did I set up something wrong?
Can you post the console logs of the instance so we can see the failure? I would also recommend using the CirrOS test image for testing: http://docs.openstack.org/image-guide... Is there anything interesting in your nova and neutron logs when launching?
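For example, with the nova CLI (instance name is a placeholder):

nova console-log my-instance

With CirrOS the DHCP attempts show up near the end of the boot output.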
I don't have the full log anymore, but it fails when the VM tries to get the IP.
I don't use devstack much, but is 'FLAT_INTERFACE=eth1' an option for nova networking?
Use
tcpdump -i eth1 "port 67 or port 68"
on both the compute host and the controller to see if your DHCP requests are being properly routed through eth1. If they are, you can do the same inside the namespace for the DHCP agent. Are these physical nodes? If so, the switch port that eth1 is connected to will need to be a trunk port that can pass VLAN IDs 1000 to 1999.
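For the namespace step, something along these lines should work (the network ID is a placeholder; find the right interface with ip addr inside the namespace first):

sudo ip netns list
sudo ip netns exec qdhcp-<network-id> ip addr
sudo ip netns exec qdhcp-<network-id> tcpdump -n -i <interface> "port 67 or port 68"

If the requests show up on eth1 on the controller but never inside the namespace, look at the VLAN tagging between eth1 and the dnsmasq port.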