How does compute node get IP from dnsmasq with Neutron linux bridge plugin?

Hi,

I am trying to set up a multi-node stack (controller and Neutron services on one node, compute on another node). I am using the LinuxBridge plugin for my Neutron setup.

Currently I am using devstack to stack my nodes. Below are my local.conf files:

# controller + neutron
[[local|localrc]]
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
ADMIN_PASSWORD=supersecret
MYSQL_PASSWORD=iheartdatabases
RABBIT_PASSWORD=flopsymopsy
SERVICE_PASSWORD=iheartksl
SERVICE_TOKEN=xyzpdqlazydog
Q_PLUGIN=ml2
Q_AGENT=linuxbridge

RABBIT_HOST=10.0.1.2
HOST_IP=10.0.1.2
disable_service n-cpu
FLOATING_RANGE=10.0.2.0/24

VLAN_INTERFACE=eth1
FLAT_INTERFACE=eth1
GUEST_INTERFACE=eth1
PUBLIC_INTERFACE=eth2
FIXED_RANGE=172.16.1.0/24
NETWORK_GATEWAY=172.16.1.1
FIXED_NETWORK_SIZE=256


[[post-config|/$Q_PLUGIN_CONF_FILE]]
[ml2]
tenant_network_types = vlan
type_drivers = local,flat,vlan,vxlan
mechanism_drivers = linuxbridge

[ml2_type_vlan]
network_vlan_ranges = vlan_net1:1000:1999

[linux_bridge]
# in virtualbox these nics need promiscuous mode enabled
physical_interface_mappings = vlan_net1:eth1
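As a quick sanity check on the controller config above (not part of devstack itself), the fixed range, network size, and VLAN range can be verified for internal consistency. This is just a sketch using Python's standard `ipaddress` module; the values are copied from the config:

```python
# Sanity-check the networking values from local.conf:
# FIXED_RANGE must actually contain FIXED_NETWORK_SIZE addresses,
# and the VLAN range must be a valid 802.1Q range.
import ipaddress

FIXED_RANGE = "172.16.1.0/24"
FIXED_NETWORK_SIZE = 256
NETWORK_VLAN_RANGES = "vlan_net1:1000:1999"

net = ipaddress.ip_network(FIXED_RANGE)
assert net.num_addresses == FIXED_NETWORK_SIZE, "size does not match the prefix"

physnet, vlan_min, vlan_max = NETWORK_VLAN_RANGES.split(":")
assert 1 <= int(vlan_min) <= int(vlan_max) <= 4094, "invalid 802.1Q VLAN range"
print(physnet, vlan_min, vlan_max)
```

A mismatch here (e.g. `FIXED_NETWORK_SIZE=256` with a /25 range) is a common source of confusing DHCP behaviour, so it is worth ruling out first.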

# compute
[[local|localrc]]
ADMIN_PASSWORD=supersecret
MYSQL_PASSWORD=iheartdatabases
RABBIT_PASSWORD=flopsymopsy
SERVICE_PASSWORD=iheartksl
SERVICE_TOKEN=xyzpdqlazydog

ENABLED_SERVICES=n-cpu,rabbit,neutron,q-agt
SERVICE_HOST=10.0.1.2
RABBIT_HOST=$SERVICE_HOST
Q_HOST=$SERVICE_HOST
HOST_IP=10.0.1.3
Q_PLUGIN=ml2
Q_AGENT=linuxbridge
VNCSERVER_LISTEN=0.0.0.0
VNCSERVER_PROXYCLIENT_ADDRESS=$HOST_IP

[[post-config|/$Q_PLUGIN_CONF_FILE]]
[ml2]
tenant_network_types = vlan
type_drivers = local,flat,vlan,vxlan
mechanism_drivers = linuxbridge

[ml2_type_vlan]
network_vlan_ranges = vlan_net1:1000:1999

[linux_bridge]
# in virtualbox these nics need promiscuous mode enabled
physical_interface_mappings = vlan_net1:eth1
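One thing that must hold for VLAN tenant networks to work is that the `[ml2_type_vlan]` and `[linux_bridge]` settings agree between the controller and the compute node; otherwise the two agents tag traffic on different VLANs or different physical interfaces. A minimal sketch that diffs the two fragments shown above (the strings here are copies of the posted configs, not read from real files):

```python
# Compare the ML2/linuxbridge sections of the two nodes' configs.
# If these diverge, tagged traffic from the compute node never reaches
# the DHCP/router namespaces on the controller.
import configparser

controller_cfg = """
[ml2_type_vlan]
network_vlan_ranges = vlan_net1:1000:1999
[linux_bridge]
physical_interface_mappings = vlan_net1:eth1
"""

compute_cfg = """
[ml2_type_vlan]
network_vlan_ranges = vlan_net1:1000:1999
[linux_bridge]
physical_interface_mappings = vlan_net1:eth1
"""

def sections(text):
    parser = configparser.ConfigParser()
    parser.read_string(text)
    return {s: dict(parser.items(s)) for s in parser.sections()}

assert sections(controller_cfg) == sections(compute_cfg), "node configs diverge"
print("VLAN settings match on both nodes")
```

In this setup the two fragments are identical, so the more likely culprit is the physical path itself (e.g. the hypervisor dropping tagged or promiscuous traffic on eth1, as the comment in the config hints).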

And here is my network diagram of the nodes (VMs):

[network diagram image]

Here, eth1 is used for the internal network and eth2 for the external network (floating IPs, etc.).

Everything seems to stack up fine: I can see the network namespaces created correctly and all the services are up.

However, when I try to boot an instance, the instance fails to get an IP lease via DHCP.

I think the problem is that the compute node can't reach the dnsmasq instance running in the qdhcp namespace on the controller node, so the instance never obtains a lease.
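One way to narrow this down is to check whether dnsmasq ever recorded a lease for the instance's MAC. A minimal sketch of parsing the dnsmasq lease file format (each line is `<expiry-epoch> <mac> <ip> <hostname> <client-id>`); the sample line, MAC, and any file path are hypothetical, and on a devstack node the real file lives under the qdhcp network's data directory:

```python
# Parse dnsmasq lease-file lines and look up a lease by MAC address.
# If the instance's MAC never appears here, its DHCPDISCOVER broadcasts
# never crossed the VLAN from the compute node to the controller.
def parse_leases(text):
    leases = {}
    for line in text.splitlines():
        fields = line.split()
        if len(fields) >= 3:
            expiry, mac, ip = fields[0], fields[1], fields[2]
            leases[mac.lower()] = (ip, int(expiry))
    return leases

# Hypothetical lease-file contents for illustration:
sample = "1700000000 fa:16:3e:aa:bb:cc 172.16.1.3 host-172-16-1-3 *"
leases = parse_leases(sample)
assert "fa:16:3e:aa:bb:cc" in leases
print(leases["fa:16:3e:aa:bb:cc"])  # ('172.16.1.3', 1700000000)
```

If the MAC is absent while dnsmasq itself is healthy, that points at the layer-2 path between the nodes rather than at Neutron's configuration.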

Did I setup something wrong?