
How does a compute node get an IP from dnsmasq with the Neutron Linux bridge plugin?

asked 2015-04-22 14:43:09 -0600 by Anonymous

Hi,

I am trying to set up a multi-node stack (Controller and Neutron in one node, Compute in another node). I am using the LinuxBridge plugin for my Neutron setup.

Currently I am using devstack to stack my nodes. Below are my local.conf files:

# controller + neutron
[[local|localrc]]
disable_service n-net
enable_service q-svc
enable_service q-agt
enable_service q-dhcp
enable_service q-l3
enable_service q-meta
ADMIN_PASSWORD=supersecret
MYSQL_PASSWORD=iheartdatabases
RABBIT_PASSWORD=flopsymopsy
SERVICE_PASSWORD=iheartksl
SERVICE_TOKEN=xyzpdqlazydog
Q_PLUGIN=ml2
Q_AGENT=linuxbridge

RABBIT_HOST=10.0.1.2
HOST_IP=10.0.1.2
disable_service n-cpu
FLOATING_RANGE=10.0.2.0/24

VLAN_INTERFACE=eth1
FLAT_INTERFACE=eth1
GUEST_INTERFACE=eth1
PUBLIC_INTERFACE=eth2
FIXED_RANGE=172.16.1.0/24
NETWORK_GATEWAY=172.16.1.1
FIXED_NETWORK_SIZE=256


[[post-config|/$Q_PLUGIN_CONF_FILE]]
[ml2]
tenant_network_types = vlan
type_drivers = local,flat,vlan,vxlan
mechanism_drivers = linuxbridge

[ml2_type_vlan]
network_vlan_ranges = vlan_net1:1000:1999

[linux_bridge]
# in virtualbox these nics need promiscuous mode enabled
physical_interface_mappings = vlan_net1:eth1

# compute
[[local|localrc]]
ADMIN_PASSWORD=supersecret
MYSQL_PASSWORD=iheartdatabases
RABBIT_PASSWORD=flopsymopsy
SERVICE_PASSWORD=iheartksl
SERVICE_TOKEN=xyzpdqlazydog

ENABLED_SERVICES=n-cpu,rabbit,neutron,q-agt
SERVICE_HOST=10.0.1.2
RABBIT_HOST=$SERVICE_HOST
Q_HOST=$SERVICE_HOST
HOST_IP=10.0.1.3
Q_PLUGIN=ml2
Q_AGENT=linuxbridge
VNCSERVER_LISTEN=0.0.0.0
VNCSERVER_PROXYCLIENT_ADDRESS=$HOST_IP

[[post-config|/$Q_PLUGIN_CONF_FILE]]
[ml2]
tenant_network_types = vlan
type_drivers = local,flat,vlan,vxlan
mechanism_drivers = linuxbridge

[ml2_type_vlan]
network_vlan_ranges = vlan_net1:1000:1999

[linux_bridge]
# in virtualbox these nics need promiscuous mode enabled
physical_interface_mappings = vlan_net1:eth1
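
As the comment in both configs notes, eth1 needs promiscuous mode when the nodes are VirtualBox VMs, or bridged VLAN traffic gets dropped. A minimal sketch of enabling it from the host, assuming the VMs are named "controller" and "compute" and eth1 is adapter 2 (run while the VMs are powered off):

# hypothetical VM names; adapter index 2 corresponds to eth1 here
VBoxManage modifyvm "controller" --nicpromisc2 allow-all
VBoxManage modifyvm "compute" --nicpromisc2 allow-all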

And here is my network diagram of the nodes (VMs):

[network diagram of the two VMs]

In the diagram, eth1 is for the internal network and eth2 is for the external network (floating IPs, etc.).

Everything seems to stack up fine: I can see the network namespaces created correctly, and all the services are up.
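
For reference, a quick way to confirm that on the controller (namespace names follow Neutron's qdhcp-<network-id> and qrouter-<router-id> conventions):

ip netns list        # expect a qdhcp-<network-id> and a qrouter-<router-id> namespace
neutron agent-list   # the linuxbridge, DHCP, and L3 agents should all show as alive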

However, when I try to boot an instance, it fails to lease an IP.

I think the problem is that the compute node can't talk to the qdhcp namespace (where dnsmasq runs), so the instance can't get a lease.
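
One way to narrow that down is to check whether the linuxbridge agent plumbed the VLAN on the compute node (bridge and subinterface names below are illustrative; they depend on the network ID and the VLAN segment Neutron assigned):

brctl show             # expect a brq<network-id> bridge containing eth1.<vlan> and the instance's tap device
ip link show eth1.1000 # the VLAN subinterface, if segment 1000 was assigned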

Did I set up something wrong?


Comments

Can you post the console logs of the instance so we can see the failure? I would also recommend using the CirrOS test image for testing: http://docs.openstack.org/image-guide... Is there anything interesting in your nova and neutron logs when launching?

omar-munoz ( 2015-04-22 14:49:42 -0600 )

I don't have the full log anymore, but it fails when the VM tries to get an IP:

Sending discover...
Sending discover...
Sending discover...
No lease, failing
WARN: /etc/rc3.d/S40network failed
ka ( 2015-04-22 15:02:31 -0600 )

I don't use devstack much, but isn't 'FLAT_INTERFACE=eth1' an option for nova networking?

omar-munoz ( 2015-04-22 16:47:52 -0600 )

Use tcpdump -i eth1 "port 67 or port 68" on both the compute host and the controller to see if your DHCP requests are being properly routed through eth1. If they are, you can do the same in the namespace for the dhcp-agent.
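
Concretely, that might look like the following (the namespace ID comes from ip netns list; -i any avoids having to guess the in-namespace interface name):

# on both the compute host and the controller
sudo tcpdump -i eth1 "port 67 or port 68"
# then inside the DHCP namespace on the controller
sudo ip netns exec qdhcp-<network-id> tcpdump -i any "port 67 or port 68"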

jdexter ( 2015-04-24 11:03:27 -0600 )

Are these physical nodes? If so, the switch port that eth1 is connected to will need to be a trunk port that can pass VLAN IDs 1000 to 1999.
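
For example, on a Cisco IOS-style switch that might look like the sketch below (hypothetical port name; syntax varies by vendor):

interface GigabitEthernet0/1
 description uplink to eth1
 switchport mode trunk
 switchport trunk allowed vlan 1000-1999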

darragh-oreilly ( 2015-04-28 00:46:45 -0600 )

1 answer


answered 2015-04-28 00:31:14 -0600 by dbaxps

I understand that this doesn't address your question directly; however, take a look at "Using DevStack with Neutron Networking":
http://docs.openstack.org/developer/d...
The official guide relies on the openvswitch mechanism driver.

In this example, the nodes that will host guest instances will run the neutron-openvswitch-agent for network connectivity, as well as the compute service nova-compute.
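
For comparison, a compute-node localrc for the openvswitch agent might look like the sketch below (adapted from the question's config rather than copied from the guide; values are illustrative):

[[local|localrc]]
SERVICE_HOST=10.0.1.2
HOST_IP=10.0.1.3
ENABLED_SERVICES=n-cpu,rabbit,neutron,q-agt
Q_PLUGIN=ml2
Q_AGENT=openvswitch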
