Neutron DHCP agent cannot ping gateway IP

asked 2014-04-13 07:50:37 -0600

Y Sertdemir


I am trying to configure a Neutron flat network for a multi-host environment. I am using Ubuntu 12.04 with the saucy kernel and Havana packages. I followed the OpenStack documentation on simple networking installation.

The Ubuntu image gets an IP on first boot, but it cannot reach the metadata service. The console log shows timeout errors:

ci-info: eth0  : 1   fa:16:3e:ef:1d:cd
ci-info: route-0:         eth0   UG
ci-info: route-1:   eth0   U
cloud-init start running: Sun, 13 Apr 2014 12:21:36 +0000. up 3.60 seconds
2014-04-13 12:22:27,617 -[WARNING]: '' failed [50/120s]: url error [timed out]
2014-04-13 12:23:18,670 -[WARNING]: '' failed [101/120s]: url error [timed out]
2014-04-13 12:23:36,693 -[WARNING]: '' failed [119/120s]: url error [timed out]
2014-04-13 12:23:37,697 -[CRITICAL]: giving up on md after 120 seconds

Because of this error, the Ubuntu image cannot generate a random username and password on first boot.

I investigated the network node, which runs the Neutron DHCP agent and plugin agent.

I tried to ping the physical gateway from the DHCP namespace; no success:

ip netns exec qdhcp-3b1f3f1d-f64a-4573-92eb-9ae7ab5d0986 ping

I tried to ping an instance from the DHCP namespace; no success:

ip netns exec qdhcp-3b1f3f1d-f64a-4573-92eb-9ae7ab5d0986 ping

I tried to ping the DHCP agent from the same subnet; no success.

Once an instance has an IP, it is reachable from outside:

PING ( 56(84) bytes of data.
64 bytes from icmp_req=1 ttl=64 time=2.00 ms
64 bytes from icmp_req=2 ttl=64 time=0.564 ms
64 bytes from icmp_req=3 ttl=64 time=0.374 ms

My ovs-vsctl show output on the network node:

Bridge br-int
    Port "int-br-eth2"
        Interface "int-br-eth2"
    Port br-int
        Interface br-int
            type: internal
    Port "tapb39f3c76-43"
        tag: 1
        Interface "tapb39f3c76-43"
            type: internal
Bridge "br-eth2"
    Port "br-eth2"
        Interface "br-eth2"
            type: internal
    Port "eth2"
        Interface "eth2"
    Port "phy-br-eth2"
        Interface "phy-br-eth2"
ovs_version: "1.10.2"

How can I fix the metadata issue and enable the DHCP agent to reach the outside router?


2 answers


answered 2014-04-13 12:17:16 -0600

rnorton

Can you pastebin your Neutron config files (including dhcp_agent, l3, etc.)? Do the network node and the instance live on the same host?


answered 2014-04-14 07:37:01 -0600

Y Sertdemir

updated 2014-04-14 07:39:13 -0600

I have a controller server, a network server, and 3 compute hosts, all on different machines. My setup is a flat network, so there is no L3 router. There is a single, shared network.

The config files. dhcp_agent.ini:

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
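Since there is no L3 router on this flat network, one common approach is to serve metadata from the DHCP namespace itself. A minimal dhcp_agent.ini sketch for that setup; the enable_isolated_metadata line is an assumption worth testing, not something confirmed here:

```ini
[DEFAULT]
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
# Assumption: with no router on the subnet, let the DHCP
# namespace proxy requests to
enable_isolated_metadata = True
```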

neutron.conf on Network Node:

core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
auth_strategy = keystone
rpc_backend = neutron.openstack.common.rpc.impl_kombu
rabbit_host =
rabbit_userid = xxxxxxxxxxxxxxxxx
rabbit_password = xxxxxxxxxxxx
notification_driver = neutron.openstack.common.notifier.rpc_notifier

auth_url =
auth_host =
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = xxxxx
signing_dir = $state_path/keystone-signing

connection = mysql://xxxxxx:xxxx@

Neutron OVS Plugin config:

network_vlan_ranges = physnet1
bridge_mappings = physnet1:br-eth2
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

connection = mysql://neutron:Neutron159@
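For completeness, the metadata path also depends on the metadata agent on the network node. A minimal metadata_agent.ini sketch for Havana; CONTROLLER, PASSWORD, and SECRET are placeholders, and this is an untested assumption for this setup, not a confirmed fix:

```ini
[DEFAULT]
# Keystone credentials used by the metadata agent
auth_url = http://CONTROLLER:35357/v2.0
admin_tenant_name = service
admin_user = neutron
admin_password = PASSWORD
# Nova API host the agent forwards metadata requests to
nova_metadata_ip = CONTROLLER
nova_metadata_port = 8775
# Must match neutron_metadata_proxy_shared_secret in nova.conf
metadata_proxy_shared_secret = SECRET
```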


I found out that instances do send packets to our physical router. The router logs show the instance trying to reach an IP, but it fails, of course.

Y Sertdemir ( 2014-04-15 00:40:47 -0600 )

Progress: I realized that my network node is a virtual machine in a VMware cluster. In vCenter, promiscuous mode is disabled by default. After I enabled promiscuous mode on the port group, my DHCP agent can now ping outside. My metadata problem still continues.

Y Sertdemir ( 2014-04-15 08:08:05 -0600 )


Seen: 1,927 times

Last updated: Apr 14 '14