
Neutron DHCP agent cannot ping gateway IP

asked 2014-04-13 07:50:37 -0500

Y Sertdemir

Hello

I am trying to configure a Neutron flat network for a multi-host environment. I am using Ubuntu 12.04 with the saucy kernel and the Havana packages. I followed the OpenStack documentation for simple install networking: http://docs.openstack.org/havana/inst...

The Ubuntu image gets an IP on first boot, but it cannot reach the metadata service. console.log shows timeout errors:

ci-info: eth0  : 1 172.32.101.52   255.255.255.0   fa:16:3e:ef:1d:cd
ci-info: route-0: 0.0.0.0         172.32.101.254  0.0.0.0         eth0   UG
ci-info: route-1: 172.32.101.0    0.0.0.0         255.255.255.0   eth0   U
cloud-init start running: Sun, 13 Apr 2014 12:21:36 +0000. up 3.60 seconds
2014-04-13 12:22:27,617 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [50/120s]: url error [timed out]
2014-04-13 12:23:18,670 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [101/120s]: url error [timed out]
2014-04-13 12:23:36,693 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [119/120s]: url error [timed out]
2014-04-13 12:23:37,697 - DataSourceEc2.py[CRITICAL]: giving up on md after 120 seconds

Because of this error, the Ubuntu image cannot set up its randomly generated credentials on first boot.
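As a sanity check on the metadata path (a sketch of commands, not output I have captured yet; the namespace ID is the one from my network node), the URL can be tested from inside the instance, and the DHCP namespace can be checked for anything listening on port 80:

# From inside the instance: does the metadata URL answer at all?
curl -v http://169.254.169.254/2009-04-04/meta-data/instance-id

# On the network node: is anything (e.g. a neutron metadata proxy)
# listening inside the DHCP namespace?
ip netns exec qdhcp-3b1f3f1d-f64a-4573-92eb-9ae7ab5d0986 netstat -lnpt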

I investigated the network node, which runs the neutron DHCP agent and the OVS plugin agent.

I tried to ping the physical gateway from the DHCP namespace, no success:

ip netns exec qdhcp-3b1f3f1d-f64a-4573-92eb-9ae7ab5d0986 ping 172.32.101.254

I tried to ping an instance from the DHCP namespace, no success:

ip netns exec qdhcp-3b1f3f1d-f64a-4573-92eb-9ae7ab5d0986 ping 172.32.101.52

I tried to ping the DHCP agent's IP from another host on the same subnet, no success:

ping 172.32.101.50
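To narrow down where the traffic dies, the namespace interface and the physical NIC can also be checked (a sketch; the namespace ID and eth2 are the ones from my network node):

# Does the DHCP port inside the namespace have the expected IP and is it UP?
ip netns exec qdhcp-3b1f3f1d-f64a-4573-92eb-9ae7ab5d0986 ip addr

# While pinging from the namespace, do ARP/ICMP packets actually leave eth2?
tcpdump -n -e -i eth2 arp or icmp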

Once the instance has an IP, it is reachable from outside:

  ping 172.32.101.52
PING 172.32.101.52 (172.32.101.52) 56(84) bytes of data.
64 bytes from 172.32.101.52: icmp_req=1 ttl=64 time=2.00 ms
64 bytes from 172.32.101.52: icmp_req=2 ttl=64 time=0.564 ms
64 bytes from 172.32.101.52: icmp_req=3 ttl=64 time=0.374 ms

My ovs-vsctl show output on the network node:

Bridge br-int
Port "int-br-eth2"
        Interface "int-br-eth2"
    Port br-int
        Interface br-int
            type: internal
    Port "tapb39f3c76-43"
        tag: 1
        Interface "tapb39f3c76-43"
            type: internal
Bridge "br-eth2"
    Port "br-eth2"
        Interface "br-eth2"
            type: internal
    Port "eth2"
        Interface "eth2"
    Port "phy-br-eth2"
        Interface "phy-br-eth2"
ovs_version: "1.10.2"
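The tap port on br-int carries local VLAN tag 1, so the flows on the int-br-eth2/phy-br-eth2 pair should strip that tag towards the flat physical network and add it back on the way in. They can be inspected like this (a sketch, I have not pasted my flow dumps here):

# Flow rules translating the local VLAN tag to the untagged flat network
ovs-ofctl dump-flows br-int
ovs-ofctl dump-flows br-eth2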

How can I fix the metadata issue, and how can I get the DHCP agent namespace to reach the external router?


2 answers


answered 2014-04-14 07:37:01 -0500

Y Sertdemir

updated 2014-04-14 07:39:13 -0500

I have a controller server, a network server, and 3 compute hosts, all on separate machines. My setup is a flat network, so there is no L3 router; there is a single shared network.

The config files. dhcp_agent.ini:

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
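Since there is no L3 router in this flat setup, the metadata request can only be answered from the DHCP namespace. One option I have not added to my config yet is isolated metadata in dhcp_agent.ini, which (for subnets without a neutron router) spawns a metadata proxy inside the qdhcp namespace and can inject a host route for 169.254.169.254 via DHCP:

# Candidate addition for router-less (flat) networks: serve metadata
# from the DHCP namespace instead of a router namespace
enable_isolated_metadata = True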

neutron.conf on Network Node:

core_plugin = neutron.plugins.openvswitch.ovs_neutron_plugin.OVSNeutronPluginV2
auth_strategy = keystone
rpc_backend = neutron.openstack.common.rpc.impl_kombu
rabbit_host = 172.25.166.1
rabbit_userid = xxxxxxxxxxxxxxxxx
rabbit_password = xxxxxxxxxxxx
notification_driver = neutron.openstack.common.notifier.rpc_notifier

[keystone_authtoken]
auth_url = http://172.25.166.1:35357/v2.0
auth_host = 172.25.166.1
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = xxxxx
signing_dir = $state_path/keystone-signing

[database]
connection = mysql://xxxxxx:xxxx@172.25.166.1/neutron

Neutron OVS Plugin config:

network_vlan_ranges = physnet1
bridge_mappings = physnet1:br-eth2
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

[DATABASE]
connection = mysql://neutron:Neutron159@172.25.166.1/neutron
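For reference, a shared flat network mapped onto physnet1 would be created roughly like this with the Havana neutron CLI (a sketch; the net/subnet names and the allocation pool are illustrative, only the CIDR and gateway come from my environment):

neutron net-create flatnet --shared --provider:network_type flat --provider:physical_network physnet1
neutron subnet-create flatnet 172.32.101.0/24 --name flatsubnet --gateway 172.32.101.254 --allocation-pool start=172.32.101.20,end=172.32.101.200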

Comments

I found out that the instances send their metadata requests to our physical router. The router logs show an instance trying to reach 169.254.169.254:80, which of course fails.

Y Sertdemir ( 2014-04-15 00:40:47 -0500 )

Progress: I realized that my network node is a virtual machine in a VMware cluster. In vCenter, promiscuous mode is disabled in the default port group settings. I enabled promiscuous mode on the port, and now my DHCP agent can ping outside. My metadata problem still continues.

Y Sertdemir ( 2014-04-15 08:08:05 -0500 )

answered 2014-04-13 12:17:16 -0500

rnorton

Can you pastebin your neutron config files (including dhcp_agent, l3, etc.)? Do the network node and the instance live on the same host?

