Ask Your Question

metadata service not reachable from instance in neutron single flat provider network

asked 2014-06-15 03:31:15 -0500

LainesStack


I have the following Icehouse setup, using neutron with ML2 and the OVS plugin on a single flat provider network:

  • Controller Node:
  • Network Node:
  • Compute Node:

Networking works fine: the instances get an IP from DHCP with the correct gateway and DNS settings. But the metadata service is not reachable from the instances.

Here is the log from the CirrOS image:

adding dns
adding dns
adding net with router
cirros-ds 'net' up at 0.72
failed 1/20: up 0.73. request failed
failed 2/20: up 12.75. request failed
failed 3/20: up 24.76. request failed
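Those `failed n/20` lines come from the CirrOS init scripts, which poll the EC2-style metadata endpoint (169.254.169.254) a fixed number of times before giving up and booting without metadata. A rough, hypothetical sketch of that kind of retry loop (the function names and timings are illustrative, not CirrOS's actual code):

```python
import time

# Standard EC2-style metadata endpoint used by cloud images.
METADATA_URL = "http://169.254.169.254/2009-04-04/meta-data/"

def fetch_metadata(fetch, max_tries=20, sleep_s=0.0):
    """Poll the metadata URL until `fetch` succeeds or retries run out.

    `fetch` is any callable taking a URL and returning the body, or
    raising OSError on connection failure (like the curl calls above).
    """
    for attempt in range(1, max_tries + 1):
        try:
            return fetch(METADATA_URL)
        except OSError:
            print("failed %d/%d: request failed" % (attempt, max_tries))
            time.sleep(sleep_s)
    return None  # all retries exhausted; boot continues without metadata
```

With a fetch that always raises, as in the console log above, this prints a `failed n/20` line per attempt and finally returns None, which matches the pattern in the output.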

Inside the CirrOS Instance:

$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
                                                UG    0      0        0 eth0
                                                U     0      0        0 eth0
                                                UGH   0      0        0 eth0
$ ping
PING ( 56 data bytes
64 bytes from seq=0 ttl=64 time=1.199 ms
64 bytes from seq=1 ttl=64 time=0.235 ms
64 bytes from seq=2 ttl=64 time=0.252 ms
--- ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.235/0.562/1.199 ms
$ curl
curl: (7) couldn't connect to host

From the Controller node I can reach the metadata service on its own IP, and the service is listening:


  <title>500 Internal Server Error</title>
  <h1>500 Internal Server Error</h1>
  Remote metadata server experienced an internal server error.<br /><br />

Network Node:

cat /etc/neutron/neutron.conf | grep "^[^#;\[]"
auth_strategy = keystone
rpc_backend = neutron.openstack.common.rpc.impl_qpid
qpid_hostname = controller
core_plugin = ml2
service_plugins = router
verbose = True
debug = False
allow_overlapping_ips = True
auth_uri = http://controller:5000
auth_host = controller
auth_protocol = http
auth_port = 35357
admin_tenant_name = service
admin_user = neutron
admin_password = xxx

cat /etc/neutron/plugins/ml2/ml2_conf.ini | grep "^[^#;\[]"
type_drivers = local,flat
mechanism_drivers = openvswitch,l2population
flat_networks = *
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_tunneling = False
local_ip =
network_vlan_ranges = physnet1
bridge_mappings = physnet1:br-eth1

cat /etc/neutron/metadata_agent.ini | grep "^[^#;\[]"
debug = True
auth_url = http://controller:5000/v2.0
auth_region = regionOne
admin_tenant_name = service
admin_user = neutron
admin_password = xxx
nova_metadata_ip =
metadata_proxy_shared_secret = xxx
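One common cause of the 500 from nova's metadata API is a mismatch between `metadata_proxy_shared_secret` here and the secret configured in nova.conf. The neutron proxy signs each forwarded request with an HMAC-SHA256 over the instance ID (sent as the `X-Instance-ID-Signature` header), and nova recomputes and compares it. A minimal sketch of that check, with made-up secret and instance ID values:

```python
import hashlib
import hmac

def sign_instance_id(shared_secret: str, instance_id: str) -> str:
    # The proxy sends this digest as the X-Instance-ID-Signature header.
    return hmac.new(shared_secret.encode(), instance_id.encode(),
                    hashlib.sha256).hexdigest()

# Hypothetical values for illustration only.
secret = "xxx"
instance_id = "7e5f9d3a-0000-0000-0000-000000000000"

proxy_sig = sign_instance_id(secret, instance_id)           # neutron side
nova_sig = sign_instance_id(secret, instance_id)            # nova side
assert hmac.compare_digest(proxy_sig, nova_sig)             # secrets match: accepted

wrong = sign_instance_id("different-secret", instance_id)   # mismatched nova.conf secret
assert not hmac.compare_digest(proxy_sig, wrong)            # nova rejects the request
```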

cat /etc/neutron/dhcp_agent.ini | grep "^[^#;\[]"
debug = True
interface_driver = neutron.agent.linux ...


service_neutron_metadata_proxy needs to be True in nova.conf

darragh-oreilly ( 2014-06-15 07:53:17 -0500 )
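For an Icehouse-era setup, the nova.conf options the comment above refers to look roughly like this (a sketch; the secret is a placeholder and must be identical to `metadata_proxy_shared_secret` in metadata_agent.ini on the network node):

```ini
# /etc/nova/nova.conf on the controller
[DEFAULT]
# Tell nova-api to accept metadata requests proxied by neutron.
service_neutron_metadata_proxy = True
# Must match metadata_proxy_shared_secret in metadata_agent.ini.
neutron_metadata_proxy_shared_secret = xxx
```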

Now a different error:

2014-06-15 13:59:57,645 -[WARNING]: Calling '' failed [50/120s]: request error [HTTPConnectionPool(host='', port=80): Request timed out. (timeout=50.0)]
LainesStack ( 2014-06-15 13:22:47 -0500 )

Could it be a problem that the Controller node is running as a VM on Hyper-V? On Hyper-V it is not possible to enable promiscuous mode.

LainesStack ( 2014-06-16 02:57:33 -0500 )

I don't think so. You should check the logs: there should be a ns-metadata-proxy log for the namespace, a metadata-agent log and a dhcp-agent log.

darragh-oreilly ( 2014-06-16 14:31:34 -0500 )

I found the logs and had a look; there is nothing about any connection. It seems like the request never reaches the metadata proxy or the network node.

Here is a list of the iptables rules:

Controller Node:

iptables -t nat -L -n -v
Chain PREROUTING (policy ACCEPT 90458 packets, 3790K bytes)
 pkts bytes target     prot opt in     out     source               destination
90458 3790K nova-api-PREROUTING  all  --  *      *  

Chain POSTROUTING (policy ACCEPT 5699 packets, 368K bytes)
 pkts bytes target     prot opt in     out     source               destination
 5699  368K nova-api-POSTROUTING  all  --  *      *  
 5699  368K nova-postrouting-bottom  all  --  *      *  

Chain OUTPUT (policy ACCEPT 5699 packets, 368K bytes)
 pkts bytes target     prot opt in     out     source               destination
 5699  368K nova-api-OUTPUT  all  --  *      *   ...
LainesStack ( 2014-06-17 03:05:17 -0500 )

2 answers


answered 2014-08-05 08:41:37 -0500

yHuKyM

The problem is that the metadata server's virtual IP (169.254.169.254) is not accessible from the physical router on the network. That metadata server lives in a network namespace on the network node, not on the physical router. This was thought through in Icehouse: the traffic to the metadata IP can now be routed through the DHCP server.

Here is my solution:

  1. Edit /etc/neutron/dhcp_agent.ini and set:

     enable_isolated_metadata = True

  2. Stop all agents and neutron-server on the network node.

  3. $ sudo killall dnsmasq

  4. Start all the agents on the network node.

  5. Enjoy your network with a metadata server. In my case the routing table of the instance looks like this:

    $ route -n

    Kernel IP routing table

    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
                                                    UG    0      0        0 eth0
                                                    U     0      0        0 eth0
                                                    UGH   0      0        0 eth0

  6. If you are using CirrOS, please update to the latest release.
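What `enable_isolated_metadata` changes, as I understand it: the dhcp agent spawns a metadata proxy inside the qdhcp namespace, and dnsmasq pushes a host route for 169.254.169.254 to the instances over DHCP, which is why the instance's routing table gains a host (UGH) route on eth0. The relevant fragment, matching the file shown in the question:

```ini
# /etc/neutron/dhcp_agent.ini on the network node
[DEFAULT]
# Serve metadata from the dhcp namespace even when no neutron router
# exists on the network (the single flat provider network case here).
enable_isolated_metadata = True
```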

More info:



Thank you, it works!

mpa360 ( 2015-12-07 10:08:06 -0500 )

Thanks, the solution worked for me, but after every reboot the same steps have to be repeated. Is there a permanent solution that doesn't require restarting any services?

Harsha ( 2017-04-10 06:21:20 -0500 )

answered 2014-06-18 21:39:38 -0500

Deeptanshu


I am also stuck at the same point and have been trying for many days.

Have you found any solution?



Unfortunately no, I'm really despairing over it. Before, I had the Controller node on a VM; now I have installed everything from scratch on physical machines, with the same result...

LainesStack ( 2014-06-18 21:57:55 -0500 )


I have installed everything from scratch many times, and every time the issue arises.

Hope it will help!

Deeptanshu ( 2014-06-18 22:00:34 -0500 )


What is the meaning of the lines below? Are they trying to connect to nova-api?


failed 1/20: up 7.41. request failed

failed 2/20: up 9.95. request failed

failed 3/20: up 12.33. request failed

failed 4/20: up 14.73. request failed

failed 5/20: up 17.15. request failed

As per my observation there is some iptables rule stopping this from connecting.

Deeptanshu ( 2014-06-18 22:04:13 -0500 )

The instance tries to connect to the metadata service. iptables is my guess too, especially since I see no errors in the logs of the services.

LainesStack ( 2014-06-18 22:31:42 -0500 )

Hi, as per the Ubuntu docs the metadata service is installed on the controller as part of nova-api.

Am I right?

Deeptanshu ( 2014-06-18 22:42:57 -0500 )



Seen: 23,721 times

Last updated: Jun 18 '14