
metadata service not reachable from instance in neutron single flat provider network

asked 2014-06-15 03:31:15 -0500 by LainesStack

Hello

I have the following Icehouse setup, using Neutron with ML2 and the OVS plugin in a single flat provider network:

  • Controller Node: 10.117.200.7
  • Network Node: 10.117.200.10
  • Compute Node: 10.117.200.12

The network is working fine: the instances get an IP from DHCP with the correct gateway and DNS settings. But the metadata service is not reachable from the instances.

Here is the log from the CirrOS image:

adding dns 10.117.192.145
adding dns 10.117.192.160
adding net 169.254.169.254/32 with router 10.117.200.50
cirros-ds 'net' up at 0.72
checking http://169.254.169.254/2009-04-04/instance-id
failed 1/20: up 0.73. request failed
failed 2/20: up 12.75. request failed
failed 3/20: up 24.76. request failed

Inside the CirrOS Instance:

$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.117.200.254  0.0.0.0         UG    0      0        0 eth0
10.117.200.0    0.0.0.0         255.255.255.0   U     0      0        0 eth0
169.254.169.254 10.117.200.50   255.255.255.255 UGH   0      0        0 eth0
$ ping 169.254.169.254
PING 169.254.169.254 (169.254.169.254): 56 data bytes
64 bytes from 169.254.169.254: seq=0 ttl=64 time=1.199 ms
64 bytes from 169.254.169.254: seq=1 ttl=64 time=0.235 ms
64 bytes from 169.254.169.254: seq=2 ttl=64 time=0.252 ms
^C
--- 169.254.169.254 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.235/0.562/1.199 ms
$ curl http://169.254.169.254
curl: (7) couldn't connect to host

From the controller node I can reach the metadata service on its own IP, and the service is listening:

curl http://10.117.200.7:8775
1.0
2007-01-19
2007-03-01
2007-08-29
2007-10-10
2007-12-15
2008-02-01
2008-09-01
2009-04-04

curl http://169.254.169.254
<html>
 <head>
  <title>500 Internal Server Error</title>
 </head>
 <body>
  <h1>500 Internal Server Error</h1>
  Remote metadata server experienced an internal server error.<br /><br />
 </body>
</html>

Network Node:

cat /etc/neutron/neutron.conf | grep "^[^#;\[]"
auth_strategy = keystone
rpc_backend = neutron.openstack.common.rpc.impl_qpid
qpid_hostname = controller
core_plugin = ml2
service_plugins = router
verbose = True
debug = False
allow_overlapping_ips = True
auth_uri = http://controller:5000
auth_host = controller
auth_protocol = http
auth_port = 35357
admin_tenant_name = service
admin_user = neutron
admin_password = xxx

cat /etc/neutron/plugins/ml2/ml2_conf.ini | grep "^[^#;\[]"
type_drivers = local,flat
mechanism_drivers = openvswitch,l2population
flat_networks = *
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_tunneling = False
local_ip = 10.117.200.11
network_vlan_ranges = physnet1
bridge_mappings = physnet1:br-eth1

cat /etc/neutron/metadata_agent.ini | grep "^[^#;\[]"
debug = True
auth_url = http://controller:5000/v2.0
auth_region = regionOne
admin_tenant_name = service
admin_user = neutron
admin_password = xxx
nova_metadata_ip = 10.117.200.7
metadata_proxy_shared_secret = xxx

cat /etc/neutron/dhcp_agent.ini | grep "^[^#;\[]"
debug = True
interface_driver = neutron.agent.linux ...

Comments

service_neutron_metadata_proxy needs to be True in nova.conf
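For reference, a minimal nova.conf sketch for the controller under that assumption (the secret is a placeholder and must match metadata_proxy_shared_secret in /etc/neutron/metadata_agent.ini):

  # /etc/nova/nova.conf on the controller (Icehouse-era option names)
  [DEFAULT]
  service_neutron_metadata_proxy = True
  neutron_metadata_proxy_shared_secret = xxx

  # then restart the API service, e.g.:
  # sudo service nova-api restart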

darragh-oreilly ( 2014-06-15 07:53:17 -0500 )

Now a different error:

2014-06-15 13:59:57,645 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [50/120s]: request error [HTTPConnectionPool(host='169.254.169.254', port=80): Request timed out. (timeout=50.0)]
LainesStack ( 2014-06-15 13:22:47 -0500 )

Could it be a problem that the controller node is running as a VM on Hyper-V? On Hyper-V it is not possible to enable promiscuous mode.

LainesStack ( 2014-06-16 02:57:33 -0500 )

I don't think so. You should check the logs. There should be an ns-metadata-proxy log for the namespace, a metadata-agent log and the dhcp-agent log.
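To find them, something like this on the network node (the log paths assume an Ubuntu-style layout and are an assumption; adjust for your distro):

  # metadata agent and DHCP agent logs
  grep -i error /var/log/neutron/metadata-agent.log
  grep -i error /var/log/neutron/dhcp-agent.log
  # one neutron-ns-metadata-proxy log per namespace (named after the network/router UUID)
  ls /var/log/neutron/neutron-ns-metadata-proxy-*.log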

darragh-oreilly ( 2014-06-16 14:31:34 -0500 )

I found the logs and had a look; there is nothing about any connection. It seems like the request never reaches the metadata proxy or the network node.
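One way to confirm whether the request even arrives, assuming the DHCP namespace on the network node follows the usual qdhcp-<network-id> naming (the UUID below is a placeholder):

  # on the network node
  ip netns list
  # is anything listening on port 80 inside the namespace?
  sudo ip netns exec qdhcp-<network-id> netstat -lnpt
  # watch for the instance's HTTP requests to the metadata IP
  sudo ip netns exec qdhcp-<network-id> tcpdump -lnni any port 80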

Here is a list of the iptables rules:

Controller Node:

iptables -t nat -L -n -v
Chain PREROUTING (policy ACCEPT 90458 packets, 3790K bytes)
 pkts bytes target     prot opt in     out     source               destination
90458 3790K nova-api-PREROUTING  all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain POSTROUTING (policy ACCEPT 5699 packets, 368K bytes)
 pkts bytes target     prot opt in     out     source               destination
 5699  368K nova-api-POSTROUTING  all  --  *      *       0.0.0.0/0            0.0.0.0/0
 5699  368K nova-postrouting-bottom  all  --  *      *       0.0.0.0/0            0.0.0.0/0

Chain OUTPUT (policy ACCEPT 5699 packets, 368K bytes)
 pkts bytes target     prot opt in     out     source               destination
 5699  368K nova-api-OUTPUT  all  --  *      *       0.0.0.0/0            0.0.0.0/0 ...
LainesStack ( 2014-06-17 03:05:17 -0500 )

2 answers


answered 2014-08-05 08:41:37 -0500 by yHuKyM

The problem is that 169.254.169.254 (the metadata server's virtual IP) is not accessible from the physical router on the network. The metadata server lives in a network namespace on the network node, not on the physical router. This has been thought through in Icehouse: traffic to 169.254.169.254 can now be routed through the DHCP server.

Here is my solution:

  1. Edit /etc/neutron/dhcp_agent.ini and set:

     enable_isolated_metadata = True

  2. Stop all agents and neutron-server on the network node (see the command sketch after this list).

  3. $ sudo killall dnsmasq

  4. Start all the agents on the network node.

  5. Enjoy your network with a metadata server. In my case the routing table of the instance looks like this:

     $ route -n
     Kernel IP routing table
     Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
     0.0.0.0         31.13.250.1     0.0.0.0         UG    0      0        0 eth0
     31.13.250.0     0.0.0.0         255.255.254.0   U     0      0        0 eth0
     169.254.169.254 31.13.250.12    255.255.255.255 UGH   0      0        0 eth0

  6. If you are using CirrOS, please update to the latest release.
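A command sketch for steps 2-4 (the service names below assume Ubuntu-style packaging and are an assumption; adjust them for your distribution):

  # on the network node: restart the agents so dnsmasq and the metadata proxy
  # are respawned with enable_isolated_metadata = True
  sudo service neutron-plugin-openvswitch-agent stop
  sudo service neutron-metadata-agent stop
  sudo service neutron-dhcp-agent stop
  sudo killall dnsmasq
  sudo service neutron-dhcp-agent start
  sudo service neutron-metadata-agent start
  sudo service neutron-plugin-openvswitch-agent start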

More info: https://ask.openstack.org/en/question/27505/metadata-problem-in-neutron-flat-network/


Comments

Thank you, this works.

mpa360 ( 2015-12-07 10:08:06 -0500 )

Thanks, the solution worked for me, but after every reboot the same steps have to be followed. Is there another, permanent solution that doesn't require restarting any services?

Harsha ( 2017-04-10 06:21:20 -0500 )

answered 2014-06-18 21:39:38 -0500 by Deeptanshu

Hi,

I am also stuck at the same point and have been trying for many days.

Have you found any solution?


Comments

Unfortunately no, I'm really despairing over it. Before, I had the controller node on a VM; now I have installed everything from scratch on physical machines, with the same result...

LainesStack ( 2014-06-18 21:57:55 -0500 )

Hi,

I have installed everything from scratch many times, and every time the issue arises.

Hope this helps!

http://docs.openstack.org/admin-guide...

Deeptanshu ( 2014-06-18 22:00:34 -0500 )

Hi,

What is the meaning of the lines below? Is this trying to connect to nova-api?

checking http://169.254.169.254/2009-04-04/ins...
failed 1/20: up 7.41. request failed
failed 2/20: up 9.95. request failed
failed 3/20: up 12.33. request failed
failed 4/20: up 14.73. request failed
failed 5/20: up 17.15. request failed

As per my observation, there is some iptables rule preventing this from connecting.

Deeptanshu ( 2014-06-18 22:04:13 -0500 )

The instance tries to connect to the metadata service. iptables is my guess as well, especially since I see no errors in the service logs.

LainesStack ( 2014-06-18 22:31:42 -0500 )

Hi, as per the Ubuntu docs, the metadata service is installed on the controller as part of nova-api.

Am I right?
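A quick way to check on the controller (a minimal sketch; 8775 is nova's default metadata listen port):

  # is the nova metadata API listening?
  sudo netstat -lnpt | grep 8775
  curl http://127.0.0.1:8775/
  # the metadata API is part of nova-api when enabled_apis includes "metadata"
  grep enabled_apis /etc/nova/nova.conf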

Deeptanshu ( 2014-06-18 22:42:57 -0500 )
