
Metadata query hanging [closed]

asked 2014-06-10 11:21:50 -0500 by marcantonio

updated 2014-06-11 09:51:40 -0500

I'm having issues with certain images not fully booting in my new OpenStack environment (my original question is here). I believe I've tracked the issue down to the Neutron metadata service.

From a cirros image (which boots up fine) I can successfully make most metadata queries:

$ curl http://169.254.169.254/2009-04-04/meta-data/
ami-id
ami-launch-index
...etc...

However, when I perform the following query, the request just hangs:

$ wget http://169.254.169.254/openstack/2013-10-17/meta_data.json

Is there something special about this query? Do I need to turn on an additional API somewhere?

Update: When I run a tcpdump from the network node, I see the proper response going back to the instance's private IP. Could it not be getting there?
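For anyone reproducing this, a capture along the following lines should show both the request and the reply; the router UUID is a placeholder and this assumes the standard l3-agent namespace layout on the network node:

$ ip netns exec qrouter-<router-id> tcpdump -n -i any host 169.254.169.254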

Below is my metadata_agent.ini:

[DEFAULT]
verbose = True
debug = True

auth_url = http://controller1:5000/v2.0
auth_region = regionOne
admin_tenant_name = service
admin_user = neutron
admin_password = XXXXXXXXXXXXX
nova_metadata_ip = controller1
metadata_proxy_shared_secret = XXXXXXXXXXXX
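
Aside: a couple of quick sanity checks from the network node can help rule out the agent side. The port and log path below are assumptions based on the config above and a typical Icehouse install, so adjust for your distribution:

$ curl http://controller1:8775                   # is the nova metadata API reachable? (even an error response proves connectivity)
$ tail -f /var/log/neutron/metadata-agent.log    # watch for auth or proxy errors while repeating the failing query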

Closed for the following reason: the question is answered; the right answer was accepted by koolhead17.
Close date: 2014-06-17 07:43:42

Comments

On F20 (Neutron server) this file looks like:

[DEFAULT]
debug = False
auth_url = http://Controller:35357/v2.0
auth_region = RegionOne
auth_insecure = False
admin_tenant_name = services
admin_user = neutron
admin_password = 5f11f559abc94440
nova_metadata_ip = 192.168.1.127
nova_metadata_port = 8775
metadata_proxy_shared_secret =227f7bbc8b6f4f74
metadata_workers = 0
dbaxps (2014-06-10 11:35:00 -0500)

I see two differences:
1. auth_url - port 35357
2. nova_metadata_port = XXXX matching metadata_listen_port from nova.conf (see the sketch below)

This file is the same on the Neutron server and the compute node for an IceHouse setup with Neutron ML2, OVS and GRE.
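
For reference, the matching lines on the nova side would look roughly like this (Icehouse-era option names in [DEFAULT]; the values are illustrative placeholders, not taken from either setup):

# nova.conf on the controller
[DEFAULT]
# must match nova_metadata_port in metadata_agent.ini
metadata_listen_port = 8775
# must match metadata_proxy_shared_secret in metadata_agent.ini
service_neutron_metadata_proxy = True
neutron_metadata_proxy_shared_secret = XXXXXXXXXXXX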

dbaxps (2014-06-10 11:40:24 -0500)

What is nova_metadata_ip, your controller?

marcantonio (2014-06-10 12:03:59 -0500)

Yes, Controller is 192.168.1.127

dbaxps (2014-06-10 12:18:05 -0500)

1 answer


answered 2014-06-10 20:47:14 -0500 by marcantonio

updated 2014-06-11 09:51:21 -0500

I finally found the issue, and it was unrelated to the metadata service. It turns out that large packets were not making it back to the instances. Because I'm using GRE tunnels with Neutron, I had to lower the MTU on the instances. It's explained here. It was easy to test this by running ifconfig eth0 mtu 1400 on the instance and then retrying the failing query. A persistent version of the fix is sketched below.
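
For anyone else hitting this: instead of setting the MTU by hand inside each instance, the usual persistent approach is to push MTU 1400 via DHCP option 26 from the Neutron DHCP agent. This assumes the standard dnsmasq-based agent; the file paths and service name below are typical but may differ on your distribution.

# /etc/neutron/dhcp_agent.ini
[DEFAULT]
dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

# /etc/neutron/dnsmasq-neutron.conf
dhcp-option-force=26,1400

$ service neutron-dhcp-agent restart

Instances pick up the lower MTU the next time they renew their DHCP lease or reboot.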

Thanks all.

