
Metadata request timed out issue on Icehouse

asked 2014-07-01 07:33:11 -0600 by ankush grover
updated 2014-07-01 14:34:25 -0600 by rbowen

Hi Friends,

I have configured OpenStack Icehouse on CentOS 6.5 64-bit through the "packstack --allinone" command. I am able to run instances but am not able to connect to them through a keypair on the floating IP. I can ping the floating IP and log in to the Cirros test image with username & password, but I cannot log in to the Fedora 20 or 19 images via keypairs. I got the error below in the instance log through the Horizon dashboard:

[   64.078407] cloud-init[475]: 2014-07-01 07:32:12,768 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [50/120s]: request error [HTTPConnectionPool(host='169.254.169.254', port=80): Request timed out. (timeout=50.0)]
[  115.137845] cloud-init[475]: 2014-07-01 07:33:03,828 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [101/120s]: request error [HTTPConnectionPool(host='169.254.169.254', port=80): Request timed out. (timeout=50.0)]
[  133.159112] cloud-init[475]: 2014-07-01 07:33:21,849 - url_helper.py[WARNING]: Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [119/120s]: request error [HTTPConnectionPool(host='169.254.169.254', port=80): Request timed out. (timeout=17.0)]
[  134.162079] cloud-init[475]: 2014-07-01 07:33:22,851 - DataSourceEc2.py[CRITICAL]: Giving up on md from ['http://169.254.169.254/2009-04-04/meta-data/instance-id'] after 120 seconds
[  134.170335] cloud-init[475]: 2014-07-01 07:33:22,857 - url_helper.py[WARNING]: Calling 'http://10.0.0.3//latest/meta-data/instance-id' failed [0/120s]: request error [HTTPConnectionPool(host='10.0.0.3', port=80): Max retries exceeded with url: //latest/meta-data/instance-id (Caused by <class 'socket.error'>: [Errno 111] Connection refused)]
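
For context, cloud-init is just doing the equivalent of the following from inside the guest, so the path can be tested manually (e.g. from the VNC console); this is only a quick check and assumes curl is available in the image:

# run inside the instance; a working metadata path returns the instance ID
curl -s http://169.254.169.254/2009-04-04/meta-data/instance-id
curl -s http://169.254.169.254/latest/meta-data/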

I ran a few commands to dig further, but I am not able to understand where the issue is:

ip netns exec qrouter-ec11a04f-9e2a-42a4-b5c0-519ab5139e09 iptables -S -t nat | grep 169.254

-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697

ip netns exec qrouter-ec11a04f-9e2a-42a4-b5c0-519ab5139e09 netstat -anpt

Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name   
tcp        0      0 0.0.0.0:9697                0.0.0.0:*                   LISTEN      3363/python
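
Since the PREROUTING rule redirects metadata traffic to port 9697, that listener can also be poked directly from inside the router namespace. This is only a rough check: because the request does not come from a known instance IP, an HTTP error (404/500) still shows that the proxy chain answers, while a connection refused or timeout would point at the proxy itself:

# run on the network node, inside the router namespace
ip netns exec qrouter-ec11a04f-9e2a-42a4-b5c0-519ab5139e09 curl -i http://127.0.0.1:9697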

ps -efm | grep -i 3363

root      3363     1  0 16:56 ?        00:00:00 /usr/bin/python /usr/bin/neutron-ns-metadata-proxy --pid_file=/var/lib/neutron/external/pids/ec11a04f-9e2a-42a4-b5c0-519ab5139e09.pid --metadata_proxy_socket=/var/lib/neutron/metadata_proxy --router_id=ec11a04f-9e2a-42a4-b5c0-519ab5139e09 --state_path=/var/lib/neutron --metadata_port=9697 --verbose --log-file=neutron-ns-metadata-proxy-ec11a04f-9e2a-42a4-b5c0-519ab5139e09.log --log-dir=/var/log/neutron
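
The ns-metadata-proxy itself only relays requests over a Unix socket to neutron-metadata-agent, so that agent is the next link worth checking (socket path taken from the --metadata_proxy_socket option above; the log file name is the usual RDO default):

# on the allinone/network node
service neutron-metadata-agent status
ls -l /var/lib/neutron/metadata_proxy
grep -i error /var/log/neutron/metadata-agent.log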

But if I run netstat -atpn | grep 9697 outside the namespace, nothing is returned.

netstat -atpn | grep -i 8775
tcp        0      0 0.0.0.0:8775                0.0.0.0:*                   LISTEN      2516/python
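
Port 8775 is the nova metadata API (served by nova-api on an allinone install), which is the final hop behind the neutron metadata agent. As a sanity check it should answer a bare GET with a short list of API versions (assuming the usual EC2-style behaviour):

curl -s http://127.0.0.1:8775/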

Status of the various OpenStack services:

openstack-ceilometer-alarm-evaluator (pid  2182) is running...
openstack-ceilometer-alarm-notifier (pid  2191) is running...
openstack-ceilometer-api (pid  2207) is running...
openstack-ceilometer-central (pid  2216) is running...
openstack-ceilometer-collector (pid  2226) is running...
openstack-ceilometer-compute (pid  2236) is running...
openstack-ceilometer-agent-notification (pid  2244) is running...
openstack-cinder-api (pid  2252) is running...
openstack-cinder-backup (pid  2260) is running...
openstack-cinder-scheduler (pid  2268) is running...
openstack-cinder-volume ...
(more)

Comments

Run service rabbitmq-server status and post the output.

dbaxps ( 2014-07-01 07:45:54 -0600 )

netstat -atpn | grep -i 8775
tcp 0 0 0.0.0.0:8775 0.0.0.0:* LISTEN 2516/python
$ ps -ef | grep 2516

dbaxps ( 2014-07-01 07:48:04 -0600 )

$ ls -l /etc/neutron

dbaxps ( 2014-07-01 07:58:40 -0600 )

ps -efm | grep -i 2516

nova      2516     1  0 Jul01 ?        00:07:39 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
nova      2840  2516  0 Jul01 ?        00:00:00 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
nova      2841  2516  0 Jul01 ?        00:00:00 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
nova      2851  2516  0 Jul01 ?        00:00:10 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
nova      2852  2516  0 Jul01 ?        00:00:09 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
nova      2861  2516  0 Jul01 ?        00:00:00 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log
nova      2862  2516  0 Jul01 ?        00:00:00 /usr/bin/python /usr/bin/nova-api --logfile /var/log/nova/api.log

There are no errors in ...(more)

ankush grover ( 2014-07-02 03:19:18 -0600 )

Please post /etc/neutron/plugins/ml2/ml2_conf.ini

dbaxps ( 2014-07-02 03:25:09 -0600 )

3 answers


answered 2014-08-26 09:24:09 -0600 by luowei

I had this problem (metadata request timeout) and it turned out that the auth_region setting in /etc/neutron/metadata-agent.ini on my network node did not exactly match the region defined in the Keystone endpoint (check "keystone endpoint-list").

One was "regionOne" and the other was "RegionOne". I changed the value in the .ini file and restarted the metadata agent on my network node (service neutron-metadata-agent restart), and it no longer has the timeout failure.
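
For reference, the pieces involved look roughly like this (the region value is just an example; on a packstack allinone it is typically RegionOne):

# compare the region column from:
keystone endpoint-list

# with auth_region in /etc/neutron/metadata-agent.ini on the network node:
[DEFAULT]
auth_region = RegionOne

# then restart the agent:
service neutron-metadata-agent restart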


Comments

Also had this issue and this was the solution. Thanks!

J. Broszat ( 2014-10-06 03:47:18 -0600 )

I love you so much :) Thanks!!!

D20K ( 2014-11-01 03:35:39 -0600 )

That happened to me as well. Thank you very much!!! It's so hard to spot this kind of error, and you end up spending so much time looking for more serious problems!

SysFiller ( 2014-11-28 10:15:05 -0600 )

Yup, this worked for me... Without your answer it would have been a touch more troublesome to figure out what went wrong. Thanks, luowei!

Kartik ( 2015-01-03 05:31:54 -0600 )

answered 2014-07-28 09:28:13 -0600 by Tarciso Ferreira

I had a similar problem. In my case I edited the file /etc/neutron/plugins/ml2/ml2_conf.ini and commented out the option enable_security_group = True ( #enable_security_group = True ), and then it worked.
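
If you would rather keep security groups enabled, the usual ML2 layout pairs enable_security_group with an explicit firewall_driver. A sketch for the OVS agent (the driver class here is an assumption about this deployment, not something confirmed in this thread):

[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver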


Comments

I have this exact issue and this answer didn't work for me. Though even if it did, how do you resolve this issue and still have security groups enabled?

Jet ( 2014-08-07 12:04:51 -0600 )

answered 2015-09-02 12:55:07 -0600 by Matt Kynaston

Old question, but it's #1 on Google and I've just been through this with a CentOS 6.7 image I built myself. The Cirros image got its meta-data just fine.

It turns out the problem is with zeroconf and the route it adds. On Cirros the routing table looks like:

$ route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.100.1   0.0.0.0         UG    0      0        0 eth0
192.168.100.0   0.0.0.0         255.255.255.0   U     0      0        0 eth0

On CentOS 6 it looks like:

# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.100.0   0.0.0.0         255.255.255.0   U     0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
0.0.0.0         192.168.100.1   0.0.0.0         UG    0      0        0 eth0

The trick is to disable the zero-conf stuff in /etc/sysconfig/network-scripts/ifcfg-eth0 on your instance:

echo 'NOZEROCONF="y"' >> /etc/sysconfig/network-scripts/ifcfg-eth0

After a restart, joy!
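
To confirm, route -n should no longer show the 169.254.0.0/16 entry after the restart. On a running instance the link-local route can also be dropped without rebooting (a quick, non-persistent check, assuming eth0):

route del -net 169.254.0.0 netmask 255.255.0.0 dev eth0
route -n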


Comments

Confirming that we ran into this same problem and this solution worked fine.

slmingol ( 2015-09-25 08:51:29 -0600 )
