Accessing VM metadata fails with Grizzly

asked 2013-06-10 06:51:48 -0500

sgoud

updated 2013-06-13 18:43:55 -0500

smaffulli

All,

After installing a standard Grizzly setup recently, I am seeing this issue: the VM is unable to get its metadata. The cloud-init console log shows:

cloud-init start running: Mon, 10 Jun 2013 10:01:35 +0000. up 4.05 seconds
2013-06-10 10:01:35,726 util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [50/120s]: socket timeout [timed out]
2013-06-10 10:02:26,782 util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [101/120s]: socket timeout [timed out]
2013-06-10 10:02:44,803 util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [119/120s]: socket timeout [timed out]
2013-06-10 10:02:45,806 DataSourceEc2.py[CRITICAL]: giving up on md after 120 seconds
no instance data found in start
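The failing URL is cloud-init's EC2-style metadata path (API version date plus resource). To reproduce the failure by hand you would fetch the same URL with curl from inside the instance; since that needs a live guest, the sketch below only assembles and prints the URL:

```shell
# EC2 datasource URL that cloud-init polls: version date + resource path.
base='http://169.254.169.254/2009-04-04'
url="$base/meta-data/instance-id"
# From inside a running instance you would check it with:
#   curl -sf --max-time 10 "$url"
echo "$url"
```

A quick success from that curl (it should print the instance ID) rules out the whole nova/quantum metadata chain at once.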

From the router namespace, I am able to reach the controller, and all the rules seem to be fine.

root@server14:/etc/init.d# ip netns exec qrouter-71a89bc2-d2b5-45e4-b87f-1186e3665732 iptables-save | grep 169.254.169.254
-A quantum-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8775
root@server14:/etc/init.d# 
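That PREROUTING rule is what makes the metadata address reachable: TCP traffic to 169.254.169.254:80 inside the router namespace is redirected to local port 8775, where quantum-ns-metadata-proxy listens. A minimal offline sanity check that the redirect target matches the proxy's expected port, run against the saved rule text:

```shell
# Rule as dumped by iptables-save in the qrouter namespace.
rule='-A quantum-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 8775'
# Extract the REDIRECT target port from the rule text...
redirect_port=$(echo "$rule" | sed -n 's/.*--to-ports \([0-9]*\).*/\1/p')
# ...and compare it to the port the namespace proxy listens on (8775 here).
proxy_port=8775
[ "$redirect_port" = "$proxy_port" ] && echo "redirect matches proxy port"
```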

root@server14:/etc/init.d# ip netns exec qrouter-71a89bc2-d2b5-45e4-b87f-1186e3665732 netstat -anp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN      5519/python     
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags       Type       State         I-Node   PID/Program name    Path
root@server14:/etc/init.d#

root@server14:/etc/init.d# ip netns exec qrouter-71a89bc2-d2b5-45e4-b87f-1186e3665732 ping 10.2.113.12
PING 10.2.113.12 (10.2.113.12) 56(84) bytes of data.
64 bytes from 10.2.113.12: icmp_req=1 ttl=64 time=0.299 ms
64 bytes from 10.2.113.12: icmp_req=2 ttl=64 time=0.064 ms

nova.conf content:

root@server14:/etc/init.d# cat /etc/nova/nova.conf 
[DEFAULT]
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/run/lock/nova
verbose=True
api_paste_config=/etc/nova/api-paste.ini
compute_scheduler_driver=nova.scheduler.simple.SimpleScheduler
s3_host=10.2.113.12
ec2_host=10.2.113.12
ec2_dmz_host=10.2.113.12
rabbit_host=10.2.113.12
nova_url=http://10.2.113.12:8774/v1.1/
sql_connection=mysql://novaUser:novaPass@10.2.113.12/nova
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf

# Auth
use_deprecated_auth=false
auth_strategy=keystone

# Imaging service
glance_api_servers=10.2.113.12:9292
image_service=nova.image.glance.GlanceImageService

# Vnc configuration
novnc_enabled=true
novncproxy_base_url=http://10.2.113.12:6080/vnc_auto.html
novncproxy_port=6080
vncserver_proxyclient_address=10.2.113.12
vncserver_listen=0.0.0.0

# Network settings
network_api_class=nova.network.quantumv2.api.API
quantum_url=http://10.2.113.12:9696
quantum_auth_strategy=keystone
quantum_admin_tenant_name=service
quantum_admin_username=quantum
quantum_admin_password=service_pass
quantum_admin_auth_url=http://10.2.113.12:35357/v2.0
libvirt_vif_driver=nova.virt.libvirt.vif.LibvirtHybridOVSBridgeDriver
linuxnet_interface_driver=nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver

#Metadata
service_quantum_metadata_proxy = True
quantum_metadata_proxy_shared_secret = helloOpenStack
metadata_host = 10.2.113.12
metadata_listen = 0.0.0.0
#metadata_listen_port = 8775

# Compute #
compute_driver=libvirt.LibvirtDriver

# Cinder #
volume_api_class=nova.volume.cinder ...
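The quantum_metadata_proxy_shared_secret above must match metadata_proxy_shared_secret in the quantum metadata agent's config: the proxy signs the instance ID with an HMAC-SHA256 keyed on that secret, and nova-api recomputes the signature to validate the request. A sketch of that signing step with openssl (the instance ID below is a made-up example):

```shell
secret='helloOpenStack'                      # must be identical on both sides
instance_id='3e0f4d1a-0000-0000-0000-example'  # hypothetical instance UUID
# HMAC-SHA256 over the instance ID, keyed with the shared secret; the
# metadata proxy forwards this value in the X-Instance-ID-Signature header.
sig=$(printf '%s' "$instance_id" | openssl dgst -sha256 -hmac "$secret" | awk '{print $NF}')
echo "$sig"
```

If the secrets differ, nova-api rejects the proxied request even though all the plumbing above looks healthy.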

Comments

I have the same issue. I am using GRE encapsulation between the compute and the network nodes. How did you manage to fix this problem? Thanks!

cmcaba (2013-12-03 09:02:35 -0500)

2 answers


answered 2013-06-10 07:27:45 -0500

darragh-oreilly

updated 2013-06-13 12:47:13 -0500

Not sure. Check 'netstat -an | grep 8775' on the controller. Maybe you need 'enabled_apis=osapi_compute,metadata' in the controller's nova.conf. Ensure that the namespace metadata proxy is disabled in the DHCP namespace. Check the namespace proxy and metadata agent logs.
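For reference, a sketch of the line being suggested for the controller's nova.conf (the ec2 entry is included here only because the config in the question also sets ec2_host; adjust to taste):

```ini
[DEFAULT]
enabled_apis = ec2,osapi_compute,metadata
```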

Update: maybe this is not due to connectivity; the request is getting to nova, and the response should return the same way. Why is nova-api taking so long (72 and 61 seconds)? Maybe the metadata agent timed out waiting for it.


Comments

On the controller, 'netstat -an | grep 8775' shows: tcp 0 0 0.0.0.0:8775 0.0.0.0:* LISTEN. I added enabled_apis, and I am not running any overlapping IPs. In the DHCP namespace, 'netstat -an' shows: tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN. What does that mean? How do I disable quantum-ns-metadata-proxy? The logs are empty.

sgoud (2013-06-12 05:30:38 -0500)

It seems that the namespace proxy has also started in your DHCP namespace. I can't say if this is the one the instance is talking to. Check that enable_metadata_network and enable_isolated_metadata are False in dhcp_agent.ini. You may need debug=True in l3_agent.ini, dhcp_agent.ini and quantum.conf.
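A sketch of the dhcp_agent.ini settings being suggested, assuming the default Grizzly option names:

```ini
[DEFAULT]
enable_isolated_metadata = False
enable_metadata_network = False
debug = True
```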

darragh-oreilly (2013-06-12 06:44:31 -0500)

If DHCP is trying to do the metadata proxy, I think it pushes a route for 169.254.169.254 to the instance, so check the route table there.
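The real check is 'ip route' from inside the guest; since that needs a live instance, here is a sketch against sample output (all addresses hypothetical):

```shell
# Hypothetical 'ip route' output from an instance whose DHCP agent
# injected a host route for the metadata address:
routes='default via 10.5.5.1 dev eth0
169.254.169.254 via 10.5.5.2 dev eth0
10.5.5.0/24 dev eth0 proto kernel scope link src 10.5.5.3'
# A metadata next hop at the DHCP port IP (10.5.5.2, not the router at
# 10.5.5.1) means requests bypass the qrouter namespace proxy entirely.
echo "$routes" | grep '^169.254.169.254'
```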

darragh-oreilly (2013-06-12 06:52:56 -0500)

It seems my Quantum packages are 2013.1.1-0ubuntu1~cloud0_all.deb. If I want to upgrade to 2013.1.2: 1. download the *.tar.gz from https://launchpad.net/quantum/+milestone/2013.1.2; 2. how do I install these OpenStack packages? python setup.py install?

sgoud (2013-06-14 14:01:48 -0500)

answered 2013-06-14 14:04:35 -0500

james.shimer

I struggled with the metadata service for a couple of days. I was trying to use a physical provider network, but I couldn't get it to work, so I had to give up and use GRE tunneling. As darragh-oreilly suggests, quantum uses DHCP to set the route for 169.254.169.254, but that requires tunneling. I followed the OpenStack Networking Administration Guide (bk-quantum-admin-guide.pdf); going by the demos in Appendix A, dhcp_agent.ini does seem to be key.


Comments

Thanks for your inputs. I want to upgrade the quantum package to 2013.1.2, and I have the *.tar.gz. How do I install it: just 'setup.py install', or 'setup.py install <path>'? My Python packages path is /usr/lib/python2.7/dist-packages/.

sgoud (2013-06-14 14:33:05 -0500)
