
waiting for metadata service at http://169.254.169.254/2009-04-04/meta-data/instance-id

asked 2011-02-12 00:31:57 -0500

everett-toews

I'm running an instance but I can't ssh to it or ping it. When I run euca-get-console-output I get the following:

    ...
    [ 0.728227] EXT3-fs: mounted filesystem with ordered data mode.
    [ 0.729514] VFS: Mounted root (ext3 filesystem) readonly on device 252:0.
    [ 0.731228] devtmpfs: mounted
    [ 0.731878] Freeing unused kernel memory: 800k freed
    [ 0.733335] Write protecting the kernel read-only data: 7808k
    init: plymouth-splash main process (261) terminated with status 2
    init: plymouth main process (48) killed by SEGV signal
    cloud-init running: Sat, 12 Feb 2011 00:13:30 +0000. up 2.53 seconds
    waiting for metadata service at http://169.254.169.254/2009-04-04/meta-data/instance-id
     00:13:32 [ 1/100]: url error [timed out]
     00:13:35 [ 2/100]: url error [timed out]
    ...repeats 100 times...

Some details.

  1. The instance I'm running is based on the Ubuntu 10.04 server image from http://uec-images.ubuntu.com/releases/10.04/release/ubuntu-10.04-server-uec-amd64.tar.gz

  2. I've added a forwarding rule for the metadata service IP address to iptables on the CC (see the verification sketch after this list) with:

         sudo iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 68.77.36.184:8773

  3. Output from iptables-save looks like:

         *nat
         :PREROUTING ACCEPT [413:56592]
         :OUTPUT ACCEPT [278:17152]
         :POSTROUTING ACCEPT [297:19583]
         -A PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 68.77.36.184:8773

  4. If I try to ping the metadata service from the CC I get:

         user@ubuntu:~$ ping 169.254.169.254
         PING 169.254.169.254 (169.254.169.254) 56(84) bytes of data.
         From 38.112.35.21 icmp_seq=1 Destination Host Unreachable
         From 38.112.35.21 icmp_seq=3 Destination Host Unreachable
         ...

I don't know where the 38.112.35.21 is coming from.

  5. I tried running an instance based on the Ubuntu 10.10 server image from http://uec-images.ubuntu.com/releases/10.10/release/ubuntu-10.10-server-uec-amd64.tar.gz and got the same results.

  6. I'm running nova on Ubuntu 10.10.
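
A minimal sketch of how the rule and the endpoint can be checked from the CC, assuming the DNAT target 68.77.36.184:8773 from the rule in item 2. Note that a PREROUTING rule only matches packets arriving on an interface, not traffic generated on the CC itself, and the rule only covers TCP port 80, so the failed ping in item 4 is expected (nothing actually answers ICMP at 169.254.169.254):

    # List the nat PREROUTING rules and confirm the DNAT entry is present
    sudo iptables -t nat -L PREROUTING -n

    # Talk to the EC2 API on the DNAT target directly (this bypasses the rule)
    curl http://68.77.36.184:8773/

    # From inside an instance, this is the URL cloud-init keeps retrying
    curl http://169.254.169.254/2009-04-04/meta-data/instance-id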

Any thoughts as to why the metadata service is inaccessible?

Thanks, Everett


6 answers


answered 2011-08-20 13:48:45 -0500

juanpm

Hi everyone. Over the last few days I was reading some blogs and doing some tests, and I solved my network problems on a single host with one NIC by using a dummy network adapter. Here are my nova.conf and network configurations:

    root:~# cat /etc/nova/nova.conf
    --dhcpbridge_flagfile=/etc/nova/nova.conf
    --dhcpbridge=/usr/bin/nova-dhcpbridge
    --logdir=/var/log/nova
    --state_path=/var/lib/nova
    --lock_path=/var/lock/nova
    --verbose
    --s3_host=192.168.66.1
    --rabbit_host=192.168.66.1
    --cc_host=192.168.66.1
    --network_host=192.168.66.1
    --ec2_url=http://192.168.66.1:8773/services/Cloud
    --fixed_range=192.168.0.0/24
    --network_size=65534
    --FAKE_subdomain=ec2
    --routing_source_ip=188.138.101.59
    --verbose
    --sql_connection=mysql://root:nova@192.168.66.1/nova
    --network_manager=nova.network.manager.FlatDHCPManager
    --flat_interface=dummy0
    --public_interface=eth0

    root:~# ip addr
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
        inet 169.254.169.254/32 scope link lo
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
        link/ether 00:19:99:9a:9c:04 brd ff:ff:ff:ff:ff:ff
        inet 188.138.101.59/24 brd 188.138.101.255 scope global eth0
        inet 188.138.99.184/32 scope global eth0
        inet6 fe80::219:99ff:fe9a:9c04/64 scope link
           valid_lft forever preferred_lft forever
    3: br100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
        link/ether 5e:76:28:79:a3:1b brd ff:ff:ff:ff:ff:ff
        inet 192.168.66.1/24 brd 192.168.66.255 scope global br100
        inet 192.168.0.1/25 brd 192.168.0.127 scope global br100
        inet6 fe80::484:21ff:fecf:d1ed/64 scope link
           valid_lft forever preferred_lft forever
    4: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
        link/ether 1e:4e:2c:3a:16:f9 brd ff:ff:ff:ff:ff:ff
        inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
    5: dummy0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
        link/ether 5e:76:28:79:a3:1b brd ff:ff:ff:ff:ff:ff
        inet6 fe80::5c76:28ff:fe79:a31b/64 scope link
           valid_lft forever preferred_lft forever
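
One assumption worth spelling out: the dummy0 device used as --flat_interface above presumably has to exist before nova-network starts. A minimal sketch of creating it on Ubuntu, using the stock dummy driver and its default interface name:

    # Load the dummy network driver; this creates dummy0 by default
    sudo modprobe dummy

    # Bring the interface up so nova-network can add it to the br100 bridge
    sudo ip link set dummy0 up

    # Optionally load the module at every boot (Ubuntu)
    echo dummy | sudo tee -a /etc/modules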


answered 2011-02-23 09:35:54 -0500

berendt

Metadata forwarding must be handled by the gateway, and since nova does
not do any setup in this mode, it must be done manually.  Requests to
169.254.169.254 port 80 will need to be forwarded to the api server.

There is the following method in nova/network/linux_net.py, but it is only called when using the FlatDHCP or VLAN network managers.

    def metadata_forward():
        """Create forwarding rule for metadata"""
        _confirm_rule("PREROUTING", "-t nat -s 0.0.0.0/0 "
                      "-d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT "
                      "--to-destination %s:%s" % (FLAGS.ec2_dmz_host, FLAGS.ec2_port))
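
In other words, with FlatManager an equivalent rule has to be added by hand on the gateway. A sketch of the corresponding command; API_HOST and API_PORT are placeholder shell variables for whatever ec2_dmz_host and ec2_port are in your deployment:

    # Placeholder values: point these at your nova-api host and EC2 API port
    API_HOST=68.77.36.184
    API_PORT=8773

    # Same rule that metadata_forward() would install, added manually
    sudo iptables -t nat -A PREROUTING -s 0.0.0.0/0 \
        -d 169.254.169.254/32 -p tcp -m tcp --dport 80 \
        -j DNAT --to-destination ${API_HOST}:${API_PORT}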


answered 2011-02-14 22:33:15 -0500

everett-toews

If I run the command:

    sudo ip addr add 169.254.169.254/32 scope link dev eth0

Then I can at least curl 169.254.169.254:

    user@ubuntu:~$ curl http://169.254.169.254:8773/
    1.0
    2007-01-19
    2007-03-01
    2007-08-29
    2007-10-10
    2007-12-15
    2008-02-01
    2008-09-01
    2009-04-04

That doesn't fix the problem for my instances but maybe it's a start...


answered 2011-08-13 04:14:59 -0500

juanpm

Hello everyone, I've got the same problem as Richard. I'm trying (hard) to set up Nova on a single node with just one NIC.

I can create and run instances, but:

  - I'm not able to ping these instances from the Cloud Controller / Nova host
  - The consoles of these instances show "DataSourceEc2.py[WARNING]: 10:04:11 [ 1/100]: url error [[Errno 113] No route to host]" errors

Below are my configurations:

    mgr01:~# cat /etc/lsb-release
    DISTRIB_ID=Ubuntu
    DISTRIB_RELEASE=10.10
    DISTRIB_CODENAME=maverick
    DISTRIB_DESCRIPTION="Ubuntu 10.10"

==============================================================================

    mgr01:~# cat /etc/nova/nova.conf
    --dhcpbridge_flagfile=/etc/nova/nova.conf
    --dhcpbridge=/usr/bin/nova-dhcpbridge
    --logdir=/var/log/nova
    --state_path=/var/lib/nova
    --lock_path=/var/lock/nova
    --verbose
    --s3_host=188.138.101.59
    --rabbit_host=188.138.101.59
    --cc_host=188.138.101.59
    --ec2_url=http://188.138.101.59:8773/services/Cloud
    --fixed_range=192.168.0.0/24
    --network_size=256
    --FAKE_subdomain=ec2
    --routing_source_ip=188.138.101.59
    --verbose
    --sql_connection=mysql://root:pass@188.138.101.59/nova
    --network_manager=nova.network.manager.FlatDHCPManager
    --flat_network_dhcp_start=192.168.0.2
    --flat_interface=eth0
    --flat_injected=False
    --public_interface=eth0

==============================================================================

    mgr01:~# ip addr
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
        inet 169.254.169.254/32 scope link lo
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
        link/ether 00:19:99:9a:9c:04 brd ff:ff:ff:ff:ff:ff
        inet6 fe80::219:99ff:fe9a:9c04/64 scope link
           valid_lft forever preferred_lft forever
    3: br100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
        link/ether 00:19:99:9a:9c:04 brd ff:ff:ff:ff:ff:ff
        inet 188.138.101.59/24 brd 188.138.101.255 scope global br100
        inet 192.168.0.1/26 brd 192.168.0.63 scope global br100
        inet6 fe80::219:99ff:fe9a:9c04/64 scope link
           valid_lft forever preferred_lft forever
    4: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
        link/ether e2:6b:78:fa:f5:8c brd ff:ff:ff:ff:ff:ff
        inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0

==============================================================================

    mgr01:~# nova-manage service list
    mgr01.cloudzen.com.ar  nova-network    enabled  :-)  2011-08-13 03:56:12
    mgr01.cloudzen.com.ar  nova-compute    enabled  :-)  2011-08-13 03:56:12
    mgr01.cloudzen.com.ar  nova-scheduler  enabled  :-)  2011-08-13 03:56:10

==============================================================================

    mgr01:~# nova list
    +----+----------+--------+-----------+-------------+
    | ID | Name     | Status | Public IP | Private IP  |
    +----+----------+--------+-----------+-------------+
    | 1  | Server 1 | ACTIVE |           | 192.168.0.2 |
    | 2  | Server 2 | ACTIVE |           | 192.168.0.3 |
    | 3  | Server 3 | ACTIVE |           | 192.168.0.4 |
    +----+----------+--------+-----------+-------------+

==============================================================================

    mgr01:~# iptables -t nat -L -v
    Chain PREROUTING (policy ACCEPT 1548 packets, 246K bytes)
     pkts bytes target                   prot opt in   out  source    destination
     1548  246K nova-compute-PREROUTING  all  --  any  any  anywhere  anywhere
     1548  246K nova-network-PREROUTING  all  --  any  any  anywhere  anywhere

    Chain POSTROUTING (policy ACCEPT 1543 packets, 243K bytes)
     pkts ... (more)
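
One thing worth checking in output like this is whether nova-network actually installed the metadata DNAT rule anywhere in the nat table. A quick check (a sketch, not tied to any particular chain name):

    # Dump all nat rules and look for the 169.254.169.254 -> API DNAT
    # that the FlatDHCP/VLAN managers are supposed to install
    sudo iptables -t nat -S | grep 169.254.169.254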


answered 2011-07-12 08:19:45 -0500

I'm hitting the same issue in a single-node installation with one NIC enabled. Furthermore, I only have a 10.x.x.x subnet, not a 192.168.x.x subnet. We have one DHCP server and gateway (10.0.0.1) and are not allowed to run another DHCP server, since the two would conflict on the same network. How do I configure OpenStack's networking to avoid this issue in my case? Thanks a lot.

Here is my nova.conf:

    --dhcpbridge_flagfile=/etc/nova/nova.conf
    --dhcpbridge=/usr/bin/nova-dhcpbridge
    --logdir=/var/log/nova
    --state_path=/var/lib/nova
    --lock_path=/var/lock/nova
    --verbose
    --s3_host=10.101.1.142
    --rabbit_host=10.101.1.142
    --cc_host=10.101.1.142
    --ec2_url=http://10.101.1.142:8773/services/Cloud
    --fixed_range=192.168.0.0/16
    --network_size=64
    --FAKE_subdomain=ec2
    --routing_source_ip=10.101.1.142
    --verbose
    --sql_connection=mysql://root:iforgot@10.101.1.142/nova
    --network_manager=nova.network.manager.FlatManager

The error messages in the console log are below:

    2011-07-11 10:04:11,496 - DataSourceEc2.py[WARNING]: waiting for metadata service at http://169.254.169.254/2009-04-04/meta-data/instance-id
    2011-07-11 10:04:11,498 - DataSourceEc2.py[WARNING]: 10:04:11 [ 1/100]: url error [[Errno 113] No route to host]


answered 2011-02-23 16:51:53 -0500

everett-toews

This problem was fixed when I switched to FlatDHCPManager. See https://answers.launchpad.net/nova/+question/145820
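
For anyone hitting this later, the change boils down to a few network flags in /etc/nova/nova.conf, roughly like the sketch below. The interface, DHCP start address and fixed range are only example values taken from the configurations posted in the other answers, so adapt them to your own network (and see the dummy-interface answer above if you only have one NIC). With FlatDHCPManager, nova-network also installs the 169.254.169.254 DNAT rule itself via the metadata_forward() method quoted in berendt's answer, so the manual PREROUTING rule from the question shouldn't be needed.

    # Example values only: adjust flat_interface, the DHCP start address
    # and fixed_range to match your own network
    --network_manager=nova.network.manager.FlatDHCPManager
    --flat_interface=eth0
    --flat_network_dhcp_start=192.168.0.2
    --fixed_range=192.168.0.0/24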
