169.254.169.254 is driving me insane

asked 2011-05-27 21:07:24 -0600

it-tech

Hi all,

In a multi-node environment, has anyone succeeded in booting images (Ubuntu with cloud-init) that request metadata from the IP 169.254.169.254?

NOTE: instances boot up fine on the controller node (which also runs compute services), but no luck getting instances running on the ADDITIONAL compute node :(

Already tried (roughly sketched as commands below):

  • an iptables rule on the controller/network node
  • the 169.254.169.254 address on the loopback of the controller/network node
  • the 169.254.169.254 address on br100 of the controller/network node
  • a static route on the additional compute node
  • an iptables rule on the additional compute node
  • the 169.254.169.254 address on the loopback of the additional compute node
  • the 169.254.169.254 address on br100 of the additional compute node
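Roughly, those attempts correspond to commands like the following (a sketch with a placeholder $CONTROLLER for the controller/API host, not necessarily the exact commands used):

# DNAT metadata requests to the nova-api host
iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination $CONTROLLER:8773

# add the metadata address to the loopback interface
ip addr add 169.254.169.254/32 dev lo

# static route towards the controller from the additional compute node
ip route add 169.254.169.254/32 via $CONTROLLER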

Still the same ugly message !

Tcpdumping all over the network, the reply is sent back from the network node as 169.254.169.254 but never gets into the VM.

The TCP reply comes back from the controller IP with source port 8773, instead of from 169.254.169.254 port 80.
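For reference, the traffic can be watched on the bridge with something like this (a sketch; br100 is the bridge name from the configs below):

tcpdump -n -i br100 'port 80 or port 8773'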

Any clues on how to get this working?

Best Regards

lele


9 answers


answered 2011-05-27 22:01:04 -0600

it-tech

We're in FlatDHCP mode, with two bonded interfaces on the additional compute node.

This is my network config on the controller/network node:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet 169.254.169.254/32 scope link lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 68:b5:99:6b:47:80 brd ff:ff:ff:ff:ff:ff
    inet 172.16.144.5/32 scope global eth0
    inet 172.16.144.19/32 scope global eth0
    inet 172.16.144.20/32 scope global eth0
    inet 172.16.144.21/32 scope global eth0
    inet6 fe80::6ab5:99ff:fe6b:4780/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 68:b5:99:6b:47:81 brd ff:ff:ff:ff:ff:ff
4: br100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 68:b5:99:6b:47:80 brd ff:ff:ff:ff:ff:ff
    inet 172.16.144.13/24 brd 172.16.144.255 scope global br100
    inet 10.0.0.1/25 brd 10.0.0.127 scope global br100
    inet6 fe80::6ab5:99ff:fe6b:4780/64 scope link
       valid_lft forever preferred_lft forever
5: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether be:15:35:ee:7b:62 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0

This is my IP config on the additional compute node:

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
    link/ether d4:85:64:6a:52:32 brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
    link/ether d4:85:64:6a:52:32 brd ff:ff:ff:ff:ff:ff
4: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether d4:85:64:6a:52:32 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::d685:64ff:fe6a:5232/64 scope link
       valid_lft forever preferred_lft forever
5: br100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether d4:85:64:6a:52:32 brd ff:ff:ff:ff:ff:ff
    inet 172.16.144.12/24 brd 172.16.144.255 scope global br100
    inet6 fe80::23:96ff:fe4a:fdd8/64 scope link
    ... (more)


answered 2011-06-09 22:55:29 -0600

it-tech

Solved already !


answered 2011-12-19 09:22:03 -0600

emilienm

I have the same problem.

I run a multi-node setup with StackOps, with 4 servers: controller, network, volume, and compute.

There is no PREROUTING rule on the compute server, but I still have this issue when I launch a VM:

waiting for metadata service at http://169.254.169.254/2009-04-04/meta-data/instance-id
09:01:08 [ 1/100]: url error [timed out]
09:01:09 [ 2/100]: url error [[Errno 113] No route to host]

On the network server, I did:

1) Rename /etc/init/nova-api.conf.disable to /etc/init/nova-api.conf

2) Modify the file and change the line NOVA_CONF=/etc/nova/nova-controller.conf to NOVA_CONF=/etc/nova/nova-network.conf

3) Open /etc/nova/nova-network.conf, set the parameters --ec2_dmz_host=$NOVA_NETWORK_LOCAL_IP and --ec2_host=$NOVA_NETWORK_LOCAL_IP, and add the new parameter --my_ip=$NOVA_NETWORK_LOCAL_IP

4) Start nova-api: service nova-api start

5) Restart nova-network: service nova-network stop; service nova-network start

6) Check that the iptables rule for 169.254.169.254 points to $NOVA_NETWORK_LOCAL_IP

iptables-save

-A nova-network-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.15.10.130:8773
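In shell form, the steps above look roughly like this (a sketch assuming the StackOps paths quoted above, with $NOVA_NETWORK_LOCAL_IP standing for the network node's address):

mv /etc/init/nova-api.conf.disable /etc/init/nova-api.conf
# in /etc/init/nova-api.conf: NOVA_CONF=/etc/nova/nova-network.conf
# in /etc/nova/nova-network.conf:
#   --ec2_dmz_host=$NOVA_NETWORK_LOCAL_IP
#   --ec2_host=$NOVA_NETWORK_LOCAL_IP
#   --my_ip=$NOVA_NETWORK_LOCAL_IP
service nova-api start
service nova-network stop; service nova-network start
iptables-save -t nat | grep 169.254.169.254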

And it still does not work!!

My public network is 10.25.0.0/16 (intranet) and my private network is 192.168.0.0/24.

If you need more information, just tell me.

Thanks!

Regards,

Emilien


answered 2011-06-16 21:27:59 -0600

graham-hemingway

I have this same issue. I have tried adding the above line on my nova-compute node, but the rule never gets hit; instead the request is sent on to the nova-network machine. Any help would be appreciated.
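One way to confirm whether the rule is being hit is to watch its packet counters (a generic check, not specific to nova):

iptables -t nat -L PREROUTING -v -n
# the pkts/bytes counters for the DNAT rule should increase while an instance is booting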

Cheers, Graham


answered 2011-05-27 21:17:49 -0600

vishvananda

Are you in FlatDHCP mode with one interface? If so, you need to put br100 on the network node into promiscuous mode.

Otherwise, there may be a subnet conflict between host and guest IPs; I would check the values of --my_ip and --fixed_range.
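For example (a sketch; the flag file path /etc/nova/nova.conf is an assumption, adjust to your setup):

# put the bridge into promiscuous mode on the network node
ip link set br100 promisc on

# check the flag values for a host/guest subnet overlap
grep -E 'my_ip|fixed_range' /etc/nova/nova.conf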

Vish



answered 2011-06-10 14:32:55 -0600

everett-toews

Great to hear that you solved it. Can you include your solution please?

Everett



answered 2011-06-10 15:28:15 -0600

it-tech

The thing was... you have to add THIS LINE to every compute node:

iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination $CONTROLLER:8773

and MAKE SURE that the instances get a VALID IP address on the VLAN; that was the key for us. If an instance gets an address outside the range the compute/controller nodes use, the metadata never works.

We used FlatNetwork for that. Hope it helps.
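A quick way to verify from inside a freshly booted instance (a sketch, using the same metadata path as the cloud-init error above):

curl http://169.254.169.254/2009-04-04/meta-data/instance-id
# should print the instance id if the DNAT rule on the compute node is working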

Regards.


answered 2011-09-06 20:31:38 -0600

Hi Graham, I suppose you have already solved your problem. Just in case, maybe this post could be useful for you: http://hugokuo-hugo.blogspot.com/2011/05/prerouting-169254169254-should-not-be.html . According to that, in a multi-node deployment, the iptables PREROUTING rule must NOT be included on each compute node.

Regards, Leandro


answered 2011-09-07 15:16:15 -0600

graham-hemingway

Leandro,

We did have to put the PREROUTING rule into iptables on all of our compute nodes. It looks like this:

-A PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.50.200:8773
-A OUTPUT -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.50.200:8773
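In iptables command form (a sketch equivalent to the saved rules above), those would be added as:

iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.50.200:8773
iptables -t nat -A OUTPUT -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 192.168.50.200:8773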

Just for good measure we put in the OUTPUT rule as well. This solved our problem and we have not had any issues since.

Cheers, Graham
