
Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed : Network is unreachable

asked 2014-11-08 09:20:04 -0500 by Ashish Jain, updated 2014-11-09 09:39:28 -0500

I have two servers, both running an OpenStack Icehouse installation on Ubuntu 14.04.1:

  • Server1 - single-node installation: controller, compute and network services on one system

  • Server2 - compute node: only the compute service is running

I'm able to launch the Ubuntu 14.04 cloud image on Server1 successfully, but when I launch an Ubuntu VM on Server2 it gives:

Calling 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [46/120s]: request error [HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /2009-04-04/meta-data/instance-id (Caused by <class 'socket.error'>: [Errno 101] Network is unreachable)]
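
(For reference, the failing call can be reproduced by hand from inside the booted guest once it has an address; this is the same URL that cloud-init retries, only the timings will differ:)

$ curl -v http://169.254.169.254/2009-04-04/meta-data/instance-id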

nova.conf on Server1:

[DEFAULT]
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
iscsi_helper=tgtadm
libvirt_use_virtio_for_bridges=True
connection_type=libvirt
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
verbose=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
volumes_path=/var/lib/nova/volumes
enabled_apis=ec2,osapi_compute,metadata

rpc_backend = rabbit
rabbit_host = controller
rabbit_password = openstack

my_ip = controller
vncserver_listen = controller
vncserver_proxyclient_address = controller
novncproxy_base_url = http://10.1.12.2:6080/vnc_auto.html

auth_strategy = keystone

glance_host = controller

network_api_class = nova.network.neutronv2.api.API
neutron_url = http://controller:9696
neutron_auth_strategy = keystone
neutron_admin_tenant_name = service
neutron_admin_username = neutron
neutron_admin_password = openstack
neutron_admin_auth_url = http://controller:35357/v2.0
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
security_group_api = neutron

metadata_host=10.1.12.2
service_neutron_metadata_proxy = true
neutron_metadata_proxy_shared_secret = openstack

[database]
connection = mysql://nova:openstack@controller/nova

[keystone_authtoken]
auth_uri = http://controller:5000
auth_host = controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = openstack
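
Not shown here, but for the metadata settings above to be accepted, the neutron metadata agent on the network node is expected to carry matching values. A minimal sketch of /etc/neutron/metadata_agent.ini, assuming a stock Icehouse install and the same credentials as above (the region name is a placeholder):

[DEFAULT]
auth_url = http://controller:5000/v2.0
auth_region = regionOne
admin_tenant_name = service
admin_user = neutron
admin_password = openstack
nova_metadata_ip = controller
metadata_proxy_shared_secret = openstack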

nova.conf on Server2:

[DEFAULT]
dhcpbridge_flagfile=/etc/nova/nova.conf
dhcpbridge=/usr/bin/nova-dhcpbridge
logdir=/var/log/nova
state_path=/var/lib/nova
lock_path=/var/lock/nova
force_dhcp_release=True
iscsi_helper=tgtadm
libvirt_use_virtio_for_bridges=True
connection_type=libvirt
root_helper=sudo nova-rootwrap /etc/nova/rootwrap.conf
verbose=True
ec2_private_dns_show_ip=True
api_paste_config=/etc/nova/api-paste.ini
volumes_path=/var/lib/nova/volumes
enabled_apis=ec2,osapi_compute,metadata

auth_strategy=keystone

rpc_backend = rabbit
rabbit_host = controller
rabbit_password = openstack

my_ip = 10.1.12.17
vnc_enabled = True
vncserver_listen = 10.1.12.17
vncserver_proxyclient_address = 10.1.12.17
novncproxy_base_url = http://controller:6080/vnc_auto.html

glance_host = controller

network_api_class = nova.network.neutronv2.api.API
neutron_url = http://controller:9696
neutron_auth_strategy = keystone
neutron_admin_tenant_name = service
neutron_admin_username = neutron
neutron_admin_password = openstack
neutron_admin_auth_url = http://controller:35357/v2.0
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
security_group_api = neutron

metadata_host = 10.1.12.2
service_neutron_metadata_proxy = true
neutron_metadata_proxy_shared_secret = openstack

[database]
connection = mysql://nova:openstack@controller/nova

[keystone_authtoken]
auth_uri = http://controller:5000
auth_host = controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = nova
admin_password = openstack

ovs-vsctl show on compute node:

Bridge br-int
        fail_mode: secure
        Port "int-br-eth0"
            Interface "int-br-eth0"
        Port "qvo454b3c53-1f"
            tag: 3
            Interface "qvo454b3c53-1f"
        Port br-int
            Interface br-int
                type: internal
    Bridge "br-eth0"
        Port "eth0"
            Interface "eth0"
        Port "br-eth0"
            Interface "br-eth0"
                type: internal
        Port "phy-br-eth0"
            Interface "phy-br-eth0"
    ovs_version: "2.0.2"

route -n on controller:

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.1.12.1       0.0.0.0         UG    0      0        0 br-ex
10.1.12.0       0.0.0.0         255 ...

Comments

Please post ovs-vsctl show on the Compute Node, and route -n, ip netns on the Controller Node.

dbaxps (2014-11-08 11:00:25 -0500)

I have edited my question and added ovs-vsctl show on the compute node; basically I have two bridges: br-int and br-eth0.

sudo netstat -lntp | grep 8775
tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN      13385/python
Ashish Jain (2014-11-08 21:55:55 -0500)

Are you running nova-network (not neutron)? The routing table on the controller is not correct.

dbaxps (2014-11-08 23:55:48 -0500)

I'm running neutron, not nova-network, and the routing table looks correct to me: 10.1.12.2 is the Server1 IP address and 10.1.12.1 is the gateway. Note: I'm able to launch the Ubuntu 14.04 cloud image / CirrOS on 10.1.12.2 (all-in-one node), but on 10.1.12.17 (Server2, compute node only) Ubuntu has problems.

Ashish Jain (2014-11-09 00:05:32 -0500)

3 answers


answered 2014-11-13 11:00:25 -0500 by Ashish Jain, updated 2014-11-13 11:03:23 -0500

Everything is working fine now!!! There were two main problems:

1 - The switch to which my servers were connected was dropping packets (because of the VLAN tags).

* I changed to a switch that can handle VLAN-tagged packets.
* Configured the network interfaces for VLAN support: https://wiki.ubuntu.com/vlan

2 - Misconfigured VM-data traffic!

  • On Server1: created br-eth1 (port eth1) and br-ex (port eth0):

        * br-eth1: management + VM-data network. This interface is used when the VMs send DHCP requests.
        * br-ex: external network. This interface is used when the VMs access http://169.254.169.254/...

  • On Server2: created br-eth1 (port eth1):

        * br-eth1: management + VM-data network.

A rough command sketch of this setup follows below.
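
The bridge and interface names (br-eth1, br-ex, eth0, eth1) are the ones listed above; the physnet label and config file location are assumptions that depend on your plugin configuration:

# VLAN support on each host (see https://wiki.ubuntu.com/vlan)
sudo apt-get install vlan
sudo modprobe 8021q

# on Server1 and Server2: bridge for the management + VM-data network
sudo ovs-vsctl add-br br-eth1
sudo ovs-vsctl add-port br-eth1 eth1

# on Server1 only: external bridge
sudo ovs-vsctl add-br br-ex
sudo ovs-vsctl add-port br-ex eth0

# the OVS agent must also map the provider network to the new bridge,
# e.g. in the [ovs] section of the plugin config (physnet1 is an assumed label):
#   bridge_mappings = physnet1:br-eth1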
    

answered 2014-11-09 01:02:58 -0500 by dbaxps, updated 2014-11-09 07:34:49 -0500

Forced to use the answer field due to formatting options.
Please view a sample from a Network Node:

[root@juno1 ~(keystone_admin)]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.1.1     0.0.0.0         UG    0      0        0 br-ex
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 enp2s0
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 enp5s1
169.254.0.0     0.0.0.0         255.255.0.0     U     1006   0        0 br-ex
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 enp5s1
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 br-ex
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0

On the Compute Node:

[root@juno2 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.1.1     0.0.0.0         UG    0      0        0 enp2s0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 enp2s0
169.254.0.0     0.0.0.0         255.255.0.0     U     1003   0        0 enp5s1
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 enp5s1
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 enp2s0

You are missing the routes to 169.254.0.0, and hence your access to the nova metadata server is broken. Please run:

$ ip netns exec qrouter-cd88cf9f-753e-4941-8234-9edcd9b2363a iptables -S -t nat | grep 169.254
$ ip netns exec qrouter-cd88cf9f-753e-4941-8234-9edcd9b2363a netstat -anpt

If it gives you the PID XXXX of a process running on port 9697, then run: $ ps -ef | grep XXXX
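
(Regarding the missing 169.254.0.0 routes mentioned above: if you want to add one by hand on the Ubuntu compute node, something along these lines should work; the interface name and metric are placeholders for your own setup:)

$ sudo ip route add 169.254.0.0/16 dev eth0 metric 1002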

You wrote in the question:

sudo netstat -lntp | grep 8775
tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN      13385/python

I would expect 13385 to be the PID of nova-api. On my Controller:

[root@juno1 ~(keystone_admin)]# netstat -lntp | grep 8775
tcp        0      0 0.0.0.0:8775            0.0.0.0:*               LISTEN      1646/python         
[root@juno1 ~(keystone_admin)]# ps -ef | grep 1646
nova      1646     1  0 08:42 ?        00:00:42 /usr/bin/python /usr/bin/nova-api
nova      3907  1646  0 08:43 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
nova      3908  1646  0 08:43 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
nova      4008  1646  0 08:43 ?        00:00:03 /usr/bin/python /usr/bin/nova-api
nova      4009  1646  0 08:43 ?        00:00:03 /usr/bin/python /usr/bin/nova-api
nova      4075  1646  0 08:43 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
nova      4076  1646  0 08:43 ?        00:00:00 /usr/bin/python /usr/bin/nova-api
root     32351  6546  0 10:37 pts/0    00:00:00 grep --color=auto 1646

In my case ...


Comments

sudo ip netns exec qrouter-cdbf3d85-fa3d-49ab-81c1-14749082beaa iptables -S -t nat | grep 169.254

-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697
Ashish Jain (2014-11-09 01:08:19 -0500)

sudo ip netns exec qrouter-cdbf3d85-fa3d-49ab-81c1-14749082beaa netstat -anpt

Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:9697            0.0.0.0:*               LISTEN      6457/python
Ashish Jain (2014-11-09 01:08:50 -0500)

I have added a DNAT rule:

iptables -t nat -A OUTPUT -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.1.12.2:8775

I'm able to curl http://169.254.169.254. Things work on Server1 (controller + compute + network) but not on Server2 (compute).

Ashish Jain (2014-11-09 01:13:16 -0500)

Please run:

ps -ef | grep 13385
ps -ef | grep 6457
dbaxps (2014-11-09 01:25:10 -0500)

ps -ef | grep 13385

ashish   19919 20324  0 13:05 pts/4    00:00:00 grep --color=auto 13385
Ashish Jain (2014-11-09 01:33:15 -0500)

answered 2015-06-10 06:16:10 -0500 by Sushitha

I would like to add one more insight to this metadata server reachability issue.

When an instance boots, it sends ARP requests to find out who has the gateway IP and who has the DHCP server IP. When the ARP request is sent from the VM, the router should respond with its MAC address. So we have to ensure that the interface IP of the neutron namespace router (which is the gateway for the VMs) and the IP of the physical interface facing the VMs' private network are different.
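
(A quick way to compare those two addresses on the network node; the router UUID and the bridge/interface name below are placeholders to be replaced with whatever ip netns and your own setup show:)

$ sudo ip netns list
$ sudo ip netns exec qrouter-<router-uuid> ip addr show
$ ip addr show br-eth1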
