Can't reach metadata server from VM running in single instance setup

I've set up a single-node Nova VM using these instructions from ilearnstack. The problem is that instances launched in this environment can't reach the metadata server. With CirrOS the error is:

cloud-setup: checking http://169.254.169.254/2009-04-04/meta-data/instance-id
wget: can't connect to remote host (169.254.169.254): No route to host

The VM is successfully receiving an IP address.

Since this setup doesn't seem to run an L3 agent, I tried setting enable_isolated_metadata = True in /etc/quantum/dhcp_agent.ini (as per this message). That didn't make any difference.
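For reference, the exact change I made (the service name below is what it is on this Ubuntu/Folsom-era node — adjust for your distro):

```shell
# /etc/quantum/dhcp_agent.ini  (in the [DEFAULT] section)
#   enable_isolated_metadata = True

# Restart the DHCP agent afterwards so it picks up the change:
sudo service quantum-dhcp-agent restart
```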

I'm guessing there's something fundamentally wrong with what I've done, as I can't even ping the VM from the host (I presume that should be possible?).

Edit: some additional details and questions. The routing table is included below. When I launch a VM I can't ping it from the host node (I've updated the security groups to allow ICMP). I'm using VLAN tenant networks; quantum creates an eth1.1000 device for this purpose. I was wondering, though: will the L2 agent drop incoming frames on eth1 if they aren't tagged with VLAN ID 1000 (which presumably they won't be if I'm pinging from the host)?
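For anyone answering: this is how I've been trying to see whether the frames are tagged at all — a rough sketch; tcpdump's -e flag prints the link-level header, so 802.1Q-tagged frames show a vlan field:

```shell
# Capture on the physical NIC with link-level headers;
# tagged frames will show "802.1Q (0x8100) ... vlan 1000"
sudo tcpdump -e -n -i eth1 &

# Ping the instance from the host while capturing
ping -c 3 10.0.0.4
```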

Some details about the environment:

The Nova VM has three NICs:

# Host-only network
auto eth0
iface eth0 inet static
address 10.10.100.51
netmask 255.255.255.0

# Internal network
auto eth1
iface eth1 inet static
address 192.168.20.10
netmask 255.255.255.0

# NAT network
auto eth2
iface eth2 inet dhcp

Other details:

  • Using Linux Bridge
  • When I launched a VM, the host node got five new network devices: ns-c056d062-7e (10.0.0.3), tap31db1cf1-05, tapc056d062-7e, brq08772967-38 (bridging the previous devices), and eth1.1000

Bridge details:

# brctl show
bridge name      bridge id          STP enabled   interfaces
brq08772967-38   8000.080027872ea9  no            eth1.1000
                                                  tap57bc3d92-15
                                                  tapc056d062-7e

Routes on Host:

Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.0.4.2        0.0.0.0         UG    100    0        0 eth2
10.0.4.0        0.0.0.0         255.255.255.0   U     0      0        0 eth2
10.10.100.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
192.168.20.0    0.0.0.0         255.255.255.0   U     0      0        0 eth1
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0

This routing table looks wrong to me. The VM I created is attached to eth1 (well, eth1.1000). I guess this means I should have a route for the VM network (10.0.0.0/24) going to eth1. I created that route with route add -net 10.0.0.0/24 dev eth1 and tried pinging again; I still don't get a response. With tcpdump -n -i eth1 I can see the ARP requests, but there's no reply:

14:35:43.815162 ARP, Request who-has 10.0.0.4 tell 192.168.20.10, length 28
14:35:44.811970 ARP, Request who-has 10.0.0.4 tell 192.168.20.10, length 28
14:35:45.811981 ARP, Request who-has 10.0.0.4 tell 192.168.20.10, length 28
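
For completeness, the exact commands from the paragraph above (run on the host node):

```shell
# Add a route for the VM network via eth1, then retry the ping
sudo route add -net 10.0.0.0/24 dev eth1
ping -c 3 10.0.0.4

# In a second terminal, watch eth1 -- the ARP requests shown
# above appear, but nothing ever answers them
sudo tcpdump -n -i eth1 arp
```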