
Instances cannot get outbound and/or inbound connectivity [closed]

asked 2013-11-23 15:54:55 -0600 by desalvo

updated 2013-11-23 15:55:43 -0600

Hi,

I'm trying to run an instance on the internal network on OpenStack Havana. My test setup consists of a controller machine and a compute node, both having eth0 connected to the public network and eth1 in promiscuous mode without any IP. I have successfully set up Neutron with GRE tunnels, and the instance does get its internal address via DHCP, but it has no access to the outside network. The same is true for incoming connections, so I cannot ssh to it either.
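(A quick way to double-check the tunnel side, assuming the default br-int/br-tun layout from the Havana install guides, is to inspect the OVS configuration on each node; output abbreviated, local/remote IPs redacted:)

# ovs-vsctl show
    Bridge br-tun
        Port "gre-1"
            Interface "gre-1"
                type: gre
                options: {in_key=flow, local_ip="...", out_key=flow, remote_ip="..."}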

I guess there is something wrong with my network setup, but I do not know where. My network settings are the following:

# neutron net-list
+--------------------------------------+----------+------------------------------------------------------+
| id                                   | name     | subnets                                              |
+--------------------------------------+----------+------------------------------------------------------+
| f96f473f-d31b-41d7-8d4f-c72132799816 | internal | 9814f71d-4db0-4b72-a62d-c31944817d6b 10.0.1.0/24     |
| d56aea09-9bd2-41c3-bb1f-c8ad03e0f24e | external | 6d4085af-61e8-421a-9ff0-30b5787f63b3 141.108.38.0/23 |
+--------------------------------------+----------+------------------------------------------------------+

# neutron subnet-list
+--------------------------------------+------------+-----------------+------------------------------------------------------+
| id                                   | name       | cidr            | allocation_pools                                     |
+--------------------------------------+------------+-----------------+------------------------------------------------------+
| 9814f71d-4db0-4b72-a62d-c31944817d6b | t2atlas-in | 10.0.1.0/24     | {"start": "10.0.1.2", "end": "10.0.1.254"}           |
| 6d4085af-61e8-421a-9ff0-30b5787f63b3 | t2atlas-ex | 141.108.38.0/23 | {"start": "141.108.38.90", "end": "141.108.38.90"}   |
|                                      |            |                 | {"start": "141.108.38.169", "end": "141.108.38.169"} |
|                                      |            |                 | {"start": "141.108.39.92", "end": "141.108.39.94"}   |
+--------------------------------------+------------+-----------------+------------------------------------------------------+

# neutron router-list
+--------------------------------------+---------+-----------------------------------------------------------------------------+
| id                                   | name    | external_gateway_info                                                       |
+--------------------------------------+---------+-----------------------------------------------------------------------------+
| be94966e-9853-4c78-a463-5ed32ff7108f | router1 | {"network_id": "d56aea09-9bd2-41c3-bb1f-c8ad03e0f24e", "enable_snat": true} |
+--------------------------------------+---------+-----------------------------------------------------------------------------+

# neutron router-port-list router1
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                            |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
| 67f1e9f4-0370-4218-8dff-c4b250b9c33c |      | fa:16:3e:d7:fa:47 | {"subnet_id": "9814f71d-4db0-4b72-a62d-c31944817d6b", "ip_address": "10.0.1.1"}      |
| 47f880a3-e8d1-4eee-817f-71909c7145bc |      | fa:16:3e:9b:42:16 | {"subnet_id": "6d4085af-61e8-421a-9ff0-30b5787f63b3", "ip_address": "141.108.38.90"} |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+

# neutron port-list
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                            |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+
| 67f1e9f4-0370-4218-8dff-c4b250b9c33c |      | fa:16:3e:d7:fa:47 | {"subnet_id": "9814f71d-4db0-4b72-a62d-c31944817d6b", "ip_address": "10.0.1.1"}      |
| 47f880a3-e8d1-4eee-817f-71909c7145bc |      | fa:16:3e:9b:42:16 | {"subnet_id": "6d4085af-61e8-421a-9ff0-30b5787f63b3", "ip_address": "141.108.38.90"} |
| 22b48142-77b2-443c-9b03-37cf6a7e8b02 |      | fa:16:3e:82:09:1a | {"subnet_id": "9814f71d-4db0-4b72-a62d-c31944817d6b", "ip_address": "10.0.1.2"}      |
| da32577d-cd87-4472-b428-c5ad47a06a79 |      | fa:16:3e:1e:0e:e2 | {"subnet_id": "9814f71d-4db0-4b72-a62d-c31944817d6b", "ip_address": "10.0.1.3"}      |
+--------------------------------------+------+-------------------+--------------------------------------------------------------------------------------+

Also, all the services and agents are up:

# neutron agent-list
+--------------------------------------+--------------------+-----------------------------------+-------+----------------+
| id                                   | agent_type         | host                              | alive | admin_state_up |
+--------------------------------------+--------------------+-----------------------------------+-------+----------------+
| 36ace84f-ae96-4af8-b2b8-17132ac10fe2 | Open vSwitch agent | atlas-cloud-fe-test.roma1.infn.it | :-)   | True           |
| ae2551ec-00e2-4db6-be3b-532a96ee1c16 | DHCP agent         | atlas-cloud-fe-test.roma1.infn.it | :-)   | True           |
| fc8cc6ec-66be-4f28-a392-0e5d0d64a8d4 | Open vSwitch agent | atlas-wn-323.roma1.infn.it        | :-)   | True           |
| 9a623a4b-c7ce-4504-9cb9-b06f95beaa27 | L3 agent           | atlas-cloud-fe-test.roma1.infn.it | :-)   | True           |
+--------------------------------------+--------------------+-----------------------------------+-------+----------------+

# nova service-list
+------------------+-----------------------------------+----------+---------+-------+----------------------------+-----------------+
| Binary           | Host                              | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+------------------+-----------------------------------+----------+---------+-------+----------------------------+-----------------+
| nova-consoleauth | atlas-cloud-fe-test.roma1.infn.it | internal | enabled | up    | 2013-11-23T21:52:16.000000 | None            |
| nova-scheduler   | atlas-cloud-fe-test.roma1.infn.it | internal | enabled | up    | 2013-11-23T21:52:16.000000 | None            |
| nova-conductor   | atlas-cloud-fe-test.roma1.infn.it | internal | enabled | up    | 2013-11-23T21:52:16.000000 | None            |
| nova-cert        | atlas-cloud-fe-test.roma1.infn.it | internal | enabled | up    | 2013-11-23T21:52:16.000000 | None            |
| nova-compute     | atlas-wn-323.roma1.infn.it        | nova     | enabled | up    | 2013-11-23T21:52:16.000000 | None            |
+------------------+-----------------------------------+----------+---------+-------+----------------------------+-----------------+

So, when I start an instance I see it's running:

# nova list
+--------------------------------------+--------+--------+------------+-------------+-------------------+
| ID                                   | Name   | Status | Task State | Power State | Networks          |
+--------------------------------------+--------+--------+------------+-------------+-------------------+
| f64d68e5-3665-41ea-abc5-efcd2a2b12a2 | testvm | ACTIVE | None       | Running     | internal=10.0.1.3 |
+--------------------------------------+--------+--------+------------+-------------+-------------------+

And in fact I also see the interface created, as shown above. But no external connectivity is available from the VM, and I cannot ssh to it from outside. I've also tried to assign a floating IP, but even that is not reachable from outside ...
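(For completeness, a sketch of the usual way a floating IP is assigned on Havana, plus the security-group rules that are the usual suspects when ssh and ping fail; the address is one from the external pool above:)

# neutron floatingip-create external
# nova add-floating-ip testvm 141.108.38.169
# nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
# nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0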


Closed for the following reason: the question is answered, right answer was accepted by koolhead17
Close date: 2014-02-18 15:20:23

3 answers


answered 2013-11-23 18:12:42 -0600 by medhat

eth1 on the compute nodes and the controller node should have an IP address from any private network range of your choice (see link text). eth0 should be in promiscuous mode, and you must set the newly created br-ex interface to have the IP address that formerly belonged to eth0 (see link text).
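(A minimal sketch of what that looks like on a RHEL-style node, assuming br-ex was created with the standard ovs-vsctl steps from the install guides; the address and netmask must be the ones formerly configured on eth0:)

# ovs-vsctl add-port br-ex eth0    # enslave the public NIC to br-ex

# /etc/sysconfig/network-scripts/ifcfg-eth0 -- no IP of its own anymore
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-br-ex -- takes over eth0's old address
DEVICE=br-ex
ONBOOT=yes
BOOTPROTO=static
IPADDR=<IP formerly on eth0>
NETMASK=255.255.254.0              # /23, matching the external subnet above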


answered 2013-11-24 04:17:22 -0600 by medhat

updated 2013-11-24 04:26:07 -0600

Check this document, it describes how the metadata service works: link text
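(A quick sanity check, under the usual assumptions: the EC2-style metadata URL should answer from inside the VM, and the backing nova metadata service should be listening on port 8775 on the controller:)

# from inside the instance
$ curl http://169.254.169.254/latest/meta-data/instance-id

# on the controller
# netstat -lntp | grep 8775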


Comments

Thanks! Well, in fact I now see the problem. In the instance there is a route set at boot time, visible with route -n:

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.0.1.0        0.0.0.0         255.255.255.0   U     0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
0.0.0.0         10.0.1.1        0.0.0.0         UG    0      0        0 eth0

Removing the 169.254.0.0 route makes the metadata available to the instance, but at the next boot it reappears, broken again. I need to figure out what is setting this route. I see the routing output in console.log just after cloud-init starts, but I'm not sure it is coming from it.

desalvo (2013-11-24 05:03:25 -0600)
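(The manual workaround, for the record; net-tools syntax, run as root inside the instance:)

# route del -net 169.254.0.0 netmask 255.255.0.0 dev eth0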

The APIPA (zeroconf) config sets this route. Disable it by editing /etc/sysconfig/network and appending NOZEROCONF=yes.

medhat (2013-11-24 05:21:47 -0600)
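(The resulting file on a RHEL-style guest would look roughly like this; the NETWORKING line stands in for whatever is already there:)

# /etc/sysconfig/network
NETWORKING=yes
NOZEROCONF=yes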

Indeed, I found this myself in the meantime, in this bug: https://bugzilla.redhat.com/show_bug.cgi?id=983611. I've tested it manually and it works. I'm rebuilding the image with oz now, and I think it will work. Many thanks for all the help.

desalvo (2013-11-24 05:26:48 -0600)

I confirm it works after recreating the images with NOZEROCONF=yes via oz. So, it's all working, at least for now! I think we may close this thread. Thanks again for the help!

desalvo (2013-11-24 13:06:15 -0600)

answered 2013-11-23 20:04:38 -0600 by desalvo

Actually my setup was already very similar to what you describe; it's just that eth0 and eth1 are swapped with respect to your schema, and my eth0 has another public IP instead of a private one.

In fact the solution was much simpler: the network was routed correctly, but the DNS server was set to the internal gateway, which was wrong, so external addresses were not resolving. I had to recreate my original internal subnet specifying the correct DNS, and now it's working!
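(With Havana's client that boils down to something like the following; the subnet must be detached from the router and free of ports first, and the nameserver address is a placeholder for our site DNS:)

# neutron subnet-delete t2atlas-in
# neutron subnet-create --name t2atlas-in --dns-nameserver <site DNS IP> internal 10.0.1.0/24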

I still have one more problem: the metadata are not available to the instances. The service is running properly on the controller node, on port 8775, but cloud-init cannot access http://169.254.169.254 from the instance. Do you know where I should check? Again, I do not see any strange messages in the logs.

