Icehouse RHEL external port down | VMs with external IPs not accessible

asked 2014-04-24 18:57:47 -0500 by dro

updated 2014-04-25 14:29:19 -0500

I have several issues with my new installation, but I'll ask them as separate questions on ask.openstack.

I did not install the Open vSwitch plugin; I am using the ML2 plugin.

The issue is networking with my new VMs. I have a demo network, demo subnet, and demo router. The router has two interfaces, one for the internal network and the other for the external. I am able to ping the external port IP 172.16.11.10, but its state says DOWN. The VM IPs are 172.16.11.16 and 172.16.11.18, and I am not able to ping any VMs with public IPs on the 172.16.11.0/16 network. I have two VMs currently running.

Why does the router gateway port say it's DOWN? The status shows DOWN, yet I can ping 172.16.11.10, but that's it.

    [root@controller ~]# neutron port-show 82f675e4-3843-48b2-af99-da7d98139ae1
    +-----------------------+-------------------------------------------------------------------------------------+
    | Field                 | Value                                                                               |
    +-----------------------+-------------------------------------------------------------------------------------+
    | admin_state_up        | True                                                                                |
    | allowed_address_pairs |                                                                                     |
    | binding:host_id       | network                                                                             |
    | binding:profile       | {}                                                                                  |
    | binding:vif_details   | {"port_filter": true, "ovs_hybrid_plug": true}                                      |
    | binding:vif_type      | ovs                                                                                 |
    | binding:vnic_type     | normal                                                                              |
    | device_id             | dd6d5c92-e93a-4583-8a82-d45b728868c6                                                |
    | device_owner          | network:router_gateway                                                              |
    | extra_dhcp_opts       |                                                                                     |
    | fixed_ips             | {"subnet_id": "d09ccb0b-5e78-4f35-814e-d9298d9b9781", "ip_address": "172.16.11.10"} |
    | id                    | 82f675e4-3843-48b2-af99-da7d98139ae1                                                |
    | mac_address           | fa:16:3e:a4:3e:31                                                                   |
    | name                  |                                                                                     |
    | network_id            | abc33382-5f10-45a8-b9d6-1282f76dcedc                                                |
    | security_groups       |                                                                                     |
    | status                | DOWN                                                                                |
    | tenant_id             |                                                                                     |
    +-----------------------+-------------------------------------------------------------------------------------+
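A minimal first check on the network node (a sketch, assuming the standard Icehouse ML2 / Open vSwitch layout; nothing below is output from this system):

    # confirm br-ex exists, has the external-facing NIC attached as an OVS port,
    # and carries the qg- interface that corresponds to this gateway port
    ovs-vsctl show
    ovs-vsctl list-ports br-ex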

Comments

Please, format your text.

dbaxps ( 2014-04-25 01:32:48 -0500 )

I hit this too. If you use nova-network, you can create instances; I don't know why using neutron produces this error.

In neutron agent-list the alive status shows xxx. You can read https://bugzilla.redhat.com/show_bug.cgi?id=1066519

kernal ( 2014-04-25 04:28:23 -0500 )
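A quick way to check what kernal describes (a sketch; keystonerc_admin is an assumed credentials file, use whatever you normally source for admin access):

    source keystonerc_admin   # assumed admin credentials file
    neutron agent-list
    # the Open vSwitch, L3, DHCP and Metadata agents should all show :-) in
    # the "alive" column; xxx means the agent is not reporting in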

Please reproduce this report for demo's qdhcp-xxxxxxxx namespace, sourcing demo's credentials.

Can you run on your system
$ neutron net-list
$ neutron router-list
$ ip netns list
If "YES" then run
$ ip netns list | grep network_id ( for every net)
$ ip netns list | grep router_id ( for every router)
dbaxps ( 2014-04-26 04:22:31 -0500 )

If you get non-empty output for
$ ip netns list | grep router_id ( for any router)
then run:
$ ip netns exec qrouter-router-id iptables -S -t nat

dbaxps ( 2014-04-26 08:20:37 -0500 )
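For reference, a sketch of what to look for in that output (the rule shapes below are the generic ones the L3 agent programs for an associated floating IP, with placeholder addresses; they are not output from this system):

    # filter the NAT table for floating-IP translations
    ip netns exec qrouter-<router-id> iptables -S -t nat | grep -E 'DNAT|float-snat'
    # with a floating IP associated you would expect rules shaped like:
    #   -A neutron-l3-agent-PREROUTING -d <floating-ip>/32 -j DNAT --to-destination <fixed-ip>
    #   -A neutron-l3-agent-float-snat -s <fixed-ip>/32 -j SNAT --to-source <floating-ip>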

I was able to get the output for ip netns list

[root@network ~]# ip netns list
qdhcp-952c9658-5405-44ce-978f-f55b99c629f1
qdhcp-a39656e3-b5a7-448e-8ec4-0e66b3b3e4e9
qrouter-6dcf46c4-690c-44cd-b17e-85652ce0e6b7
qdhcp-30222486-4282-4051-bf1b-1f7ff0833c97
qrouter-c9e4e2fa-211c-4178-b571-7976967b2f8d

and here is the output of the routers

[root@network ~]# ip netns exec qrouter-c9e4e2fa-211c-4178-b571-7976967b2f8d iptables -S -t nat
-P PREROUTING ACCEPT
-P POSTROUTING ACCEPT
-P OUTPUT ACCEPT
-N neutron-l3-agent-OUTPUT
-N neutron-l3-agent-POSTROUTING
-N neutron-l3-agent-PREROUTING
-N neutron-l3-agent-float-snat
-N neutron-l3-agent-snat
-N neutron-postrouting-bottom
-A PREROUTING -j neutron-l3-agent-PREROUTING
-A POSTROUTING -j neutron-l3-agent-POSTROUTING
-A POSTROUTING -j neutron-postrouting-bottom
-A OUTPUT -j neutron-l3-agent-OUTPUT
-A neutron-l3-agent-POSTROUTING ! -i qg-8b4cf2c0-3b ! -o qg-8b4cf2c0-3b -m conntrack ! --ctstate DNAT -j ACCEPT
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697
-A neutron-l3-agent-snat -j neutron-l3-agent-float-snat
-A neutron-l3-agent-snat -s 192.168.1.0/24 -j SNAT --to-source 172.16.1.110
-A neutron-postrouting-bottom -j neutron-l3-agent-snat
[root@network ~]# ip netns exec qrouter-6dcf46c4-690c-44cd-b17e-85652ce0e6b7 iptables -S -t nat
-P PREROUTING ACCEPT
-P POSTROUTING ACCEPT
-P OUTPUT ACCEPT
-N neutron-l3-agent-OUTPUT ...
(more)
dro ( 2014-05-09 10:45:12 -0500 )

1 answer


answered 2014-05-12 02:18:00 -0500 by dbaxps

updated 2014-05-12 02:19:23 -0500

1. I believe your OVS bridge br-ex is not configured properly. It is supposed to have eth0 as an OVS port (not eth2). The ifconfig report should show br-ex with IP 172.16.1.211 and eth0 with no IP at all.

2. I also believe that the virtual public network has been built with a mistaken CIDR: 172.16.0.0/16 should be 172.16.1.0/16, with --gateway 172.16.1.XX (the IP of your real gateway on this subnet).

If you are getting my point you may work on your own; if not, please post everything that was requested above and I will try to get this done for you. I say so because you are able to ping the floating IP of your VM within the qrouter namespace, which means Neutron L2/L3 routing is in good shape. The problem seems to be in the OVS configuration, which should be fixed first before proceeding.
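A minimal sketch of that layout using RHEL network scripts (the addresses follow the answer above; the external NIC name and the netmask are assumptions you should adjust to your own host):

    # /etc/sysconfig/network-scripts/ifcfg-br-ex -- the bridge carries the external IP
    DEVICE=br-ex
    DEVICETYPE=ovs
    TYPE=OVSBridge
    BOOTPROTO=static
    IPADDR=172.16.1.211
    NETMASK=255.255.0.0
    ONBOOT=yes

    # /etc/sysconfig/network-scripts/ifcfg-eth0 -- the physical NIC, no IP, enslaved to br-ex
    DEVICE=eth0
    DEVICETYPE=ovs
    TYPE=OVSPort
    OVS_BRIDGE=br-ex
    ONBOOT=yes

After a service network restart, ifconfig should then show 172.16.1.211 on br-ex and no address on eth0.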

Comments

Thanks, let me look into it.

dro ( 2014-05-12 09:50:41 -0500 )

Please view my last comments. This blog is really good.

dbaxps ( 2014-05-12 09:58:42 -0500 )
  1. I believe your OVS bridge br-ex is not configured properly. It's supposed to have eth0 as an OVS port (not eth2). The ifconfig report should show br-ex with IP 172.16.1.211 and eth0 with no IP at all.
  • I'm following the instructions in the OpenStack setup guide for Icehouse. It says to add the bridge and use eth2:

    ovs-vsctl add-br br-int

    ovs-vsctl add-br br-ex

    ovs-vsctl add-port br-ex eth2

When I try to add it again, it says:

[root@network network-scripts]# ovs-vsctl add-port br-ex eth2
ovs-vsctl: cannot create a port named eth2 because a port named eth2 already exists on bridge br-ex
  2. When I added the public network 172.16.1.0/16, I used the CIDR 172.16.1.0/16, but it defaults to 172.16.0.0/16. I don't think it matters, since I have servers on different networks in 172.16.x.x and they ...
(more)
dro ( 2014-05-15 15:49:12 -0500 )
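The error above only means eth2 is already attached to br-ex. To see what the bridge currently holds, or to detach the port before re-adding it, something like this works (a sketch):

    ovs-vsctl show                  # full bridge/port layout
    ovs-vsctl list-ports br-ex      # ports currently attached to br-ex
    ovs-vsctl del-port br-ex eth2   # only if you need to remove and re-add the port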

So my network config is as follows:

internal: 10.10.10.0/24, external: 172.16.1.0/16

See my network lists:

+--------------------------------------+----------------+----------------------------------------------------+
| id                                   | name           | subnets                                            |
+--------------------------------------+----------------+----------------------------------------------------+
| 3cd0c43d-d35b-4a9e-bdec-e9487a5cd947 | network-public | 41e5d473-116a-49de-8c3f-6db6b99ac1a1 172.16.0.0/16 |
| ce922458-2c3b-47f3-a803-0e0bc5b8bcc6 | network-demo   | ee438124-dc63-4649-be48-514325d11c66 10.10.10.0/24 |
+--------------------------------------+----------------+----------------------------------------------------+
[root@network openstack_scripts]# neutron subnet-list
+--------------------------------------+---------------+---------------+------------------------------------------------+
| id                                   | name          | cidr          | allocation_pools                               |
+--------------------------------------+---------------+---------------+------------------------------------------------+
| 41e5d473-116a-49de-8c3f-6db6b99ac1a1 | subnet-public | 172.16.0.0/16 | {"start": "172.16.1.80", "end": "172.16.1.90"} |
| ee438124-dc63-4649-be48-514325d11c66 | subnet-demo   | 10.10.10.0/24 | {"start": "10.10.10.2", "end": "10.10.10.254"} |
+--------------------------------------+---------------+---------------+------------------------------------------------+
[root@network openstack_scripts]# neutron router-list
+--------------------------------------+-------------+-----------------------------------------------------------------------------+
| id                                   | name        | external_gateway_info                                                       |
+--------------------------------------+-------------+-----------------------------------------------------------------------------+
| 1cbefb18-e0ec-4e26-a9f7-4eb77ccd4830 | router-demo | {"network_id": "3cd0c43d-d35b-4a9e-bdec-e9487a5cd947", "enable_snat": true} |
+--------------------------------------+-------------+-----------------------------------------------------------------------------+

[root@network openstack_scripts]# neutron router-port-list router-demo
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                          |
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------------+
| 08310df4-1a95-402a-92bb-afdb71490620 |      | fa:16:3e:dc:00:31 | {"subnet_id": "41e5d473-116a-49de-8c3f-6db6b99ac1a1", "ip_address": "172.16.1.80"} |
| 7bec64b9-4ef5-4aab-9c0b-565c2b7e9d5b |      | fa:16:3e:8a:5a:a6 | {"subnet_id": "ee438124-dc63-4649-be48-514325d11c66", "ip_address": "10.10.10.1"}  |
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------------+

Port list:

[root@network openstack_scripts]# neutron port-list
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------------+
| id                                   | name | mac_address       | fixed_ips                                                                          |
+--------------------------------------+------+-------------------+------------------------------------------------------------------------------------+
| 08310df4-1a95-402a-92bb-afdb71490620 |      | fa:16 ...
(more)
dro ( 2014-05-15 16:49:25 -0500 )

You wrote:

    when I added the public network 172.16.1.0/16, I used the cidr 172.16.1.0/16, but it defaults to 172.16.0.0/16. I don't think it matters, since I have servers on different networks on 172.16.x.x and they can all talk with each other.

I think it does matter. Place your servers on one public network and point to the real gateway when you create it.
Assigning the IP with ifconfig on the network node won't work. The OVS bridge br-ex (with IP 172.16.1.37) should have eth0 as an OVS port (with no IP at all). Set it up by hand as shown here: http://bderzhavets.blogspot.com/2014/...

dbaxps ( 2014-05-16 00:42:40 -0500 )
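A sketch of recreating the external network the way dbaxps suggests, with an explicit gateway (the names and allocation pool follow this thread; the gateway address 172.16.1.1 is an assumption, use your real gateway on that subnet):

    # flag the network as external, then create its subnet with the real gateway
    neutron net-create network-public --router:external=True
    neutron subnet-create network-public --name subnet-public \
        --allocation-pool start=172.16.1.80,end=172.16.1.90 \
        --disable-dhcp --gateway 172.16.1.1 172.16.0.0/16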
