
cannot access VM from outside on its floating IP [closed]

asked 2016-06-03 00:14:44 -0500

laurentiusoica

updated 2016-06-05 01:03:53 -0500

Hello,

I have an OpenStack Liberty setup with TripleO: 1 controller node and 2 compute nodes. All nodes are virtual machines on the same physical machine. I've created 2 instances, each with one private IP and one floating IP.

I can only access the instances from the controller node, and only from within the router namespace. I can't ping or SSH into the instances from any other node. The instances can reach each other through both private and floating IPs.

Private network: 192.168.100.0/24
External network: 10.8.75.0/24

From controller node:

ip netns exec qrouter-0f8a41e1-a8b9-4f2f-bb49-9168cec658eb ping 192.168.100.4
PING 192.168.100.4 (192.168.100.4) 56(84) bytes of data.
64 bytes from 192.168.100.4: icmp_seq=1 ttl=64 time=3.65 ms

ip netns exec qrouter-0f8a41e1-a8b9-4f2f-bb49-9168cec658eb ping 10.8.75.243
PING 10.8.75.243 (10.8.75.243) 56(84) bytes of data.
64 bytes from 10.8.75.243: icmp_seq=1 ttl=64 time=3.46 ms

neutron subnet-list|grep external

    | 35d6eb89-51b3-4de8-bbda-cd22db0855e7 | external | 10.8.75.0/24     | {"start": "10.8.75.241", "end": "10.8.75.254"}       |

neutron subnet-show 35d6eb89-51b3-4de8-bbda-cd22db0855e7

+-------------------+------------------------------------------------+
| Field             | Value                                          |
+-------------------+------------------------------------------------+
| allocation_pools  | {"start": "10.8.75.241", "end": "10.8.75.254"} |
| cidr              | 10.8.75.0/24                                   |
| dns_nameservers   | 192.168.88.15                                  |
| enable_dhcp       | True                                           |
| gateway_ip        | 10.8.75.1                                      |
| host_routes       |                                                |

From the physical machine (10.8.75.52):

ping 10.8.75.243

    PING 10.8.75.243 (10.8.75.243) 56(84) bytes of data.
    From 10.8.75.52 icmp_seq=1 Destination Host Unreachable

Any troubleshooting tips are appreciated. Thanks.
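One way to narrow this down is to watch where the ICMP/ARP traffic stops, with tcpdump on the router's external leg and on the physical NIC (a sketch; qg-19c90a53-a9 is the router's gateway port, taken from the ovs-vsctl output below):

    # Inside the router namespace, watch the external (qg-) port
    ip netns exec qrouter-0f8a41e1-a8b9-4f2f-bb49-9168cec658eb \
        tcpdump -eni qg-19c90a53-a9 icmp or arp
    # In parallel, watch the physical side of br-ex on the controller
    tcpdump -eni eth0 icmp or arp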

Later edit: on the controller node

[root@overcloud-controller-0 nova]# ovs-vsctl show
83bba490-eb01-48f3-986c-f99281c8f5fc
    Bridge br-int
        fail_mode: secure
        Port "qr-a37855c8-f5"
            tag: 2
            Interface "qr-a37855c8-f5"
                type: internal
        Port "tapa4537e80-19"
            tag: 2
            Interface "tapa4537e80-19"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port "tap2fcd8b13-9c"
            tag: 1
            Interface "tap2fcd8b13-9c"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
    Bridge br-ex
        Port "eth0"
            Interface "eth0"
        Port "qg-19c90a53-a9"
            Interface "qg-19c90a53-a9"
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port "vxlan-c0000208"
            Interface "vxlan-c0000208"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="192.0.2.9", out_key=flow, remote_ip="192.0.2.8"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    ovs_version: "2.5.0"

[root@overcloud-controller-0 nova]# ifconfig
br-ex: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.0.2.9  netmask 255.255.255.0  broadcast 192.0.2.255
        inet6 fe80::293:7cff:fedb:87ea  prefixlen 64  scopeid 0x20<link>
        ether 00:93:7c:db:87:ea  txqueuelen 0  (Ethernet)
        RX packets 200452  bytes 726858237 (693.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 361851  bytes 517205962 (493.2 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet6 fe80 ...

Closed for the following reason: the question is answered, right answer was accepted by laurentiusoica
Close date: 2016-06-08 02:45:46.886848

Comments

Please:

1. Log into the controller and post `ovs-vsctl show` && `ifconfig`
2. Log into the compute node and post `ovs-vsctl show` && `ifconfig`

as update 1 to the question.

dbaxps ( 2016-06-03 03:02:39 -0500 )

Post as update 2:

3. `ifconfig` && `route -n` on the undercloud
4. `route -n` on the controller
dbaxps ( 2016-06-03 03:39:29 -0500 )

Updated the question with outputs.

laurentiusoica ( 2016-06-04 15:01:21 -0500 )

Your br-ex belongs to the 192.0.2.0/24 subnet, hence it should match a virtual external network with gateway 192.0.2.1. Managed this way, the overcloud VMs would be accessible only from the undercloud VM, with outbound connectivity. All this reminds me of the TripleO QuickStart setup.

dbaxps ( 2016-06-04 16:30:25 -0500 )

My question is: have you done the TripleO QuickStart setup or not?

dbaxps ( 2016-06-04 16:31:41 -0500 )

3 answers


answered 2016-06-06 08:35:24 -0500

laurentiusoica

updated 2016-06-07 02:12:23 -0500

Followed the instructions from https://access.redhat.com/solutions/648863.

Now the instances are accessible from outside the overcloud as well.
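In essence (a sketch reconstructed from the configs below rather than the literal article text; the network name external2 is illustrative), the fix adds a second provider bridge on the NIC that sits on 10.8.75.0/24 and a flat external network mapped to it:

    # New OVS bridge with the 10.8.75.0/24 NIC as a port
    ovs-vsctl add-br br-ext
    ovs-vsctl add-port br-ext ens7
    # Add physnet1:br-ext to bridge_mappings in openvswitch_agent.ini,
    # allow flat networks on physnet1 in the ML2 config, restart the
    # neutron agents, then create the flat external network:
    neutron net-create external2 --shared --router:external \
        --provider:network_type flat --provider:physical_network physnet1
    neutron subnet-create external2 10.8.75.0/24 --name external2-subnet \
        --gateway 10.8.75.1 \
        --allocation-pool start=10.8.75.241,end=10.8.75.254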

Later edit: requested config details:

[heat-admin@overcloud-novacompute-0 ~]$ sudo cat /etc/neutron/l3_agent.ini | grep -v ^# | grep -v ^$
[DEFAULT]
[AGENT]
[heat-admin@overcloud-novacompute-0 ~]$ sudo cat /etc/neutron/plugins/ml2/openvswitch_agent.ini  | grep -v ^# | grep -v ^$
[ovs]
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip =192.0.2.8
bridge_mappings =datacentre:br-ex,physnet1:br-ext
enable_tunneling=True
[agent]
polling_interval = 2
tunnel_types =vxlan
vxlan_udp_port =4789
l2_population = False
arp_responder = False
prevent_arp_spoofing = True
enable_distributed_routing = False
extensions =qos
drop_flows_on_start=False
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

[root@overcloud-controller-0 ~]# cat /etc/neutron/l3_agent.ini | grep -v ^# | grep -v ^$
[DEFAULT]
debug = False
interface_driver =neutron.agent.linux.interface.OVSInterfaceDriver
ovs_use_veth = False
handle_internal_only_routers = True
external_network_bridge =
metadata_port = 9697
send_arp_for_ha = 3
periodic_interval = 40
periodic_fuzzy_delay = 5
enable_metadata_proxy = True
router_delete_namespaces = True
agent_mode = legacy
[AGENT]
[root@overcloud-controller-0 ~]# cat /etc/neutron/plugins/ml2/openvswitch_agent.ini  | grep -v ^# | grep -v ^$
[ovs]
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip =192.0.2.9
bridge_mappings =datacentre:br-ex,physnet1:br-ext
enable_tunneling=True
[agent]
polling_interval = 2
tunnel_types =vxlan
vxlan_udp_port =4789
l2_population = False
arp_responder = False
prevent_arp_spoofing = True
enable_distributed_routing = False
extensions =qos
drop_flows_on_start=False
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[root@overcloud-controller-0 ~]# ovs-vsctl show
83bba490-eb01-48f3-986c-f99281c8f5fc
    Bridge br-ext
        Port phy-br-ext
            Interface phy-br-ext
                type: patch
                options: {peer=int-br-ext}
        Port br-ext
            Interface br-ext
                type: internal
        Port "ens7"
            Interface "ens7"
    Bridge br-int
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port "qg-7ab64067-19"
            tag: 3
            Interface "qg-7ab64067-19"
                type: internal
        Port int-br-ext
            Interface int-br-ext
                type: patch
                options: {peer=phy-br-ext}
        Port "qr-a463dbc5-dd"
            tag: 1
            Interface "qr-a463dbc5-dd"
                type: internal
        Port "tap611e1a4a-a9"
            tag: 1
            Interface "tap611e1a4a-a9"
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
    Bridge br-ex
        Port "eth0"
            Interface "eth0"
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        fail_mode: secure
        Port "vxlan-c0000208"
            Interface "vxlan-c0000208"
                type: vxlan
                options: {df_default="true", in_key=flow, local_ip="192.0.2.9", out_key=flow, remote_ip="192.0.2.8"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
    ovs_version: "2.5.0"
[root@overcloud-controller-0 ~]# ifconfig
br-ex: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.0.2.9  netmask 255.255.255.0  broadcast 192.0.2.255
        inet6 fe80::293:7cff:fedb:87ea  prefixlen 64  scopeid 0x20<link>
        ether 00:93:7c:db:87:ea  txqueuelen 0  (Ethernet)
        RX packets 502303  bytes 590484296 (563.1 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 857201  bytes 102417125 (97.6 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

br-ext: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.8.75.242  netmask 255.255.255.0  broadcast 10.8.75.255
        inet6 fe80::2c29:73ff:fe21:2941  prefixlen 64  scopeid 0x20<link>
        ether 2e:29:73:21:29:41  txqueuelen 0  (Ethernet)
        RX packets 70369  bytes 7646523 (7.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 102  bytes ...

Comments

Please add to your answer:

$ cat /etc/neutron/l3_agent.ini | grep -v ^# | grep -v ^$
$ cat /etc/neutron/plugins/ml2/openvswitch_agent.ini  | grep -v ^# | grep -v ^$
dbaxps ( 2016-06-06 13:10:58 -0500 )

Please add `ovs-vsctl show` and `ifconfig` on the overcloud controller.

dbaxps ( 2016-06-06 13:26:16 -0500 )

Done. Please note that the new bridge is br-ext, not br-ex (br-ex was already there).

laurentiusoica ( 2016-06-07 02:13:36 -0500 )

Just one note: in general, a bridge like br-ext supporting a flat (provider) network is not supposed to have an IP at all.

dbaxps ( 2016-06-07 05:58:59 -0500 )
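For illustration only, assuming the node itself does not need an address on that segment (do not run this if 10.8.75.242 is how you reach the node), dropping that address would look like:

    ip addr del 10.8.75.242/24 dev br-ext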

Strange: controller_0 is a VM booted on the all-in-one undercloud instance. Manually updating the libvirt XML profile of a cloud VM is not supposed to work since Juno, I believe, and it doesn't work for me at least: editing controller_0 via virsh as the stack user doesn't create eth1 on controller_0 for me.

dbaxps ( 2016-06-07 06:56:52 -0500 )

answered 2016-06-05 01:28:25 -0500

dbaxps

updated 2016-06-05 08:51:37 -0500

Per the reports provided:

1. Create an external virtual network 192.0.2.0/24 with gateway 192.0.2.1.
2. Once created, the overcloud VMs will obtain FIPs on 192.0.2.0/24. This will result in:

    Availability of the VMs from the undercloud
    Outbound connectivity for those VMs

Log into the controller and source the overcloudrc file (pick it up under /home/stack):

1. Delete the old external network, old private network, and old neutron router
2. Create the new external shared network as advised above
3. Create a new tenant and user (say user1 and ostenant; make user1 admin of ostenant)
4. Create keystonerc_user1, using overcloudrc as a sample
5. Source keystonerc_user1
6. Create a new router and private network, and so on as usual (see the sketch below)
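A minimal sketch of steps 1, 2, and 6 with Liberty-era CLI (the names ext-net, ext-subnet, router1, private, and the FIP allocation pool are illustrative):

    source /home/stack/overcloudrc
    # External shared network on 192.0.2.0/24, flat on the default
    # datacentre physnet (mapped to br-ex per the agent config above)
    neutron net-create ext-net --shared --router:external \
        --provider:network_type flat --provider:physical_network datacentre
    neutron subnet-create ext-net 192.0.2.0/24 --name ext-subnet \
        --disable-dhcp --gateway 192.0.2.1 \
        --allocation-pool start=192.0.2.50,end=192.0.2.100
    # Router plus a fresh private network behind it
    neutron router-create router1
    neutron router-gateway-set router1 ext-net
    neutron net-create private
    neutron subnet-create private 192.168.100.0/24 --name private-subnet
    neutron router-interface-add router1 private-subnet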

If you would like a Dashboard GUI setup, use FoxyProxy per https://simplenfv.wordpress.com/2016/... and http://docs.openstack.org/developer/t...

(Screenshot: TripleO QuickStart network layout; each overcloud node has two interfaces.)

In the snapshot above, every overcloud node has two interfaces. The first one you already have in your virtual environment; its colour is green. Consider the yellow one: at least in TripleO QuickStart, yellow is present only on the UNDERCLOUD and nowhere else. The reports posted as updates to your answer are quite the same as what TripleO QuickStart currently does. Meanwhile, TripleO QuickStart is, in my personal opinion based on a recent discussion on the RDO mailing list, a work in progress.

As of now I am using the approach suggested in https://www.linux.com/blog/set-vm-con..., with several VMs like RemoteConsole (slang from that posting) for several users, and the corresponding SSH keypairs each copied to the particular RemoteConsole(X) accessible by particular users.


Comments

Could you please detail a bit? Where and how should I create the external network? Please note that the VMs already have outbound connectivity: they can access the Internet and 10.8.75.52, which is the physical machine.

laurentiusoica ( 2016-06-05 01:53:02 -0500 )

So now I should have 3 networks defined in neutron:

    192.0.2.0/24 with GW 192.0.2.1, external shared
    192.168.100.0/24, private network
    10.8.75.0/24, external shared for floating IPs, in order to access the VMs from 10.8.75.0/24

Is that right? What gateway and what interface should I define for the router?

laurentiusoica ( 2016-06-05 04:20:01 -0500 )

I guess that you don't need "10.8.75.0/24 external shared for floating IPs in order to access the VMs from 10.8.75.0/24". By the design implemented, FIPs belong to 192.0.2.0/24. It appears that the external and VTEP subnets are the same. If I am missing something in your logic, please clarify.

dbaxps ( 2016-06-05 04:41:09 -0500 )

If your ifconfigs showed one more interface, eth1, it would make sense to create one more external network. You have just one eth0, and it is a port of br-ex (the OVS external bridge), which appears to sit on 192.0.2.0/24. How do you intend to attach a second external network to a VM with just one eth0 that is already an OVS port?

dbaxps ( 2016-06-05 04:48:35 -0500 )

192.0.2.0/24 is not visible from 10.8.75.0/24. I think it is the network created by neutron on the undercloud (named ctlplane) for the controller/compute nodes.

laurentiusoica ( 2016-06-05 04:53:20 -0500 )

answered 2016-06-04 23:55:03 -0500

jbbroccard

Sounds like the typical security group issue: have you checked that the default security group allows ICMP/SSH traffic?

See: http://docs.openstack.org/user-guide/...
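For example (Liberty-era CLI; assumes the group in question is the tenant's group named default):

    # Allow inbound ICMP (ping) and SSH in the default security group
    neutron security-group-rule-create default \
        --direction ingress --protocol icmp
    neutron security-group-rule-create default \
        --direction ingress --protocol tcp \
        --port-range-min 22 --port-range-max 22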


Comments

I did. Updated the question with the default sec group.

laurentiusoica ( 2016-06-05 00:20:12 -0500 )


Stats

Asked: 2016-06-03 00:14:44 -0500

Seen: 2,668 times

Last updated: Jun 07 '16