
Icehouse Instance cannot ping Compute node

asked 2014-06-01 21:27:13 -0500 by ed

updated 2014-06-01 21:32:45 -0500

I'm running Icehouse on 3 nodes (Controller, Compute, and Network/Neutron), each on a separate Ubuntu 14.04 server, with 1 instance running CirrOS.

From the controller and network nodes I can ping the external router address and the instance's floating IP. From the instance I can ping both the controller and the network node mgmt IPs as well as the router. However, from the compute node I cannot ping the router or the floating IP, and from the instance I cannot ping the compute node's mgmt IP.

I can SSH to the instance from another computer on my network; however, the instance itself cannot resolve any DNS names or access the internet:

$ ssh cirros@192.168.1.201
cirros@192.168.1.201's password: 
$ nslookup www.google.com
Server:    172.21.21.2
Address 1: 172.21.21.2 host-172-21-21-2.openstacklocal

nslookup: can't resolve 'www.google.com'
$ ping -c 4 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
^C
--- 8.8.8.8 ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss
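
In case it helps with diagnosis, this is the kind of check I plan to run next on the network node, to see whether the instance's pings ever reach the external side at all (a sketch only; the bridge name br-ex and the router ID are placeholders, not taken from my actual setup):

# Watch the external bridge for the instance's ICMP to 8.8.8.8
# (br-ex is a placeholder; use whatever your external bridge is called)
tcpdump -n -i br-ex icmp

# Check that the Neutron router namespace is doing SNAT for the private subnet
# (the router ID below is a placeholder)
ip netns list
ip netns exec qrouter-<router-id> iptables -t nat -S | grep SNAT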

I also get PORT/LINK DOWN on br-int on the compute node:

root@stack-compute:~# ovs-ofctl show br-int
OFPT_FEATURES_REPLY (xid=0x2): dpid:00006677f5503e4a
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE
 1(patch-tun): addr:5a:79:46:d9:cd:5f
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 2(qvofaa9d401-aa): addr:5a:aa:ef:d4:84:6b
     config:     0
     state:      0
     current:    10GB-FD COPPER
     speed: 10000 Mbps now, 0 Mbps max
 LOCAL(br-int): addr:56:24:3f:87:6d:27
     config:     PORT_DOWN
     state:      LINK_DOWN
     speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
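
For what it's worth, something less disruptive than a full Open vSwitch restart would be forcing just that LOCAL port up to see whether it makes a difference (a sketch, assuming standard iproute2 and ovs-ofctl tooling):

# Bring the br-int internal port up without restarting openvswitch-switch
ip link set dev br-int up
# or, equivalently, via an OpenFlow port-mod
ovs-ofctl mod-port br-int br-int up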

When I run the following on the compute node:

service openvswitch-switch restart

The link comes UP; however, the instance then loses all connectivity to anything on the network, including the servers it could reach before.
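
My understanding is that after openvswitch-switch restarts its flow tables are gone, so the Neutron OVS agent has to be restarted too so it can re-program the flows on br-int and br-tun. A sketch of what I would try, assuming the stock Ubuntu 14.04 / Icehouse service names:

# On the compute node: restart OVS, then the agent that rebuilds its flows
service openvswitch-switch restart
service neutron-plugin-openvswitch-agent restart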

This is /var/log/neutron/openvswitch-agent.log on the compute node:

2014-06-02 11:49:43.113 1938 INFO neutron.plugins.openvswitch.agent.ovs_neutron_agent [req-fbc17718-4980-4cd6-8a3f-a7570aa769ba None] Agent initialized successfully, now running...
2014-06-02 11:49:43.118 1938 INFO neutron.plugins.openvswitch.agent.ovs_neutron_agent [req-fbc17718-4980-4cd6-8a3f-a7570aa769ba None] Agent out of sync with plugin!
2014-06-02 11:49:43.120 1938 INFO neutron.plugins.openvswitch.agent.ovs_neutron_agent [req-fbc17718-4980-4cd6-8a3f-a7570aa769ba None] Agent tunnel out of sync with plugin!

Everything looks normal in /var/log/openvswitch/ovs-vswitchd.log:

2014-06-02T01:49:39.320Z|00014|bridge|INFO|ovs-vswitchd (Open vSwitch) 2.0.1
2014-06-02T01:49:42.410Z|00015|bridge|INFO|bridge br-tun: added interface br-tun on port 65534
2014-06-02T01:49:42.410Z|00016|bridge|INFO|bridge br-tun: using datapath ID 0000fee6495d2340
2014-06-02T01:49:42.410Z|00017|connmgr|INFO|br-tun: added service controller "punix:/var/run/openvswitch/br-tun.mgmt"
2014-06-02T01:49:42.463Z|00018|bridge|INFO|bridge br-int: added interface patch-tun on port 1
2014-06-02T01:49:42.555Z|00019|bridge|INFO|bridge br-tun: added interface patch-int on ...

Comments

On a working Neutron OVS&GRE system I get the following in /var/log/neutron/openvswitch-agent.log on the compute node:

2014-06-02 06:55:11.066 1209 INFO neutron.agent.securitygroups_rpc [req-e3c6233b-430f-4e0e-a42e-cb5d27a65834 None] Preparing filters for devices set([u'87038189-3f9c-4304-8eda-3762347a108c'])
2014-06-02 06:55:12.179 1209 INFO neutron.plugins.openvswitch.agent.ovs_neutron_agent [req-e3c6233b-430f-4e0e-a42e-cb5d27a65834 None] Port 87038189-3f9c-4304-8eda-3762347a108c updated. Details: {u'admin_state_up': True, u'network_id': u'3854bc88-ae14-47b0-9787-233e54ffe7e5', u'segmentation_id': 2, u'physical_network': None, u'device': u'87038189-3f9c-4304-8eda-3762347a108c', u'port_id': u'87038189-3f9c-4304-8eda-3762347a108c', u'network_type': u'gre'}
2014-06-02 06:55:12.181 1209 INFO neutron.plugins.openvswitch.agent.ovs_neutron_agent [req-e3c6233b-430f-4e0e-a42e-cb5d27a65834 None] Assigning 1 as local vlan for net-id=3854bc88-ae14-47b0-9787-233e54ffe7e5
2014-06-02 06:55:12.764 1209 INFO neutron.plugins.openvswitch.agent.ovs_neutron_agent [req-e3c6233b-430f-4e0e-a42e-cb5d27a65834 None] Configuration for device 87038189-3f9c-4304-8eda-3762347a108c completed.
2014-06-02 06:55:12.765 1209 INFO neutron.plugins.openvswitch.agent.ovs_neutron_agent [req-e3c6233b-430f-4e0e-a42e-cb5d27a65834 None] Agent tunnel out of sync with plugin!

dbaxps ( 2014-06-01 22:31:47 -0500 )

But,

[root@ip-192-169-142-137 ~]# ovs-ofctl show br-int
OFPT_FEATURES_REPLY (xid=0x2): dpid:0000da19c5b9154f
n_tables:254, n_buffers:256
capabilities: FLOW_STATS TABLE_STATS PORT_STATS QUEUE_STATS ARP_MATCH_IP
actions: OUTPUT SET_VLAN_VID SET_VLAN_PCP STRIP_VLAN SET_DL_SRC SET_DL_DST SET_NW_SRC SET_NW_DST SET_NW_TOS SET_TP_SRC SET_TP_DST ENQUEUE
 1(patch-tun): addr:be:f5:65:5f:b7:ef
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
 3(qvo87038189-3f): addr:4e:a4:66:75:3f:1c
     config:     0
     state:      0
     current:    10GB-FD COPPER
     speed: 10000 Mbps now, 0 Mbps max
 LOCAL(br-int): addr:da:19:c5:b9:15:4f
     config:     0
     state:      0
     speed: 0 Mbps now, 0 Mbps max
OFPT_GET_CONFIG_REPLY (xid=0x4): frags=normal miss_send_len=0
dbaxps ( 2014-06-01 22:34:53 -0500 )

Please post `ovs-vsctl show` and `brctl show` from the Compute Node (with the VM running).

dbaxps ( 2014-06-01 22:39:30 -0500 )

There you go:

root@stack-compute:~# ovs-vsctl show
90da1e30-993e-42fe-bdd2-d111647d22a9
    Bridge br-tun
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
        Port "gre-0a010116"
            Interface "gre-0a010116"
                type: gre
                options: {in_key=flow, local_ip="10.1.1.23", out_key=flow, remote_ip="10.1.1.22"}
    Bridge br-int
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
        Port "qvofaa9d401-aa"
            tag: 1
            Interface "qvofaa9d401-aa"
    ovs_version: "2.0.1"


root@stack-compute:~# brctl show
bridge name     bridge id               STP enabled     interfaces
qbrfaa9d401-aa          8000.3ad8997308e1       no              qvbfaa9d401-aa
                                                        tapfaa9d401-aa
virbr0          8000.000000000000       yes
ed ( 2014-06-01 22:49:50 -0500 )

Looks OK. Can the VM instance receive an IP from DHCP and ping the internal gateway?
$ ip netns exec qdhcp-your-private-net-id ifconfig
will report the tap interface
$ ip netns exec qdhcp-your-private-net-id tcpdump -ln -i tap-interface
$ ip netns exec qdhcp-your-private-net-id ping -c 5 Private-vm-IP

dbaxps ( 2014-06-01 23:25:14 -0500 )

2 answers


answered 2014-06-02 00:07:58 -0500 by ed

updated 2014-06-11 20:14:17 -0500

I just managed to fix it. The problem was that I was using the same subnet for mgmt and for the br-ext external network (192.168.x.x), with a separate subnet only for br-tun tunneling (10.x.x.x).

It seems there is no way around the requirement for 3 different subnets. I had to add an additional VM to act as a switch between br-ext on the network node and my actual router, which is also on my mgmt subnet. I created a separate subnet between them and set it up on br-ext. After that the instances could access the internet.
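
For anyone hitting the same thing, this is roughly the NIC layout the three-subnet scheme implies on the network node (a sketch only; the interface names and addresses below are illustrative, not copied from my real config):

# /etc/network/interfaces fragment (Ubuntu 14.04), illustrative values only
# management network
auto eth0
iface eth0 inet static
    address 192.168.1.21
    netmask 255.255.255.0

# tunnel (GRE) network used by br-tun
auto eth1
iface eth1 inet static
    address 10.1.1.21
    netmask 255.255.255.0

# external network: no IP on the NIC itself, it gets plugged into br-ext
auto eth2
iface eth2 inet manual
    up ip link set dev $IFACE up
    up ip link set dev $IFACE promisc on
    down ip link set dev $IFACE down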


Comments

Hi ed, I am also facing the same problem. For now, only the VMs are able to ping each other. Instances cannot ping the controller, compute, and network nodes, and vice versa. I can ping the instances only through namespaces, and they are not able to access the internet. Please suggest!

smcas ( 2015-10-23 01:32:19 -0500 )

answered 2014-06-01 21:46:42 -0500 by dbaxps

updated 2014-06-02 02:49:23 -0500

You wrote:

$ ssh cirros@192.168.1.201
cirros@192.168.1.201's password: 
$ nslookup www.google.com
Server:    172.21.21.2
Address 1: 172.21.21.2 host-172-21-21-2.openstacklocal

nslookup: can't resolve 'www.google.com'

Update your private network's subnet with the DNS value of your real ISP (a CLI sketch follows below). If that doesn't help:

1. Drop the VM.
2. Recreate the interface on the router to your private network (already updated); it should come up in ACTIVE state.
3. Launch the VM again and make sure `nslookup` works now.
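
A minimal CLI sketch of the steps above, assuming the Icehouse-era neutron client; the subnet and router names are placeholders, and the DNS servers shown are just an example (use your ISP's):

# 1. Put real DNS servers on the private subnet
neutron subnet-update private-subnet --dns_nameservers list=true 8.8.8.8 8.8.4.4

# 2. Remove and re-add the router interface on that subnet
neutron router-interface-delete router1 private-subnet
neutron router-interface-add router1 private-subnet

# 3. Boot the VM again and verify DNS from inside it
nslookup www.google.com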

Comments

Now I get:

$ nslookup www.google.com
Server:    203.11.159.30
Address 1: 203.11.159.30

nslookup: can't resolve 'www.google.com'

BTW on my dashboard I get:

Interfaces

Name          Fixed IPs        Status    Type                 Admin State
(2264a202)    172.21.21.1      ACTIVE    Internal Interface   UP
(7911b5f0)    192.168.1.200    DOWN      External Gateway     UP

And br-int still shows as PORT/LINK DOWN on the compute node.

ed ( 2014-06-01 22:19:15 -0500 )
