
(ML2/OVS/GRE) network issues of VM instances in a multi-node Icehouse environment

asked 2014-07-02 22:34:53 -0500 by sunzen

updated 2014-07-04 03:22:10 -0500

Hi,

The system configuration of my multi-node test environment is as follows:

node1: controller + network node + compute node (192.168.200.150, on CentOS 6.5) 
node2: compute node (192.168.200.187, on CentOS6.5) 
node3: compute node (192.168.200.232, On Ubuntu 14.04)

Node1 and node2 were deployed with RDO, and node3 was added by following the official installation guide. ML2/OVS/GRE was configured.
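As a first sanity check, the registered agents can be listed from the controller (a minimal sketch, assuming admin credentials are sourced):

  # Sketch: list all Neutron agents and confirm one Open vSwitch agent
  # per node shows up as alive (run on the controller).
  neutron agent-list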

The problem is as follows:

A VM instance can be launched on any compute node, and Horizon shows that both a private IP and a floating IP are assigned. However, when verifying connectivity with ping/ssh, only VM instances on node1 are accessible; VM instances on the other nodes cannot be reached over the network.

The console log partly shows the following:

udhcpc (v1.20.1) started 
Sending discover...
Sending discover... 
Sending discover... 
No lease, failing
WARN: /etc/rc3.d/S40-network failed
... 
=== network info === 
if-info: lo,up,127.0.0.1,8,::1
if-info: eth0,up,,8,fe80::f816:3eff:fe4e:2783

The virtual NIC did not actually get an IP address!
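To narrow this down, one can watch on the network node whether the VM's DHCP requests ever arrive at dnsmasq (a sketch; qdhcp-<net-id> is a placeholder for the actual namespace name of the private network):

  # Sketch: on the network node (node1), watch for DHCP traffic inside
  # the dnsmasq namespace. <net-id> is a placeholder for your network ID.
  ip netns                                             # find qdhcp-<net-id>
  sudo ip netns exec qdhcp-<net-id> tcpdump -ni any port 67 or port 68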

I checked openvswitch-agent.log and found the following notable entries:

  • @node1
ERROR neutron.plugins.openvswitch.agent.ovs_neutron_agent [req-c1780b5d-a920-4f1f-929a-50cf9456aa2e None] Failed to set-up gre tunnel port to 192.168.200.232
ERROR neutron.plugins.openvswitch.agent.ovs_neutron_agent [req-738dabfa-47fc-481b-8e8d-15e52b507f0b None] Failed to set-up gre tunnel port to 192.168.200.187
  • @node2
ERROR neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Failed to set-up gre tunnel port to 192.168.200.150
ERROR neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Failed to set-up gre tunnel port to 192.168.200.232
WARNING neutron.agent.linux.ovs_lib [-] Found failed openvswitch port: [u'int-br-eth0', [u'map', []], -1]
WARNING neutron.agent.linux.ovs_lib [-] Found failed openvswitch port: [u'int-br-eth0', [u'map', []], -1]
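These errors suggest OVS itself could not create the GRE ports. One possible cause on CentOS 6.5 is a kernel openvswitch module without GRE support; creating a tunnel port by hand shows whether the datapath accepts it (a sketch; the port name gre-test is hypothetical):

  # Sketch: check the bridges/ports OVS actually created.
  sudo ovs-vsctl show                        # look for br-tun and gre-* ports
  # Try creating a GRE port manually (hypothetical name "gre-test"):
  sudo ovs-vsctl add-port br-tun gre-test -- set interface gre-test \
      type=gre options:remote_ip=192.168.200.187
  sudo ovs-vsctl list interface gre-test     # a non-empty "error" field means
                                             # the kernel datapath rejects GRE
  sudo ovs-vsctl del-port br-tun gre-test    # clean up the test port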

I have just started to use GRE and am not familiar with its configuration yet.

My current configuration is as follows:

  • @node1: flat provider network and GRE tenant network enabled
[OVS] 
local_ip=192.168.200.150 
enable_tunneling=True 
integration_bridge=br-int 
#tunnel_bridge=br-tun
bridge_mappings=physnet1:br-eth0
  • @node2
[OVS] 
local_ip=192.168.200.187 
enable_tunneling=True 
tunnel_type = gre 
integration_bridge=br-int 
#tunnel_bridge=br-tun
#bridge_mappings=physnet1:br-eth0
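To double-check what the running agents actually picked up from these settings, the configuration each agent reports can be inspected (a sketch; <agent-id> is a placeholder taken from the agent-list output):

  # Sketch: inspect the tunnel settings each running OVS agent reports.
  neutron agent-list                  # note the ID of each Open vSwitch agent
  neutron agent-show <agent-id>       # "configurations" should list
                                      # tunnel_types and tunneling_ip per node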

Any idea?

If more information is needed, please let me know.

Your guidance is appreciated.

Latest progress

Following dbaxps's direction, I modified my plugin.ini as follows:

 [ml2]
 type_drivers = flat,gre
 tenant_network_types = gre
 mechanism_drivers = openvswitch
 [ml2_type_flat]
 flat_networks = *
 [ml2_type_vlan]
 [ml2_type_gre]
 tunnel_id_ranges = 10:1000
 [ml2_type_vxlan]
 [ovs]
 local_ip = 192.168.200.187
 [agent]
 tunnel_types = gre
 root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
 integration_bridge = br-int
 [securitygroup]
 enable_security_group = True
 firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
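(For a change like this to take effect, the OVS agent also needs a restart; a minimal sketch, assuming the RDO SysV service name on CentOS 6.5 and the stock name on Ubuntu 14.04:)

  # Sketch: restart the OVS agent after editing plugin.ini.
  sudo service neutron-openvswitch-agent restart          # CentOS 6.5 / RDO
  # sudo service neutron-plugin-openvswitch-agent restart # Ubuntu 14.04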

It still doesn't work. The following log appears in openvswitch-agent.log (on both node1 and node2):

2014-07-04 15:35:02.172 25684 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ovs-ofctl', 'dump-flows', 'br-int', 'table=22'] create_process /usr/lib/python2.6/site-packages/neutron/agent/linux/utils.py:48
2014-07-04 15:35:02.264 25684 DEBUG neutron.agent.linux.utils [-] 
Command: ['sudo', '/usr/bin/neutron-rootwrap', '/etc ...
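The truncated command above can be re-run by hand to see the same flows the agent is querying (a sketch; table 22 is the flood-to-tunnel table, which normally lives on br-tun in the Icehouse OVS agent):

  # Sketch: dump the flows the agent was querying, plus the tunnel bridge.
  sudo ovs-ofctl dump-flows br-int table=22
  sudo ovs-ofctl dump-flows br-tun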

Comments

Your plugin.ini/ml2_conf.ini files are incorrect. Please see RDO Setup Two Real Node (Controller+Compute) IceHouse Neutron ML2&OVS&GRE Cluster on Fedora 20 for the installation and tuning procedure on F20, which can easily be adapted for CentOS 6.5.

dbaxps ( 2014-07-02 23:17:43 -0500 )

Looks like your openvswitch is not installed properly.

darragh-oreilly ( 2014-07-03 15:20:08 -0500 )

Thank you for your attention and help. Following dbaxps's direction, I modified plugin.ini so that it is almost the same as the example, but it still doesn't work. I'm thinking about how to post the configuration information and openvswitch-agent.log on the web.

sunzen ( 2014-07-04 02:33:18 -0500 )

Please post the output of ls -l /etc/neutron.

dbaxps ( 2014-07-04 02:38:56 -0500 )

Hi dbaxps, I posted the contents in the original post.

sunzen ( 2014-07-04 03:24:02 -0500 )

1 answer


answered 2014-07-02 23:25:38 -0500 by dbaxps

updated 2014-07-02 23:27:36 -0500

Your plugin.ini/ml2_conf.ini files are incorrect. Please see RDO Setup Two Real Node (Controller+Compute) IceHouse Neutron ML2&OVS&GRE Cluster on Fedora 20 for the installation and tuning procedure on F20, which can easily be adapted for CentOS 6.5.

[root@icehouse1 neutron(keystone_admin)]# cat plugin.ini
[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[ml2_type_vxlan]
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
[ovs]
local_ip = 192.168.1.127
[agent]
tunnel_types = gre
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
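After applying this plugin.ini on each node (with that node's own local_ip) and restarting the Neutron services, the GRE mesh can be verified (a sketch; <peer-local-ip> is a placeholder for another node's local_ip):

  # Sketch: each node's br-tun should carry one gre-* port per peer node,
  # and the tunnel endpoints must reach each other over the management net.
  sudo ovs-vsctl show
  ping -c 3 <peer-local-ip>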