
openvswitch lost patch-tun port and neutron-openvswitch-agent starts with error: Error while processing VIF ports

asked 2014-06-15 22:40:04 -0500

winger127

Short description of problem: The bridge br-int has no patch-tun port. I add the port manually, but it is lost again after I restart the neutron-openvswitch-agent.

Details on the environment: 1 controller node (with the network node), 1 compute node. OpenStack is configured to use Neutron/GRE as the network service. The system is CentOS 6.5 with Icehouse, deployed by RDO.

Details of the problem: First, I noticed the instance did not get an IP address via DHCP, so I checked Open vSwitch and found no port named patch-tun under br-int. I added it manually, but after I restart the neutron-openvswitch-agent the port is lost again. Checking openvswitch-agent.log, I noticed this TRACE:

    2014-06-16 10:50:52.307 745 INFO neutron.plugins.openvswitch.agent.ovs_neutron_agent [req-c9ee9fcb-67dc-4f7f-87fd-85280b66ce72 None] Port 702cfcec-b389-4077-9816-abd378028b1b updated. Details: {u'admin_state_up': True, u'network_id': u'0284f819-0e40-426b-b912-d33057d80dba', u'segmentation_id': 2, u'physical_network': None, u'device': u'702cfcec-b389-4077-9816-abd378028b1b', u'port_id': u'702cfcec-b389-4077-9816-abd378028b1b', u'network_type': u'gre'}

    2014-06-16 10:50:52.370 745 ERROR neutron.plugins.openvswitch.agent.ovs_neutron_agent [req-c9ee9fcb-67dc-4f7f-87fd-85280b66ce72 None] Error while processing VIF ports
    2014-06-16 10:50:52.370 745 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent Traceback (most recent call last):
    2014-06-16 10:50:52.370 745 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.6/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py", line 1226, in rpc_loop
    2014-06-16 10:50:52.370 745 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     sync = self.process_network_ports(port_info)
    2014-06-16 10:50:52.370 745 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.6/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py", line 1080, in process_network_ports
    2014-06-16 10:50:52.370 745 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     devices_added_updated)
    2014-06-16 10:50:52.370 745 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.6/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py", line 985, in treat_devices_added_or_updated
    2014-06-16 10:50:52.370 745 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     self.context, device, self.agent_id, cfg.CONF.host)
    2014-06-16 10:50:52.370 745 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.6/site-packages/neutron/agent/rpc.py", line 107, in update_device_up
    2014-06-16 10:50:52.370 745 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     topic=self.topic)
    2014-06-16 10:50:52.370 745 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.6/site-packages/neutron/openstack/common/rpc/proxy.py", line 125, in call
    2014-06-16 10:50:52.370 745 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     result = rpc.call(context, real_topic, msg, timeout)
    2014-06-16 10:50:52.370 745 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.6/site-packages/neutron/openstack/common/rpc/__init__.py", line 112, in call
    2014-06-16 10:50:52.370 745 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     return _get_impl().call(CONF, context, topic, msg, timeout)
    2014-06-16 10:50:52.370 745 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.6/site-packages/neutron/openstack/common/rpc/impl_kombu.py", line 818, in call
    2014-06-16 10:50:52.370 745 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     rpc_amqp.get_connection_pool(conf ...
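
For reference, the check and the manual re-add were roughly as follows. This is a minimal sketch using standard ovs-vsctl commands; the patch-int peer on br-tun follows the usual OVS agent naming convention and is an assumption here, not something confirmed on this box:

    # list the ports currently attached to the integration bridge
    ovs-vsctl list-ports br-int
    # re-create the patch pair by hand, assuming the usual patch-tun/patch-int peering
    ovs-vsctl add-port br-int patch-tun -- set Interface patch-tun type=patch options:peer=patch-int
    ovs-vsctl add-port br-tun patch-int -- set Interface patch-int type=patch options:peer=patch-tun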

1 answer


answered 2014-06-15 23:00:09 -0500

dbaxps

You can follow this Howto, written for F20, replacing p37p1 with eth0 and p4p1 with eth1, and obviously using your own IPs: RDO Setup Two Real Node (Controller+Compute) IceHouse Neutron ML2&OVS&GRE Cluster on Fedora 20

[root@icehouse1 neutron(keystone_admin)]# ls -l
total 88
-rw-r--r--. 1 root root      193 Jun 13 05:19 api-paste.ini
-rw-r-----. 1 root neutron  3902 Jun 13 08:31 dhcp_agent.ini
-rw-r--r--. 1 root root       73 Jun 13 08:32 dnsmasq.conf
-rw-r-----. 1 root neutron   109 Apr 17 15:50 fwaas_driver.ini
-rw-r-----. 1 root neutron  3431 Jun 13 05:19 l3_agent.ini
-rw-r-----. 1 root neutron  1400 Apr 17 15:50 lbaas_agent.ini
-rw-r-----. 1 root neutron  1481 Jun 13 07:52 metadata_agent.ini
-rw-r--r--. 1 root root      324 Jun 13 05:36 metadata_agent.out
lrwxrwxrwx. 1 root root       37 Jun 13 06:42 ml2_conf.ini -> /etc/neutron/plugins/ml2/ml2_conf.ini
-rw-r-----. 1 root neutron 19089 Jun 13 06:03 neutron.conf
lrwxrwxrwx. 1 root root       55 Jun 13 08:05 plugin.ini -> /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
drwxr-xr-x. 4 root root     4096 Jun 13 05:19 plugins
-rw-r-----. 1 root neutron  6148 Apr 17 15:50 policy.json
-rw-r--r--. 1 root root       80 May 19 19:53 release
-rw-r--r--. 1 root root     1216 Apr 17 15:50 rootwrap.conf
[root@icehouse1 neutron(keystone_admin)]# cat plugin.ini
[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[ml2_type_vxlan]
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
[ovs]
local_ip = 192.168.1.127
# local_ip = 192.168.0.127
[agent]
tunnel_types = gre
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
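
Once plugin.ini carries the right settings, the agent just needs a restart and it will re-create the patch ports itself. A minimal sketch for CentOS 6.5; the symlink target below is an assumption, point it at whichever file actually holds your ML2/OVS settings:

    # point plugin.ini at the config the agent should read (target path is an assumption)
    ln -sf /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
    # restart the agent and confirm it rebuilt the patch port on br-int
    service neutron-openvswitch-agent restart
    ovs-vsctl list-ports br-int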

Comments

A core approach to this issue comes from http://kashyapc.fedorapeople.org/virt...
However, in my case it was done via packstack with the answer file attached, and plugin.ini was added manually, exactly the same as the ml2_conf.ini generated by packstack.
The system has already been up and running for 4 days. I didn't manually add any OVS bridges; plugin.ini was added and neutron-openvswitch-agent was restarted.

dbaxps ( 2014-06-15 23:06:39 -0500 )

Why is your local_ip 192.168.1.127? Does that IP belong to one of the Ethernet interfaces on the neutron node?

winger127 ( 2014-06-16 04:19:48 -0500 )

It's the IP address of p37p1 on my Controller+NeutronServer node.
You are supposed to put your own IP of eth0 on the controller node (with the network node).
If you split the controller and the NeutronServer, then it's the IP of eth0 on the NeutronServer.

dbaxps ( 2014-06-16 04:27:14 -0500 )
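
A quick way to confirm that the local_ip configured under [ovs] is actually bound to an interface on that node (the address below is just the example value from the plugin.ini above):

    # show which interface, if any, carries the GRE endpoint address
    ip addr | grep 192.168.1.127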

