
Adding an interface to router fails after migration Havana -> IceHouse

asked 2015-06-17 03:49:07 -0500 by alvise (updated 2015-06-17 03:50:28 -0500)

Hi, after a migration from Havana to IceHouse (with the controller and the network services/agents on the same physical node, using OVS/GRE) we started facing some network-related problems: the internal tag of the ports shown by "ovs-vsctl show" was set to 4095, which is wrong AFAIK. At the beginning the problems could be solved by just restarting the OpenvSwitch-related agents (and openvswitch itself), or by changing the tag by hand; but now the networking has definitely stopped working.
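For anyone who wants to spot these stuck ports quickly: a small sketch (assuming the usual textual layout of "ovs-vsctl show" output; the sample snippet below is fabricated for illustration) that lists ports left on the dead VLAN tag 4095:

```python
import re

DEAD_VLAN = 4095  # the OVS agent parks a port on this "dead" VLAN when binding fails


def dead_vlan_ports(ovs_output):
    """Return the names of ports tagged 4095 in `ovs-vsctl show` output."""
    ports, current = [], None
    for line in ovs_output.splitlines():
        line = line.strip()
        m = re.match(r'Port "?([^"]+)"?$', line)
        if m:
            current = m.group(1)          # remember the port this block describes
        elif line == "tag: %d" % DEAD_VLAN and current:
            ports.append(current)         # its tag is the dead VLAN
    return ports


# Fabricated example snippet of `ovs-vsctl show` output:
sample = """\
    Bridge br-int
        Port "qr-ba295e45-9a"
            tag: 4095
            Interface "qr-ba295e45-9a"
        Port "tap-ok"
            tag: 1
"""
print(dead_vlan_ports(sample))  # ['qr-ba295e45-9a']
```

In practice you would feed it the real output, e.g. `dead_vlan_ports(subprocess.check_output(["ovs-vsctl", "show"]).decode())`.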

When we add a new router interface connected to a tenant LAN, the related port is created in the "DOWN" state. Then, in openvswitch-agent.log, we see this error message:

2015-06-16 15:07:43.275 40708 WARNING neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Device ba295e45-9a73-48c1-8864-a59edd5855dc not defined on plugin

and the related record in the neutron database (table ml2_port_bindings) is like:

mysql> select * from ml2_port_bindings where port_id='fced99c2-007c-40c9-a3d6-ca8d387893c3';
+--------------------------------------+-------------------+----------------+--------+---------+-----------+-------------+---------+
| port_id                              | host              | vif_type       | driver | segment | vnic_type | vif_details | profile |
+--------------------------------------+-------------------+----------------+--------+---------+-----------+-------------+---------+
| fced99c2-007c-40c9-a3d6-ca8d387893c3 | first-cld-ctrl-02 | binding_failed | NULL   | NULL    | normal    |             | {}      |
+--------------------------------------+-------------------+----------------+--------+---------+-----------+-------------+---------+
1 row in set (0.00 sec)

The relevant config files are:

/etc/neutron/plugins/ml2/ml2_conf.ini :

type_drivers = gre 
tenant_network_types = gre 
mechanism_drivers = openvswitch 
tunnel_id_ranges = 1:1000 
firewall_driver = 
enable_security_group = True 
local_ip = 
tunnel_type = gre 
enable_tunneling = True
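One thing worth double-checking after a Havana -> IceHouse migration: in the stock IceHouse layout these options are spread across specific sections of ml2_conf.ini, and a flattened or mis-sectioned file can silently leave the agent unable to bind ports. A sketch of how the same settings are usually laid out (section names as in the IceHouse install guides; redacted values from the post are left empty):

```ini
[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch

[ml2_type_gre]
tunnel_id_ranges = 1:1000

[securitygroup]
firewall_driver =
enable_security_group = True

[ovs]
local_ip =
enable_tunneling = True

[agent]
# note: the IceHouse guides spell this option "tunnel_types" (plural)
tunnel_types = gre
```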

/etc/neutron/neutron.conf :

nova_ca_certificates_file = /etc/grid-security/certificates/INFN-CA-2006.pem 
auth_strategy = keystone 
rpc_backend = neutron.openstack.common.rpc.impl_kombu 
rabbit_hosts =, 
notify_nova_on_port_status_changes = True 
notify_nova_on_port_data_changes = True 
nova_url = 
nova_admin_username = nova 
nova_admin_tenant_id = 1b2caeedb3e2497b935723dc6e142ec9 
nova_admin_password = XXXXXXXXX 
nova_admin_auth_url = 
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin 
service_plugins = 
verbose = True 
debug = False 
rabbit_ha_queues = True 
dhcp_agents_per_network = 2 
auth_uri = 
auth_url = 
auth_host = 
auth_protocol = https 
auth_port = 35357 
admin_tenant_name = services 
admin_user = neutron 
admin_password = XXXXXXXXX 
cafile = /etc/grid-security/certificates/INFN-CA-2006.pem 
connection = mysql://neutron_prod:XXXXXXXXXX@192/neutron_prod 

And here is the output of "ovs-vsctl show".

Any suggestions?




More details: we have two external networks and two routers ("router-lan", whose gateway is set to one of the external networks, and "router-wan", connected to the other). Each router is managed by a different L3 agent, whose conf files are:

alvise (2015-06-17 08:39:04 -0500)

We have also, unsuccessfully, deleted and re-created:
- the routers
- the external networks
- br-ex (the one related to the first external network)
- br-ex2 (the one related to the second external network)
- br-int

My "feeling" is that something is permanently "corrupted" in one of Neutron's tables...

alvise (2015-06-17 08:42:02 -0500)

1 answer


answered 2015-06-17 23:43:02 -0500 by sgaravat (updated 2015-06-17 23:45:55 -0500)

It looks like the problem was the two L3 agents, and in particular the use of the "host" attribute in the two L3 agent config files. With Havana and the OVS plugin this worked without problems. With IceHouse and ML2 (I guess it is ML2), there are apparently problems if the host set for the L3 agent differs from the hostname of the machine hosting it.
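For anyone hitting the same symptom: a minimal sketch of the setting in question (file path assumed to be the usual /etc/neutron/l3_agent.ini; the hostname below is the one from the binding record above, used purely as an example):

```ini
# /etc/neutron/l3_agent.ini (assumed path)
[DEFAULT]
# If "host" is overridden here, ML2 binds the router's ports against that
# name. If it does not match the hostname the OVS agent on the same
# machine reports, the binding ends up as "binding_failed" and the port
# stays DOWN. Safest is to leave it unset so it defaults to the
# machine's hostname:
# host = first-cld-ctrl-02
```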





Asked: 2015-06-17 03:49:07 -0500; Seen: 213 times; Last updated: Jun 17 '15