
Why do br-int & br-tun interfaces disappear after reboot? [closed]

asked 2015-01-12 09:29:48 -0500 by mickt

updated 2015-01-15 04:12:08 -0500

I've installed Juno on CentOS 7 and I've just noticed that the br-int & br-tun interfaces are gone after a reboot. Has anyone encountered this?
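For reference, this is roughly how I'm checking the services and bridges after a reboot (a rough sketch; adjust for your own nodes):

systemctl is-enabled openvswitch neutron-openvswitch-agent   # both should report "enabled"
systemctl status neutron-openvswitch-agent                   # agent shows as running
ovs-vsctl list-br                                            # expect br-int, br-tun (and br-ex on the network node)
ovs-vsctl show                                               # full bridge/port detail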

controller openvswitch-agent.log:

2015-01-13 14:49:54.611 4498 ERROR neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Failed reporting state!
2015-01-13 14:49:54.611 4498 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent Traceback (most recent call last):
2015-01-13 14:49:54.611 4498 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py", line 259, in _report_state
2015-01-13 14:49:54.611 4498 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     self.use_call)
2015-01-13 14:49:54.611 4498 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/neutron/agent/rpc.py", line 70, in report_state
2015-01-13 14:49:54.611 4498 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     return self.call(context, msg)
2015-01-13 14:49:54.611 4498 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/neutron/common/log.py", line 34, in wrapper
2015-01-13 14:49:54.611 4498 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     return method(*args, **kwargs)
2015-01-13 14:49:54.611 4498 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/neutron/common/rpc.py", line 161, in call
2015-01-13 14:49:54.611 4498 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     context, msg, rpc_method='call', **kwargs)
2015-01-13 14:49:54.611 4498 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/neutron/common/rpc.py", line 187, in __call_rpc_method
2015-01-13 14:49:54.611 4498 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     return func(context, msg['method'], **msg['args'])
2015-01-13 14:49:54.611 4498 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/client.py", line 389, in call
2015-01-13 14:49:54.611 4498 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     return self.prepare().call(ctxt, method, **kwargs)
2015-01-13 14:49:54.611 4498 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/oslo/messaging/rpc/client.py", line 152, in call
2015-01-13 14:49:54.611 4498 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     retry=self.retry)
2015-01-13 14:49:54.611 4498 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/oslo/messaging/transport.py", line 90, in _send
2015-01-13 14:49:54.611 4498 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     timeout=timeout, retry=retry)
2015-01-13 14:49:54.611 4498 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 408, in send
2015-01-13 14:49:54.611 4498 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     retry=retry)
2015-01-13 14:49:54.611 4498 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.7/site-packages/oslo/messaging/_drivers/amqpdriver.py", line 397, in _send
2015-01-13 14:49:54.611 4498 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     result = self._waiter.wait(msg_id, timeout)
2015-01-13 14:49:54.611 4498 TRACE neutron.plugins.openvswitch ...

Closed for the following reason: the question is answered, right answer was accepted by mickt
close date 2015-01-15 05:41:35.051569

Comments

Does neutron-openvswitch-agent start after reboot on the Network Node && Compute Node?

dbaxps (2015-01-12 10:06:06 -0500)

Yes, it is showing as running. I have updated the above with further detail.

mickt (2015-01-13 03:07:07 -0500)

Wait, packstack failed to bring up neutron-openvswitch-agent on the compute node. Then you wrote:
A workaround was provided whereby this file is created (even though ls shows it as existing after the above failure) and then a reinstall, etc.

dbaxps (2015-01-13 03:46:56 -0500)

Sorry, I cannot follow you. I have just now installed this service on several compute nodes (F21) via packstack with no problems. I would suggest you follow the RDO install instructions.

dbaxps (2015-01-13 03:47:30 -0500)

I am following the instructions. If I kickstart the servers, I'm sure it will install correctly. Reinstalling seems not to work, hence the workaround.

Have you reinstalled on an existing server? If so, did you have to prep the server first? E.g. restoring the interface, as the existing br-ex causes an issue.
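(Roughly the prep I have in mind before re-running packstack, assuming the leftover bridge is the problem; interface names are from my setup:)

ovs-vsctl del-br br-ex        # drop the external bridge left by the previous install
# restore the original ifcfg file for the physical interface (bond0 here), then
systemctl restart network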

mickt (2015-01-13 04:07:50 -0500)

1 answer


answered 2015-01-14 07:02:49 -0500 by dbaxps

updated 2015-01-14 07:06:15 -0500

Using the answer field as a comment due to formatting options.
Please view my sample here: http://bderzhavets.blogspot.com/2014/...
I have several concerns. Does packstack understand:

CONFIG_NOVA_COMPUTE_PRIVIF=bond0
CONFIG_NOVA_NETWORK_PUBIF=bond0
CONFIG_NOVA_NETWORK_PRIVIF=bond0
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-ex:bond0
CONFIG_NEUTRON_OVS_TUNNEL_IF=bond0

You have

CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-ex:bond0

In my sample

CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=

and

CONFIG_NOVA_COMPUTE_PRIVIF=enp5s1
CONFIG_NOVA_NETWORK_PUBIF=enp2s0
CONFIG_NOVA_NETWORK_PRIVIF=enp5s1
CONFIG_NEUTRON_OVS_TUNNEL_IF=enp5s1

With your notation, how would packstack understand which is the management interface and which is the tunnel interface?


Comments

The same interface is used for everything because my servers currently have only one connected network. The interface is configured as bond0 to work around CNDN, because packstack seems to require that compute host interfaces be the same.

I will retry with an empty CONFIG_NEUTRON_OVS_BRIDGE_IFACES parameter.

mickt (2015-01-14 07:09:05 -0500)

Are you saying that this will not work? This configuration worked with Icehouse. There is only one connected network on each server, and the servers are distributed between racks, so I don't currently have a choice. In time, if/when I migrate to OpenStack, I'll use separate interfaces.

mickt (2015-01-14 07:44:48 -0500)

Omitting the above parameter results in br-ex not getting an IP address and no ifcfg-br-ex file being created.
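I suppose I could write the files by hand; a rough sketch of what I'd expect them to look like (addresses are placeholders, not my real ones):

# /etc/sysconfig/network-scripts/ifcfg-br-ex
DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.0.2.10
NETMASK=255.255.255.0
GATEWAY=192.0.2.1

# /etc/sysconfig/network-scripts/ifcfg-bond0 (bond0 attached as an OVS port on br-ex)
DEVICE=bond0
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-ex
ONBOOT=yes
BOOTPROTO=none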

mickt (2015-01-14 09:42:30 -0500)

Although ifconfig does not show br-int & br-tun, ip a does, though it shows them as DOWN. ovs-vsctl also shows the bridge detail; see the main body. I can also create tenant networks and instances, and I have internal and external network connectivity, etc.
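(Roughly what I'm looking at; the exact output is trimmed:)

ip a show br-int        # interface exists but state is DOWN
ip a show br-tun        # same for br-tun
ovs-vsctl show          # bridges, ports and patch interfaces all present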

mickt (2015-01-15 04:31:31 -0500)

In a single-interface configuration, is the current behaviour correct, given that ovs-vsctl show looks OK?

mickt (2015-01-15 04:58:15 -0500)
