Open vSwitch agent not coming up on second compute node using GRE tunnel

asked 2014-11-26 10:00:44 -0600 by vishwanath shivappa

updated 2014-11-26 23:32:29 -0600 by dbaxps

I have an all-in-one setup running CentOS 7 and Juno. Everything works fine on the all-in-one host. I have added a second compute node; however, while the compute service itself is up, the Neutron Open vSwitch agent is not coming up.

# nova service-list
+----+------------------+--------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host   | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+--------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-cert        | hp3bl1 | internal | enabled | up    | 2014-11-26T21:28:37.000000 | -               |
| 2  | nova-consoleauth | hp3bl1 | internal | enabled | up    | 2014-11-26T21:28:37.000000 | -               |
| 3  | nova-scheduler   | hp3bl1 | internal | enabled | up    | 2014-11-26T21:28:37.000000 | -               |
| 4  | nova-conductor   | hp3bl1 | internal | enabled | up    | 2014-11-26T21:28:43.000000 | -               |
| 6  | nova-compute     | hp3bl1 | nova     | enabled | up    | 2014-11-26T21:28:43.000000 | -               |
| 7  | nova-compute     | hp3bl2 | nova     | enabled | up    | 2014-11-26T21:28:41.000000 | -               |
+----+------------------+--------+----------+---------+-------+----------------------------+-----------------+

# neutron agent-list
+--------------------------------------+--------------------+--------+-------+----------------+---------------------------+
| id                                   | agent_type         | host   | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+--------+-------+----------------+---------------------------+
| 53cfb703-0681-4b49-b5c2-15f7af93dad8 | Open vSwitch agent | hp3bl1 | :-)   | True           | neutron-openvswitch-agent |
| 8a3b1e6a-3431-40ce-aa77-9519430c63db | L3 agent           | hp3bl1 | :-)   | True           | neutron-l3-agent          |
| af8dac02-1962-4003-b668-9826d33a0870 | Metadata agent     | hp3bl1 | :-)   | True           | neutron-metadata-agent    |
| c274d852-f03a-4ea1-afaa-cd8aaf4b1145 | DHCP agent         | hp3bl1 | :-)   | True           | neutron-dhcp-agent        |
+--------------------------------------+--------------------+--------+-------+----------------+---------------------------+

# cat /var/log/neutron/openvswitch-agent.log

2014-11-26 21:37:27.014 8704 ERROR neutron.agent.linux.ovsdb_monitor [-] Error received from ovsdb monitor: ovsdb-client: unix:/var/run/openvswitch/db.sock: receive failed (End of file)
2014-11-26 21:37:27.038 8704 ERROR neutron.agent.linux.utils [-]
Command: ['ps', '--ppid', '8813', '-o', 'pid=']
Exit code: 1
Stdout: ''
Stderr: ''
2014-11-26 21:37:27.250 8704 CRITICAL neutron [req-4922694e-4b9e-43df-b887-b3cb2fac728e None] AssertionError: Trying to re-send() an already-triggered event.
2014-11-26 21:37:27.250 8704 TRACE neutron Traceback (most recent call last):
2014-11-26 21:37:27.250 8704 TRACE neutron   File "/usr/bin/neutron-openvswitch-agent", line 10, in <module>
2014-11-26 21:37:27.250 8704 TRACE neutron     sys.exit(main())
2014-11-26 21:37:27.250 8704 TRACE neutron   File "/usr/lib/python2.7/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py", line 1545, in main
2014-11-26 21:37:27.250 8704 TRACE neutron     agent.daemon_loop()
2014-11-26 21:37:27.250 8704 TRACE neutron   File "/usr/lib/python2.7/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py", line 1472, in daemon_loop
2014-11-26 21:37:27.250 8704 TRACE neutron     self.rpc_loop(polling_manager=pm)
2014-11-26 21:37:27.250 8704 TRACE neutron   File "/usr/lib64/python2.7/contextlib.py", line 24, in __exit__
2014-11-26 21:37:27.250 8704 TRACE neutron     self.gen.next()
2014-11-26 21:37:27.250 8704 TRACE neutron   File "/usr/lib/python2.7/site-packages/neutron/agent/linux/polling.py", line 39, in get_polling_manager
2014-11-26 21:37:27.250 8704 TRACE neutron     pm.stop()
2014-11-26 21:37:27.250 8704 TRACE neutron   File "/usr/lib/python2.7/site-packages/neutron/agent/linux/polling.py", line 106, in stop
2014-11-26 21:37:27.250 8704 TRACE neutron     self._monitor.stop()
2014-11-26 21:37:27.250 8704 TRACE neutron   File "/usr/lib/python2.7/site-packages/neutron/agent/linux/async_process.py", line 89, in stop
2014-11-26 21:37:27.250 8704 TRACE neutron     self._kill()
2014-11-26 21:37:27.250 8704 TRACE neutron   File "/usr/lib/python2.7/site-packages/neutron/agent/linux/ovsdb_monitor.py", line 99, in _kill
2014-11-26 21:37:27.250 8704 TRACE neutron     super(SimpleInterfaceMonitor, self)._kill(*args, **kwargs)
2014-11-26 21:37:27.250 8704 TRACE neutron   File ...
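
The first error, ovsdb-client losing unix:/var/run/openvswitch/db.sock, normally means ovsdb-server (the openvswitch service) is not running or has died on this node; the AssertionError that follows is just the agent crashing while tearing down its monitor. A quick check, assuming the stock systemd service names on CentOS 7:

# Is Open vSwitch itself running on this compute node?
systemctl status openvswitch
# The database socket the agent connects to should exist:
ls -l /var/run/openvswitch/db.sock
# If OVS was down, start it first, then restart the agent:
systemctl start openvswitch
systemctl restart neutron-openvswitch-agent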

Comments

Please post /var/log/neutron/openvswitch-agent.log from the compute node.

dbaxps ( 2014-11-26 12:28:55 -0600 )

Please post:

1. /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini on the compute node
2. ovs-vsctl show on the compute node (a healthy example is sketched below)
3. ifconfig on the compute node
dbaxps ( 2014-11-26 23:53:12 -0600 )
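
For comparison, on a working GRE setup, ovs-vsctl show on the compute node should list a tunnel bridge carrying a GRE port whose local_ip and remote_ip match the two nodes' data-network addresses. A rough sketch (bridge names are the stock Juno defaults; the IPs and port name here are illustrative only):

    Bridge br-tun
        Port "gre-0a000001"
            Interface "gre-0a000001"
                type: gre
                options: {in_key=flow, local_ip="10.0.0.2", out_key=flow, remote_ip="10.0.0.1"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal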

1 answer


answered 2014-12-02 16:21:17 -0600 by mpetason

You will need to look at the files dbaxps recommended. If you copied the configuration over from the original node, you may not have updated the IP addresses associated with the plugin. Treat this as a multi-node deployment and verify the configuration against the multi-node setup guide:

This:

http://docs.openstack.org/havana/inst...

Plus this:

http://docs.openstack.org/havana/inst...

You can switch the guide to Icehouse if that is what you are running, and likewise pick whichever OS you are using; the steps are very similar across releases.

I'm betting there are issues with this part of the configuration:

[ovs]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = DATA_INTERFACE_IP

Check the local_ip value: on each node it must be that node's own data-interface IP, not the address copied over from the first node.
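
A quick way to verify and apply this on the second compute node, assuming the stock Juno file layout and service names:

# local_ip must be this node's own data-network address:
grep local_ip /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
# After correcting it, restart the agent and confirm it registers:
systemctl restart neutron-openvswitch-agent
neutron agent-list
# br-tun should now carry a gre-* port toward the other node:
ovs-vsctl show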

