# neutron-openvswitch-agent not starting up on compute node

Hello, I am trying to set up Juno in a two-node setup, with two interfaces on each node. I went through plenty of blogs on the net, and I am using packstack for the installation. After the installation succeeded, I changed the interfaces so that br-ex on the controller node can reach a subnet I need. Launching an instance fails, and further verification showed that neutron-openvswitch-agent is not running on the compute node. Can someone suggest what to check to correct my configuration? I do not see an ml2 folder on the compute node (I can see it on the controller/network node). Is this expected?
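The check on the compute node itself was along these lines (systemd service name as on my RDO/packstack CentOS 7 install; treat the exact name as an assumption elsewhere):

    # on the compute node
    systemctl status neutron-openvswitch-agent
    journalctl -u neutron-openvswitch-agent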

    [root@controller-network ~(keystone_admin)]# neutron agent-list
+--------------------------------------+--------------------+--------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host               | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+--------------------+-------+----------------+---------------------------+
| 03659441-fc17-4af2-8b53-f7867a653173 | Open vSwitch agent | controller-network | :-)   | True           | neutron-openvswitch-agent |
| 2e9fdcae-32db-489d-a153-09905243cb26 | DHCP agent         | controller-network | :-)   | True           | neutron-dhcp-agent        |
| 68011b95-b3c6-4d92-a679-8d1cc70a3e15 | Open vSwitch agent | compute            | xxx   | True           | neutron-openvswitch-agent |
| 68a020f7-f5e6-4247-9f23-db970c29b1d3 | L3 agent           | controller-network | :-)   | True           | neutron-l3-agent          |
| e02bc776-850f-4b04-808e-2d8b9be9e2ab | Metadata agent     | controller-network | :-)   | True           | neutron-metadata-agent    |
+--------------------------------------+--------------------+--------------------+-------+----------------+---------------------------+


[root@controller-network ~]# egrep -v '(^$|^#)' /etc/neutron/plugin.ini
[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
[ml2_type_vxlan]
vni_ranges =10:100
vxlan_group =224.0.0.1
[securitygroup]
enable_security_group = True

[root@controller-network ~]# egrep -v '(^$|^#)' /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
[ml2_type_vxlan]
vni_ranges =10:100
vxlan_group =224.0.0.1
[securitygroup]
enable_security_group = True

[root@controller-network ~]# egrep -v '(^$|^#)' /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
[ovs]
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 13.12.15.45
[agent]
polling_interval = 2
tunnel_types =vxlan
vxlan_udp_port =4789
l2_population = False
arp_responder = False
enable_distributed_routing = False
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

[root@compute ~(keystone_admin)]# egrep -v '(^$|^#)' /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini
[ovs]
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip =13.12.15.47
[agent]
polling_interval = 2
tunnel_types =vxlan
vxlan_udp_port =4789
l2_population = False
arp_responder = False
enable_distributed_routing = False
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver


You need to provide the logs from the openvswitch agent on the compute node in question. On the compute node where the OVS agent is not starting, enable debug=true within neutron.conf and start the agent. Then provide /var/log/neutron/openvswitch-agent.log.

( 2015-04-24 10:24:32 -0500 )
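A minimal sketch of that suggestion, assuming the RDO file layout and service name (adjust for your install):

    # /etc/neutron/neutron.conf on the compute node:
    #   [DEFAULT]
    #   debug = True

    # then restart the agent and watch its log
    systemctl restart neutron-openvswitch-agent
    tail -f /var/log/neutron/openvswitch-agent.log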

The ml2 plugin is only on the neutron API server, i.e. where neutron-server runs.

( 2015-04-24 10:25:46 -0500 )
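One way to confirm that on an RDO/packstack install (package name and paths are assumptions; verify against your distro):

    # on each node: is the ML2 plugin package installed, and is the config dir there?
    rpm -q openstack-neutron-ml2
    ls /etc/neutron/plugins/ml2/
    # neutron-server (the API server) normally runs only on the controller/network node
    systemctl status neutron-server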


I am still a newbie to OpenStack and did not have the full picture of how each service interacts. I tried the following and neutron-openvswitch-agent came up:

1. Ensured that the time on the controller and compute nodes is correct (it had drifted for whatever reason).
2. Installed the ML2 package on the compute node as well.
3. Updated ml2_conf.ini on both the controller and compute nodes as below (with local_ip on the compute node set to the compute node's own IP).
4. Restarted the machines, and the agent is now up. Instances can also be created successfully (a rough command sketch follows this list).
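
Roughly the commands behind those steps on an RDO/packstack Juno install (the package, time-sync, and service names are what I used on my setup; treat them as assumptions for yours):

    # 1. make sure the clocks agree (packstack configured ntpd for me)
    systemctl restart ntpd
    ntpq -p

    # 2. install the ML2 package on the compute node
    yum install -y openstack-neutron-ml2

    # 3. edit /etc/neutron/plugins/ml2/ml2_conf.ini on both nodes (shown below),
    #    using the compute node's own IP for local_ip on the compute node

    # 4. restart and verify
    systemctl restart neutron-openvswitch-agent
    neutron agent-list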

[root@controller-network ~(keystone_admin)]# egrep -v '(^$|^#)' /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
[ml2_type_vxlan]
vni_ranges = 10:100
vxlan_group = 224.0.0.1
[securitygroup]
enable_security_group = True
[ovs]
vxlan_udp_port=4789
tenant_network_type = vxlan
tunnel_type = vxlan
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 13.12.15.45
enable_tunneling = True
[agent]
l2_population = True
tunnel_types = vxlan
polling_interval=2

If neutron-openvswitch-agent is not starting up on the compute node while the other neutron agents are working on the controller node, it may be that the tunnel bridge is missing on the compute node. Check with:

    $ ovs-vsctl show

and there should be a br-tun bridge present for the nodes to communicate over the tunnel. If it is not there, check why it is missing.
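
For example, something along these lines on the compute node (bridge names are the defaults from the config above):

    # list the bridges and ports the agent has created
    ovs-vsctl show
    # a quick existence check for the tunnel bridge
    ovs-vsctl br-exists br-tun && echo "br-tun present" || echo "br-tun missing"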

Also, if you are using SR-IOV on your compute node, assign the tunnel bridge to the SR-IOV interface, i.e. the interface that has dynamic VNICs under it. This can be done using the following commands:

2. vim /etc/neutron/plugins/ml2/ml2_conf.ini

If you have a time offset between the compute node and the neutron server greater than the permissible agent down time (you can look at the down-time value in neutron.conf), the neutron server will treat the agent's state report as invalid. I recently hit this and enhanced some logging around the problem via this review: https://review.openstack.org/#/c/165539/
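
A quick way to check the offset and the settings involved (option names and defaults as I recall them from Juno-era neutron.conf; treat the exact values as assumptions):

    # compare the clocks on the neutron server and the compute node
    date -u    # run on both and compare

    # /etc/neutron/neutron.conf on the server:
    #   [DEFAULT]
    #   agent_down_time = 75     # seconds before an agent is considered dead
    # /etc/neutron/neutron.conf on the agent node:
    #   [agent]
    #   report_interval = 30     # how often the agent reports its state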

So now, at least, you should get a clear ERROR in the neutron server log file about the possible cause of the openvswitch agent failing to register with the ML2 plugin on your server. Note that this code fix went into Liberty.
