
Problem with neutron-openvswitch-agent on Compute Node

asked 2014-09-19 14:02:49 -0600

Iván Sixto

updated 2014-09-21 08:48:02 -0600

Hi,

First of all, sorry for my English level... I've been having some issues lately.

I've got a 3-node installation: Controller, Network and Compute node. I can't launch instances; the compute log reports "Unexpected vif_type=binding_failed". I've been reading about this problem and tried reinstalling the compute node, but now I'm not able to run the neutron-openvswitch-agent on that node, so the problem remains.

From Controller node:

[admin@zeus ~]$ neutron agent-list
+--------------------------------------+--------------------+---------+-------+----------------+
| id                                   | agent_type         | host    | alive | admin_state_up |
+--------------------------------------+--------------------+---------+-------+----------------+
| 374c0837-9753-4d93-a90d-82034af445fe | Open vSwitch agent | pandora | :-)   | True           |
| 5b44e29f-7cee-4fb3-aa56-05591f0ddef4 | L3 agent           | pandora | :-)   | True           |
| 7997fec2-2195-4a6c-94d2-854b1b30f15a | DHCP agent         | pandora | :-)   | True           |
| a084079e-59aa-4dee-a403-8c359321269f | Metadata agent     | pandora | :-)   | True           |
+--------------------------------------+--------------------+---------+-------+----------------+

So I see that only the Network node (pandora) agents are registered; the Compute node is not communicating with neutron...
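
Since an agent only shows up in this list once it reports in over the message bus, a quick sanity check (just a sketch; qpid's standard AMQP port 5672 is assumed) would be whether hades can reach the broker on zeus at all:

telnet zeus 5672    # connection refused / timeout here would explain the missing agent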

From compute node:

[root@hades ~]# service neutron-openvswitch-agent status
neutron-openvswitch-agent interrumpido pero existe un archivo pid
# (Spanish locale) It says that neutron-openvswitch-agent is dead, but there is a pid file...
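
The usual first checks for a "dead but pid file exists" state look roughly like this (sketch only; the pid and log paths are typical RDO defaults and may differ on your install):

rm -f /var/run/neutron/neutron-openvswitch-agent.pid   # clear the stale pid file
service openvswitch status                             # the agent needs OVS itself running
service neutron-openvswitch-agent restart
tail -n 50 /var/log/neutron/openvswitch-agent.log      # look for the real failure reason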

I've got no idea what is happening... I hope for some help.

NETWORK NODE CONFIG FILE: neutron.conf:

[root@pandora ~]# egrep -v '^(#|$)' /etc/neutron/neutron.conf

[DEFAULT]
auth_strategy = keystone
rpc_backend = neutron.openstack.common.rpc.impl_qpid
qpid_hostname = zeus
core_plugin = ml2
service_plugins = router
[quotas]
[agent]
[keystone_authtoken]
auth_uri = http://zeus:5000
auth_host = zeus
auth_protocol = http
auth_port = 35357
admin_tenant_name = service
admin_user = neutron
admin_password = XXXXX
[database]
[service_providers]
service_provider=VPN:openswan:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default

NETWORK NODE CONFIG FILE: ml2_conf.ini

[root@pandora ~]# egrep -v '^(#|$)' /etc/neutron/plugins/ml2/ml2_conf.ini 
[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_flat]
[ml2_type_vlan]
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[ml2_type_vxlan]
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
[ovs]
local_ip = 192.168.11.3
tunnel_type = gre
enable_tunneling = True

NETWORK NODE CONFIG FILE: ovs_neutron_plugin.ini

[root@pandora ~]# egrep -v '^(#|$)' /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini 
[ovs]
[agent]
[securitygroup]

COMPUTE NODE CONFIG FILE: nova.conf

[root@hades ~]#  egrep -v '^(#|$)' /etc/nova/nova.conf 
[DEFAULT]
auth_strategy = keystone
rpc_backend = qpid
qpid_hostname = zeus
my_ip = 192.168.10.4
vnc_enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 192.168.10.4
novncproxy_base_url = http://zeus:6080/vnc_auto.html
glance_host = zeus
network_api_class = nova.network.neutronv2.api.API
neutron_url = http://zeus:9696
neutron_auth_strategy = keystone
neutron_admin_tenant_name = service
neutron_admin_username = neutron
neutron_admin_password = XXXXXX
neutron_admin_auth_url = http://zeus:35357/v2.0
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
security_group_api = neutron
[baremetal]
[cells]
[conductor]
[database]
connection = mysql://nova:xxxxxx@zeus/nova
[hyperv]
[image_file_url]
[keymgr]
[keystone_authtoken]
auth_uri = http://zeus:5000
auth_host = zeus
auth_protocol = http
auth_port = 35357
admin_user = nova
admin_tenant_name = service
admin_password = XXXX

COMPUTE NODE CONFIG FILE: neutron.conf

[DEFAULT]
auth_strategy = keystone
rpc_backend = neutron.openstack.common.rpc.impl_qpid
qpid_hostname = zeus
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://zeus:8774/v2
nova_admin_username = nova
nova_admin_tenant_id =29bfba37794349c194dc6fd977b923e4
nova_admin_password = XXXXX
nova_admin_auth_url = http://zeus:35357/v2.0
core_plugin = ml2
service_plugins = router
verbose = True
[quotas]
[agent]
[keystone_authtoken]
auth_uri = http://zeus:5000
auth_host = zeus
auth_protocol = http
auth_port = 35357
admin_tenant_name = service
admin_user = neutron
admin_password = XXXXXXX
[database]
connection = mysql://neutron:XXXXXXX@zeus/neutron
[service_providers]
service_provider=VPN:openswan:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default

COMPUTE NODE CONFIG FILE: ml2_conf.ini

[root@hades ~]# egrep -v '^(#|$)' /etc/neutron/plugins/ml2/ml2_conf ...

Comments

Hi, I am facing a similar issue too. I tried bringing up an instance in a VXLAN network but it failed. When I debugged, I got the vif_type=binding_failed issue. Further, I found that the openvswitch agent on my nova nodes is not alive. I have separate neutron and nova hosts. If you found a solution, please help.

Nusrath ( 2015-12-04 07:33:20 -0600 )

3 answers


answered 2014-09-19 23:56:36 -0600

dbaxps

updated 2014-09-21 07:15:24 -0600

Per https://kashyapc.fedorapeople.org/vir...

2.  ml2_conf.ini:

    $ cat /etc/neutron/plugins/ml2/ml2_conf.ini | grep -v ^$ | grep -v ^#
    [ml2]
    type_drivers = gre
    tenant_network_types = gre
    mechanism_drivers = openvswitch
    [ml2_type_flat]
    [ml2_type_vlan]
    [ml2_type_gre]
    tunnel_id_ranges = 1:1000
    [ml2_type_vxlan]
    [securitygroup]
    firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
    enable_security_group = True
    [ovs]
    local_ip = 192.169.142.97
    [agent]
    tunnel_types = gre
    root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf


3. ovs_neutron_plugin.ini (same as ml2_conf.ini):

    $ cat /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini | grep -v ^$ | grep -v ^#
    [ml2]
    type_drivers = gre
    tenant_network_types = gre
    mechanism_drivers = openvswitch
    [ml2_type_flat]
    [ml2_type_vlan]
    [ml2_type_gre]
    tunnel_id_ranges = 1:1000
    [ml2_type_vxlan]
    [securitygroup]
    firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
    enable_security_group = True
    [ovs]
    local_ip = 192.169.142.97
    [agent]
    tunnel_types = gre
    root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
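
One common way to keep these two files identical (a sketch of what is often done on RDO-style installs; adjust the paths to your layout) is to make the plugin file a symlink:

    ln -sf /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini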

$ cat /etc/neutron/neutron.conf | grep -v ^$ | grep -v ^# 
[DEFAULT]
verbose = True
debug = True
agent_down_time = 75
lock_path = $state_path/lock
notification_driver = neutron.openstack.common.notifier.rpc_notifier
auth_strategy = keystone
rpc_backend = neutron.openstack.common.rpc.impl_kombu
rabbit_host = 192.169.142.97
rabbit_userid = guest
rabbit_password = fedora
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://192.169.142.97:8774/v2
nova_admin_username = nova
nova_admin_tenant_id = 3e112abc4c4b4214b8efbd627a32f75e
nova_admin_password = fedora
nova_admin_auth_url = http://192.169.142.97:35357/v2.0
core_plugin = ml2
service_plugins = router
[quotas]
[agent]
report_interval = 5
[keystone_authtoken]
auth_host = 192.169.142.97
auth_port = 35357
auth_protocol = http
admin_tenant_name = services
admin_user = neutron
admin_password = fedora
signing_dir = $state_path/keystone-signing
auth_uri = http://192.169.142.97:5000
[database]
connection = mysql://neutron:fedora@192.169.142.97/neutron
[service_providers]
service_provider=VPN:openswan:neutron.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default

Per your report:

   cirros-ds 'net' up at 181.39
    checking http://169.254.169.254/2009-04-04/instance-id
    failed 1/20: up 181.40. request failed
    failed 2/20: up 183.54. request failed
    failed 3/20: up 185.55. request failed
    failed 4/20: up 187.56. request failed
    failed 5/20: up 189.57. request failed
    failed 6/20: up 191.58. request failed
    failed 7/20: up 193.59. request failed
    failed 8/20: up 195.60. request failed
    failed 9/20: up 197.61. request failed
    failed 10/20: up 199.62. request failed
    failed 11/20: up 201.63. request failed
    failed 12/20: up 203.64. request failed
    failed 13/20: up 205.65. request failed
    failed 14/20: up 207.66. request failed
    failed 15/20: up 209.67. request failed
    failed 16/20: up 211.68. request failed
    failed 17/20: up 213.69. request failed
    failed 18/20: up 215.70. request failed
    failed 19/20: up 217.71. request failed
    failed 20/20: up 219.71. request failed
    failed to read iid from metadata. tried 20

The CirrOS instance failed to access the nova metadata server, so cloud-init failed. See http://bderzhavets.blogspot.com/2014/06/rdo-setup-two-real-node.html

View *.conf files here https://kashyapc.fedorapeople.org/vir...
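
To check whether the metadata path itself is working, a rough sketch on the network node (the namespace name below is illustrative; list the real ones with ip netns, and 9697 is neutron's default metadata proxy port):

ip netns
ip netns exec qrouter-<router-id> netstat -lntp | grep 9697   # is neutron-ns-metadata-proxy listening?
service neutron-metadata-agent status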

The mistake is here: the IPs should be swapped in the second report.

On Network Node

Bridge br-tun
        Port "gre-c0a80b03"
            Interface "gre-c0a80b03"
                type: gre
                options: {in_key=flow, local_ip="192.168.11.4", out_key=flow, remote_ip="192.168.11.3"}

On Compute Node 

 Port "gre-c0a80b03"
            Interface "gre-c0a80b03"
                type ...
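
To double-check which tunnel endpoint each node actually uses (ovs-vsctl ships with openvswitch; the 192.168.11.x addresses are the tunnel network from your config):

ovs-vsctl show | grep -A 3 'Port "gre'
ip addr | grep 192.168.11.          # which tunnel IP does this node really own?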

Comments

Tried, but the problem remains...

I think there is a problem with the openvswitch agent... Neutron is not able to see the Compute node agents...

Any idea?

By the way... is it normal not to have any neutron logs on the Compute node?

NOTE: I'll add the openvswitch logs to the top post.

Iván Sixto ( 2014-09-20 09:33:19 -0600 )

On both Controller && Compute

dbaxps ( 2014-09-20 10:05:37 -0600 )

What do you mean? Should I try that neutron.conf on the Controller node too, or should I post the controller log? :S

If it's the first, I guess you mean the Controller and not the Network node, right?

Sorry for the mess :S

Iván Sixto ( 2014-09-20 10:20:42 -0600 )

On both Neutron Server && Compute Node (Sorry)

dbaxps ( 2014-09-20 12:10:35 -0600 )

No changes...

Iván Sixto ( 2014-09-20 21:02:50 -0600 )

answered 2014-09-21 02:31:32 -0600

geir

Hi!

This is what I would've done.

  1. Since your virtual instances actually come up and send DHCP discovers, do some tcpdumps on the different interfaces on the compute node to find out how far the discovers get.
  2. If the discovers leave your compute node, do some tcpdumps on the network node to see where they end up.

I've had similar problems as well, and for me the cause was simply that the physical interface on the VM-configuration network was down. So, check your interfaces with: ip a. If the interface used for the VM-config network is down, try ifup <int>, where <int> is that particular interface. If this solves it for you, you can then have the interface started at boot time so you don't have to run this command every time. I believe you run some Fedora-based OS? In my case I ran CentOS, so I added the parameter ONBOOT=yes to my /etc/sysconfig/network-scripts/ifcfg-ethX file.
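
For example, roughly (interface names are only examples, substitute your own):

ip a                                   # is the VM-network interface UP?
tcpdump -ni eth1 'port 67 or port 68'  # on the compute node: do the DHCP discovers leave?
tcpdump -ni eth1 'ip proto 47'         # on the network node: do GRE packets (protocol 47) arrive?
ifup eth1                              # bring the interface up if it is down
# and set ONBOOT=yes in /etc/sysconfig/network-scripts/ifcfg-eth1 to make it persistent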


Comments

Hi!

Thanks for your answer, but no results...

I mean, I've tried a tcpdump while pinging a VM from one of the nodes, but I only see ARP requests...

I think there is some problem with the tunnels, because there is no contact on the network...

Agh, I've no idea... a few days ago everything was working just fine...

Iván Sixto ( 2014-09-21 07:18:46 -0600 )

answered 2014-09-19 18:37:23 -0600

erenouf

I haven't looked super carefully, but one thing did jump out at me from your config files that is different from mine: each place where you have http://zeus:5000, I have the equivalent of http://zeus:5000/v2.0, just like you already have for http://zeus:35357/v2.0. Perhaps adding the /v2.0 to those lines will help.
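
For example, a sketch of what that would look like in the [keystone_authtoken] sections (neutron.conf and nova.conf alike, assuming the same zeus endpoints):

[keystone_authtoken]
auth_uri = http://zeus:5000/v2.0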


Comments

Hi erenouf,

I forgot to mention: I tried that already, and got the same result ;)

Iván Sixto ( 2014-09-20 22:33:09 -0600 )
