
Ongoing problem with starting neutron-server on Fedora 20 (RDO Havana)

asked 2014-02-09 06:47:13 -0500


updated 2014-02-09 11:22:44 -0500

In a previous successful attempt at setting up a two-node Neutron+GRE+OVS cluster, [following Kashyap's directions](http://kashyapc.fedorapeople.org/virt/openstack/Two-node-Havana-setup.txt), on real F20 boxes (using the native F20 repos) I was able to start neutron-server with the following plugin.ini:

[root@dfw02 neutron(keystone_admin)]$ cat plugin.ini | grep -v ^# | grep -v ^$
[ovs]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 192.168.1.127
[agent]
[securitygroup]
[DATABASE]
sql_connection = mysql://root:password@dfw02.localdomain/ovs_neutron
[SECURITYGROUP]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

and finally

[root@dfw02 ~]# ovs-vsctl show
7d78d536-3612-416e-bce6-24605088212f
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    Bridge br-ex
        Port "p37p1"
            Interface "p37p1"
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-tun
        Port br-tun
            Interface br-tun
                type: internal
        Port "gre-2"
            Interface "gre-2"
                type: gre
                options: {in_key=flow, local_ip="192.168.1.127", out_key=flow, remote_ip="192.168.1.137"}
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    ovs_version: "2.0.0"

Compute-node instances were able to obtain floating and internal IP addresses. I am still running this two-node cluster in the meantime, with all yum updates applied after 01/23/2014.

In a new attempt on a fresh F20 instance, neutron-server starts only when plugin.ini contains sql_connection as follows:

[DATABASE]
sql_connection = mysql://root:password@localhost/ovs_neutron
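The working and failing configurations differ only in the host component of that URL. As a minimal sketch (not part of the original setup, just illustrating the parsing), this shows which host neutron-server will actually dial for a given sql_connection line:

```shell
# Extract the host component of an SQLAlchemy-style sql_connection URL.
# The URL is the one from the first (working) attempt; the parsing is
# simplified and assumes the user:password@host/db form used in plugin.ini.
conn="mysql://root:password@dfw02.localdomain/ovs_neutron"
host=${conn#*@}    # drop everything through 'user:password@'
host=${host%%/*}   # drop the trailing '/database'
echo "neutron-server will connect to MySQL on: $host"
```

With `localhost` as the host, MariaDB authenticates the client against the `'root'@'localhost'` row; with the FQDN, it needs a matching `'root'@'dfw02.localdomain'` row in mysql.user.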

A block like:

Port "gre-2"
            Interface "gre-2"
                type: gre
                options: {in_key=flow, local_ip="192.168.1.147", out_key=flow, remote_ip="192.168.1.157"}

doesn't appear in the `ovs-vsctl show` output. Nothing works on the Compute node; all configs are the same as in the first attempt.
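For reference, that missing GRE port is normally created by neutron-openvswitch-agent once it can reach the database. The roughly equivalent manual invocation is shown below (printed rather than executed, since running it requires a live Open vSwitch; IPs are the ones from this second attempt):

```shell
# Display (not run) the ovs-vsctl command that would recreate the missing
# gre-2 port on br-tun; the agent issues the equivalent of this itself.
cat <<'CMD'
ovs-vsctl add-port br-tun gre-2 -- set interface gre-2 type=gre \
    options:local_ip=192.168.1.147 options:remote_ip=192.168.1.157 \
    options:in_key=flow options:out_key=flow
CMD
```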

The error I get from MySQL is "Access denied for 'root'@'new_hostname'". As before, new_hostname is in /etc/hosts:

192.168.1.147 new_hostname.localdomain new_hostname

and /etc/hostname contains new_hostname.localdomain.

To me this looks like a bug: neutron-server can only reach the MariaDB database when it connects via 127.0.0.1 (localhost).
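The symptom is consistent with how MariaDB matches accounts: the connecting user@host pair must match a row in mysql.user. A rough shell emulation of that matching (simplified; real matching also handles wildcard patterns and sorts rows by specificity) shows why the FQDN connection is denied:

```shell
# Rows a default install creates for root (note: no '%' wildcard row):
have_rows="localhost 127.0.0.1 ::1"
# How MariaDB sees neutron-server when it connects via the FQDN:
client_host="new_hostname.localdomain"
granted=no
for h in $have_rows; do
    if [ "$h" = "$client_host" ]; then granted=yes; fi
done
# prints: root@new_hostname.localdomain allowed: no
echo "root@$client_host allowed: $granted"
```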

I made two attempts to reproduce this from scratch, rebuilding the Controller, and each time the same neutron-server startup limitation came up.

On the compute node, 192.168.1.137:

[root@dfw01 neutron]# cat plugin.ini
[ovs]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 192.168.1.137
[agent]
[securitygroup]
[DATABASE]
sql_connection = mysql://root:password@192.168.1.127/ovs_neutron
[SECURITYGROUP]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

This no longer works, which breaks the neutron-openvswitch-agent service on the compute node.

Per the original docs:

On Controller

2. (OVS) plugin.ini

$ cat /etc/neutron/plugin.ini | grep -v ^$ | grep -v ^#
[ovs]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 192.169.142.49
[agent]
[securitygroup]
[DATABASE]
sql_connection = mysql://neutron:fedora@node1-controller/ovs_neutron
sql_max_retries=10
reconnect_interval=2
sql_idle_timeout=3600
[SECURITYGROUP]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

On Compute

2. (OVS) plugin.ini

$ cat plugin.ini | grep -v ^$ | grep -v ^#
[ovs]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
enable_tunneling = True
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 192.169.142.57
[DATABASE]
sql_connection = mysql://neutron:fedora@node1-controller/ovs_neutron
[SECURITYGROUP]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
[agent]
[securitygroup]

I believe ...


1 answer


answered 2014-02-10 07:27:32 -0500


I don't know what has happened to the MariaDB database or RDO Havana in the last two weeks, but in the meantime:

I had to manually update the table below with the root & nova passwords for the FQDN host:

[root@dfw01 ~(keystone_admin)]$ mysql -u root -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 35
Server version: 5.5.34-MariaDB MariaDB Server

Copyright (c) 2000, 2013, Oracle, Monty Program Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> SELECT User, Host, Password FROM mysql.user;

+----------+-------------------+-------------------------------------------+
| User     | Host              | Password                                  |
+----------+-------------------+-------------------------------------------+
| root     | localhost         | *E0DC09146F1310B49A34199B04274A9EED6F9EC7 |
| root     | dfw01.localdomain | *E0DC09146F1310B49A34199B04274A9EED6F9EC7 |  <-- it's critical
| root     | 127.0.0.1         | *E0DC09146F1310B49A34199B04274A9EED6F9EC7 |
| root     | ::1               | *E0DC09146F1310B49A34199B04274A9EED6F9EC7 |
| keystone | localhost         | *936E8F7AB2E21B47F6C9A7E5D9FE14DBA2255E5A |
| keystone | %                 | *936E8F7AB2E21B47F6C9A7E5D9FE14DBA2255E5A |
| glance   | localhost         | *CC67CAF178CB9A07D756302E0BBFA3B0165DFD49 |
| glance   | %                 | *CC67CAF178CB9A07D756302E0BBFA3B0165DFD49 |
| cinder   | localhost         | *028F8298C041368BA08A280AA8D1EF895CB68D5C |
| cinder   | %                 | *028F8298C041368BA08A280AA8D1EF895CB68D5C |
| neutron  | localhost         | *4DF421833991170108648F1103CD74FCB66BBE9E |
| neutron  | %                 | *03A31004769F9E4F94ECEEA61AA28D9649084839 |
| nova     | localhost         | *0BE3B501084D35F4C66DD3AC4569EAE5EA738212 |
| nova     | %                 | *0BE3B501084D35F4C66DD3AC4569EAE5EA738212 |
| nova     | dfw01.localdomain | *0BE3B501084D35F4C66DD3AC4569EAE5EA738212 |  <-- it's critical
+----------+-------------------+-------------------------------------------+

15 rows in set (0.00 sec)

Otherwise nothing works beyond "allinone" testing. Once this is done, Kashyap's schema works on an F20 two-node real cluster. I am going to file a bug regarding these updates, because I believe they should happen behind the scenes. The updated and inserted rows are what allow the remote connections to the controller from the nova-compute and neutron-openvswitch-agent services.
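A sketch of the manual fix described above, generating the GRANT statements that create the missing per-FQDN rows (the FQDN and 'password' are placeholders for your setup; adjust them and pipe the output into `mysql -u root -p` on the controller):

```shell
# Generate (not execute) the SQL that adds the missing mysql.user rows.
# Syntax is the MariaDB 5.5-era GRANT ... IDENTIFIED BY form; both the
# FQDN and the passwords below are placeholders.
FQDN=dfw01.localdomain
cat <<SQL
GRANT ALL PRIVILEGES ON *.* TO 'root'@'$FQDN' IDENTIFIED BY 'password' WITH GRANT OPTION;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'$FQDN' IDENTIFIED BY 'password';
FLUSH PRIVILEGES;
SQL
```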
