Network Problem Creating Instance with Single NIC Compute Node

Hello Everyone,

I'm trying to install OpenStack 1.7.2 from RDO on CentOS 7. I started following this tutorial, but since I don't have two NICs on the Compute Node, I devised this architecture:

[architecture diagram: controller with two NICs (ens1f0 attached to br-ex, ens1f1 attached to br-ens1f1) and a single-NIC compute node (eno1 attached to br-eno1), both on the 192.168.0.0/24 network]

On the Controller Node, I used packstack to install all the management services, including neutron-server. The Compute Node runs only the nova services and the neutron Open vSwitch agent.
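For context, the relevant entries in my packstack answer file were along these lines (the host values here are placeholders, not copied from my actual file):

```
CONFIG_COMPUTE_HOSTS=<compute node IP>
CONFIG_NETWORK_HOSTS=<controller IP>
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vlan
CONFIG_NEUTRON_ML2_VLAN_RANGES=physnet1:1000:2000
```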

The Controller Node has the following configurations:

/etc/neutron/plugins/ml2/ml2_conf.ini:

[ml2]
type_drivers = vlan
tenant_network_types = vlan
mechanism_drivers = openvswitch
path_mtu = 0
[ml2_type_flat]
[ml2_type_vlan]
network_vlan_ranges = physnet1:1000:2000
[ml2_type_gre]
[ml2_type_vxlan]
[ml2_type_geneve]
[securitygroup]
enable_security_group = True

/etc/neutron/plugins/ml2/openvswitch_agent.ini:

[ovs]
integration_bridge = br-int
bridge_mappings = physnet1:br-ens1f1
enable_tunneling=False
[agent]
polling_interval = 2
l2_population = False
arp_responder = False
prevent_arp_spoofing = True
enable_distributed_routing = False
drop_flows_on_start=False
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

/etc/sysconfig/network-scripts/ifcfg-ens1f0:

NAME="ens1f0"
DEVICE="ens1f0"
HWADDR="XX:XX:XX:XX:XX:XX"
ONBOOT="yes"

/etc/sysconfig/network-scripts/ifcfg-ens1f1:

NAME="ens1f1"
DEVICE="ens1f1"
HWADDR=XX:XX:XX:XX:XX:XX
ONBOOT=yes
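
Since ens1f1 shows up as a port on br-ens1f1 in the ovs-vsctl output below, I assume it was attached with ovs-vsctl add-port. For the attachment to survive a reboot, I believe the ifcfg file for the physical interface would normally carry the OVS port stanza instead (my assumption, based on the initscripts OVS integration; the same would apply to ens1f0/br-ex and, on the Compute Node, eno1/br-eno1):

```
TYPE="OVSPort"
DEVICETYPE="ovs"
OVS_BRIDGE="br-ens1f1"
```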

/etc/sysconfig/network-scripts/ifcfg-br-ex:

NAME="br-ex"
DEVICE="br-ex"
ONBOOT="yes"
NETBOOT="yes"
IPV6INIT="yes"
BOOTPROTO="static"
IPADDR="172.10.10.1"
PREFIX="24"
GATEWAY="172.10.10.193"
DNS1="172.10.10.192"
TYPE="OVSBridge"
DEVICETYPE="ovs"
DEFROUTE="yes"

/etc/sysconfig/network-scripts/ifcfg-br-ens1f1:

NAME="br-ens1f1"
DEVICE="br-ens1f1"
ONBOOT=yes
NETBOOT=yes
IPV6INIT=yes
BOOTPROTO=static
IPADDR="192.168.0.1"
NETMASK="255.255.255.0"
DNS1="192.168.0.1"
DNS2="200.17.41.36"
TYPE="OVSBridge"
DEVICETYPE="ovs"

On the controller, result of ovs-vsctl show:

54dc7bbf-c954-4d81-b121-96d0d0866e08
    Bridge br-ex
        Port "ens1f0"
            Interface "ens1f0"
        Port "qg-f42cded3-35"
            Interface "qg-f42cded3-35"
                type: internal
        Port br-ex
            Interface br-ex
                type: internal
    Bridge "br-ens1f1"
        Port "ens1f1"
            Interface "ens1f1"
        Port "br-ens1f1"
            Interface "br-ens1f1"
                type: internal
        Port "phy-br-ens1f1"
            Interface "phy-br-ens1f1"
                type: patch
                options: {peer="int-br-ens1f1"}
    Bridge br-int
        fail_mode: secure
        Port "int-br-ens1f1"
            Interface "int-br-ens1f1"
                type: patch
                options: {peer="phy-br-ens1f1"}
        Port "tap9ea1a119-58"
            tag: 1
            Interface "tap9ea1a119-58"
                type: internal
        Port br-int
            Interface br-int
                type: internal
        Port "qr-7022af0e-4a"
            tag: 1
            Interface "qr-7022af0e-4a"
                type: internal
    ovs_version: "2.4.0"

On the Compute Node, I have the following settings:

/etc/neutron/plugins/ml2/openvswitch_agent.ini:

[ovs]
integration_bridge = br-int
enable_tunneling=False
[agent]
polling_interval = 2
l2_population = False
arp_responder = False
prevent_arp_spoofing = True
enable_distributed_routing = False
drop_flows_on_start=False
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
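One thing I notice while writing this up: unlike the controller's agent config, this [ovs] section has no bridge_mappings entry. If br-eno1 is supposed to carry physnet1 traffic on the Compute Node, I assume the agent would need a mapping like this (my assumption, mirroring the controller's config):

```
[ovs]
bridge_mappings = physnet1:br-eno1
```

I'm not sure whether that line is required in my topology, so I'm listing it only as a possible suspect.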

/etc/sysconfig/network-scripts/ifcfg-eno1:

NAME="eno1"
DEVICE="eno1"
HWADDR="XX:XX:XX:XX:XX:XX"
ONBOOT=yes

/etc/sysconfig/network-scripts/ifcfg-br-eno1:

NAME="br-eno1"
DEVICE="br-eno1"
ONBOOT=yes
NETBOOT=yes
IPV6INIT=yes
BOOTPROTO=static
IPADDR="192.168.0.2"
PREFIX="24"
GATEWAY="192.168.0.1"
DNS1="192.168.0.1"
DNS2="8.8.8.8"
TYPE="OVSBridge"
DEVICETYPE="ovs"

On the Compute Node, result of ovs-vsctl show:

b9dfa0a7-191c-42ab-9730-ed2e17e57943
    Bridge "br-eno1"
        Port "eno1"
            Interface "eno1"
        Port "phy-br-eno1"
            Interface "phy-br-eno1"
                type: patch
                options: {peer="int-br-eno1"}
        Port "br-eno1"
            Interface "br-eno1"
                type: internal
    Bridge br-int
        fail_mode: secure
        Port "int-br-eno1"
            Interface "int-br-eno1"
                type: patch
                options: {peer="phy-br-eno1"}
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.4.0"

I uploaded the cirros image and created the following networks/subnet:

# neutron net-list
+--------------------------------------+----------+------------------------------------------------------+
| id                                   | name     | subnets                                              |
+--------------------------------------+----------+------------------------------------------------------+
| 1b76abfb-e436-488d-b40d-9af4c72e4c6a | priv_net | 9f6fcf35-573d-4452-b5a9-af81af0b2d06 192.168.20.0/24 |
| da900eb8-9e2e-4f9d-99eb-66698036106a | pub_net  | 39a17dc1-67f0-4316-9f53-5e0ba352caee 192.168.2.0/24  |
+--------------------------------------+----------+------------------------------------------------------+

# neutron subnet-list
+--------------------------------------+-------------+-----------------+----------------------------------------------------+
| id                                   | name        | cidr            | allocation_pools                                   |
+--------------------------------------+-------------+-----------------+----------------------------------------------------+
| 9f6fcf35-573d-4452-b5a9-af81af0b2d06 | priv_subnet | 192.168.20.0/24 | {"start": "192.168.20.2", "end": "192.168.20.254"} |
| 39a17dc1-67f0-4316-9f53-5e0ba352caee | pub_subnet  | 192.168.2.0/24  | {"start": "192.168.2.6", "end": "192.168.2.10"}    |
+--------------------------------------+-------------+-----------------+----------------------------------------------------+

There is a router (router0) with its external gateway set to pub_net and an interface on priv_net. When I try to boot an instance from the cirros image, the build fails. I get the following messages in /var/log/nova/nova-conductor.log on the Controller:

2016-02-16 13:02:55.143 11605 ERROR nova.scheduler.utils [req-5f908d32-dfd0-411a-a65e-738c0cde2fae 9bd8a1e731b64e6c9030f5731062628f a849aa28819e40318d660bc878d5f3f4 - - -] [instance: 986b08bd-ef8a-4249-9a74-690ec702b3af] Error from last host: computenode01 (node computenode01): [u'Traceback (most recent call last):\n', u'  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1905, in _do_build_and_run_instance\n    filter_properties)\n', u'  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2057, in _build_and_run_instance\n    instance_uuid=instance.uuid, reason=six.text_type(e))\n', u'RescheduledException: Build of instance 986b08bd-ef8a-4249-9a74-690ec702b3af was re-scheduled: Binding failed for port f0aaf32b-220c-41a1-b69e-65ffad94b106, please check neutron logs for more information.\n']
2016-02-16 13:02:55.145 11605 WARNING nova.scheduler.utils [req-5f908d32-dfd0-411a-a65e-738c0cde2fae 9bd8a1e731b64e6c9030f5731062628f a849aa28819e40318d660bc878d5f3f4 - - -] Failed to compute_task_build_instances: Exceeded maximum number of retries. Exceeded max scheduling attempts 3 for instance 986b08bd-ef8a-4249-9a74-690ec702b3af. Last exception: [u'Traceback (most recent call last):\n', u'  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1905, in _do_build_and_run_instance\n    filter_properties)\n', u'  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2057, in _build_and_run_instance\n    instance_uuid=instance.uuid, reason=six.text_type(e))\n', u'RescheduledException: Build of instance 986b08bd-ef8a-4249-9a74-690ec702b3af was re-scheduled: Binding failed for port f0aaf32b-220c-41a1-b69e-65ffad94b106, please check neutron logs for more information.\n']
2016-02-16 13:02:55.146 11605 WARNING nova.scheduler.utils [req-5f908d32-dfd0-411a-a65e-738c0cde2fae 9bd8a1e731b64e6c9030f5731062628f a849aa28819e40318d660bc878d5f3f4 - - -] [instance: 986b08bd-ef8a-4249-9a74-690ec702b3af] Setting instance to ERROR state.

Following the error's advice, I checked the neutron logs on the Compute Node. The file /var/log/neutron/openvswitch-agent.log contains the following:

2016-02-16 13:02:53.308 16996 INFO neutron.agent.common.ovs_lib [req-5e1837ec-6f38-4d43-9e56-9480186a2cc0 - - - - -] Port 6d534397-5300-40a8-a43e-6b47017ae446      
2016-02-16 13:02:53.308 16996 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-5e1837ec-6f38-4d43-9e56-9480186a2cc0 - - - - -] port_unbound(): net_uuid None not in local_vlan_map
2016-02-16 13:02:53.313 16996 INFO neutron.agent.common.ovs_lib [req-5e1837ec-6f38-4d43-9e56-9480186a2cc0 - - - - -] Port 7a90fd49-3eae-4c72-be1c-d974d9ae5364 not present in bridge br-int
2016-02-16 13:02:53.313 16996 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-5e1837ec-6f38-4d43-9e56-9480186a2cc0 - - - - -] port_unbound(): net_uuid None not in local_vlan_map
2016-02-16 13:02:53.313 16996 INFO neutron.agent.securitygroups_rpc [req-5e1837ec-6f38-4d43-9e56-9480186a2cc0 - - - - -] Remove device filter for [u'6d534397-5300-40a8-a43e-6b47017ae446', u'7a90fd49-3eae-4c72-be1c-d974d9ae5364']
2016-02-16 13:02:54.238 16996 INFO neutron.agent.securitygroups_rpc [req-5484f652-1925-4ee5-8089-3e0fec35cf1e 9ab8a186cb2b4c239ded58ec26575e3e 4c027152274a4a9c8e2ca447dee00cef - - -] Security group member updated [u'38d43b3b-a379-4c10-832c-37cd66554650']
2016-02-16 13:02:54.456 16996 INFO neutron.agent.securitygroups_rpc [req-92399917-a23c-42aa-ac67-d76d7f1eb133 9ab8a186cb2b4c239ded58ec26575e3e 4c027152274a4a9c8e2ca447dee00cef - - -] Security group member updated [u'38d43b3b-a379-4c10-832c-37cd66554650']
2016-02-16 13:02:55.312 16996 INFO neutron.agent.common.ovs_lib [req-5e1837ec-6f38-4d43-9e56-9480186a2cc0 - - - - -] Port f0aaf32b-220c-41a1-b69e-65ffad94b106 not present in bridge br-int
2016-02-16 13:02:55.313 16996 INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [req-5e1837ec-6f38-4d43-9e56-9480186a2cc0 - - - - -] port_unbound(): net_uuid None not in local_vlan_map
2016-02-16 13:02:55.313 16996 INFO neutron.agent.securitygroups_rpc [req-5e1837ec-6f38-4d43-9e56-9480186a2cc0 - - - - -] Remove device filter for [u'f0aaf32b-220c-41a1-b69e-65ffad94b106']

The agent complains that the network is missing from local_vlan_map and that the port is not present in br-int. I believe this is really a neutron problem, probably related to the bridge connecting the Compute Node and the Controller. In the original tutorial, the bridge interfaces have no IP addresses assigned. Does assigning addresses break the bridging?
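To pull the affected port IDs out of the agent log while debugging, I've been using a small helper along these lines (just a regex over the log text, nothing OpenStack-specific):

```python
import re

# Matches agent lines like:
#   "Port f0aaf32b-220c-41a1-b69e-65ffad94b106 not present in bridge br-int"
PATTERN = re.compile(r"Port ([0-9a-f-]{36}) not present in bridge (\S+)")

def binding_failures(log_text):
    """Return (port_id, bridge) pairs for ports the agent could not find."""
    return [(m.group(1), m.group(2)) for m in PATTERN.finditer(log_text)]

sample = ("2016-02-16 13:02:55.312 16996 INFO neutron.agent.common.ovs_lib "
          "[req-5e1837ec] Port f0aaf32b-220c-41a1-b69e-65ffad94b106 "
          "not present in bridge br-int")
print(binding_failures(sample))
```

Cross-referencing the extracted port IDs with neutron port-list is how I confirmed that f0aaf32b-220c-41a1-b69e-65ffad94b106 is the port from the nova-conductor traceback above.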
