PortBindingFailed on All-in-One setup?

asked 2018-06-20 10:47:58 -0500

yascho

Hello, I'm trying to install OpenStack on a single node from source. So far I can create instances and local networks:

openstack server create --flavor m1.nano --image cirros --security-group default instance42
openstack --os-region-name=RegionOne network create net1
openstack subnet create --network net1 --allocation-pool start=203.0.113.101,end=203.0.113.250 --dns-nameserver 8.8.4.4 --gateway 203.0.113.1 --subnet-range 203.0.113.0/24 net1
openstack port create --network net1 port1

However, when I try to set the host for a port, the command

openstack port set --host controller port1

fails, and the following error appears in /var/log/neutron/neutron-server.log:

 ERROR neutron.plugins.ml2.managers [req-50c89b31-d5fb-46a1-8e09-46c054a69d39 77451ce05dd64297bacee951bc285360 2cdfc024b9494abeba71c407efc3ac6a - default default] Failed to bind port 6208d0ae-f8bb-4dde-bd42-c4eeba2d6ba0 on host controller for vnic_type normal using segments [{'id': '6380e4c0-399d-4864-a015-b4b4a9577ef7', 'segmentation_id': None, 'network_id': '6fd74b5f-2e6f-452d-8832-411229b37fa3', 'physical_network': None, 'network_type': 'local'}]

Why does this error occur? Is there anything I have to consider when running OpenStack on a single node?

Thanks
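The segment in the log has network_type 'local' and physical_network None. As a toy illustration only (this is not Neutron's actual code, and `can_bind` is a made-up stand-in for a mechanism driver's segment check), an agent-based ML2 mechanism driver roughly accepts a segment when its type matches something the agent is configured for; under that simplified model, nothing here matches a 'local' segment:

```python
# Toy model of ML2 port binding -- NOT Neutron source code.
# The ML2 manager asks each mechanism driver whether it can bind the
# port's segment; an agent-backed driver typically accepts flat/vlan
# segments whose physical_network is in its bridge_mappings, and tunnel
# segments whose type is in its tunnel_types.

# Segment copied from the error message above
segment = {
    "id": "6380e4c0-399d-4864-a015-b4b4a9577ef7",
    "segmentation_id": None,
    "network_id": "6fd74b5f-2e6f-452d-8832-411229b37fa3",
    "physical_network": None,
    "network_type": "local",
}

def can_bind(segment, bridge_mappings, tunnel_types):
    """Hypothetical, simplified stand-in for a driver's segment check."""
    ntype = segment["network_type"]
    if ntype in ("flat", "vlan"):
        return segment["physical_network"] in bridge_mappings
    if ntype in ("gre", "vxlan"):
        return ntype in tunnel_types
    return False  # 'local' matches neither path in this sketch

print(can_bind(segment, bridge_mappings={"public"}, tunnel_types={"vxlan"}))
# → False: under this model no driver accepts the segment,
#   which surfaces as PortBindingFailed
```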

///////// config files / service status /////////

Running services:

nova-consoleauth | controller | internal | enabled | up    
nova-scheduler   | controller | internal | enabled | up    
nova-conductor   | controller | internal | enabled | up    
nova-compute     | controller | nova     | enabled | up  

Open vSwitch agent | controller | None              | :-)   | UP    | neutron-openvswitch-agent 
Metadata agent     | controller | None              | :-)   | UP    | neutron-metadata-agent   
DHCP agent         | controller | nova              | :-)   | UP    | neutron-dhcp-agent

/etc/neutron/neutron.conf:

[DEFAULT]
core_plugin=ml2
service_plugins =
transport_url = rabbit://openstack:password@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
lock_path = /var/lib/neutron/tmp/
state_path = /var/lib/neutron
bind_host = 0.0.0.0
bind_port = 9696
dns_domain = controller
dhcp_agent_notification = true
host = controller
api_paste_config = api-paste.ini
wsgi_keep_alive = true

[agent]
availability_zone = nova

[database]
connection = mysql+pymysql://neutron:password@controller/neutron

[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = password

[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = password

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp/

/etc/neutron/plugins/ml2/ml2_conf.ini:

[ml2]
type_drivers = local
tenant_network_types = local
mechanism_drivers = openvswitch

[ml2_type_flat]
flat_networks = public,

[ml2_type_vlan]
network_vlan_ranges = public

[agent]
tunnel_types = vxlan
root_helper_daemon = sudo /usr/local/bin/neutron-rootwrap-daemon /etc/neutron/rootwrap.conf
root_helper = sudo /usr/local/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

[ovs]
datapath_type = system
tenant_network_type = local
integration_bridge = br-int
tunnel_bridge = br-tun
int_peer_patch_port = patch-tun
tun_peer_patch_port = patch-int
local_ip = 127.0.0.1
bridge_mappings = public:br-int

/etc/neutron/plugins/ml2/openvswitch_agent.ini:

[DEFAULT]
debug=true

[agent]
extensions =

[ovs]
integration_bridge = br-int
tunnel_bridge = br-tun
int_peer_patch_port = patch-tun
tun_peer_patch_port = patch-int
local_ip = 127.0.0.1
bridge_mappings = public:br-int

[securitygroup]
firewall_driver=neutron.agent.firewall.NoopFirewallDriver

ovs-vsctl show:

Manager "ptcp:6640:127.0.0.1"
    is_connected: true
Bridge br-int
    Controller "tcp:127.0.0.1:6633"
        is_connected: true
    fail_mode: secure
    Port int-br-int
        Interface int-br-int
            type: patch
            options: {peer=phy-br-int}
    Port br-int
        Interface br-int
            type: internal
    Port patch-tun
        Interface patch-tun
            type: patch
            options: {peer=patch-int}
    Port phy-br-int
        Interface phy-br-int
            type: patch
            options: {peer=int-br-int}
Bridge br-ex
    Port "eth0"
        Interface "eth0"
    Port br-ex
        Interface br-ex
            type: internal
    Port phy-br-ex
        Interface phy-br-ex
            type: patch
            options: {peer=tun-br-ex}
Bridge br-tun
    Port tun-br-ex
        Interface tun-br-ex
            type: patch
            options: {peer=phy-br-ex}
    Port patch-int
        Interface patch-int
            type: patch
            options: {peer=patch-tun ...
(output truncated)

Comments

A thought (I don't have a running cloud for checking): I could imagine that not much work goes into testing and debugging the local ML2 type driver; can you try vxlan or gre as the tenant network type?

Bernd Bausch ( 2018-06-25 17:24:31 -0500 )
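Following that suggestion, a minimal sketch of what the relevant sections might look like with vxlan as the tenant network type. All values here are assumptions to adapt, not a verified configuration:

```ini
# Sketch only -- assumes vxlan tenant networks as suggested in the comment.

# /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch

[ml2_type_vxlan]
vni_ranges = 1:1000

# /etc/neutron/plugins/ml2/openvswitch_agent.ini
[agent]
tunnel_types = vxlan

[ovs]
# local_ip should be a routable address of this node,
# not the loopback; replace the placeholder below.
local_ip = <management-IP-of-this-node>
```

After changing these files, the neutron-server and openvswitch-agent services would need a restart, and networks created afterwards should get a vxlan segment instead of a 'local' one.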