
neutron error on second controller : Failed to bind port when create instance

asked 2018-04-05 07:42:22 -0600

vercetty92

updated 2018-04-05 07:45:33 -0600

Hi Folks,

I have a 2 nodes openstack cluster (10.106.1.90 & 10.106.1.99). Both nodes are controller and compute.

I use a pacemaker Virtual IP (10.106.1.101) to access the dashboard. All the services are running in active/active mode.

When the first node owns the VIP, I have no problem creating an instance. But when I move the VIP to the second controller, I can't create a new instance (no matter whether I create it on the first node or the second node with --availability-zone).

I always have this error in neutron/server.log on the second controller:

ERROR neutron.plugins.ml2.managers [req-2f4a2cf5-7db8-416c-ae32-f6f3f558b898 6b2908509afa46b7bdd2706a6b30bbd6 e98b83da4c184979ba862611360ec45d - default default] Failed to bind port 0de77f5c-3bb1-439b-984e-981736968328 on host openstack-2 for vnic_type normal using segments [{'network_id': '8a70bed4-8a5b-41eb-a2bc-cff61d9836a9', 'segmentation_id': None, 'physical_network': u'provider', 'id': 'b22b146c-7402-46a2-a3b2-9443173808b6', 'network_type': u'flat'}]
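For what it's worth, a "Failed to bind port" for a flat network usually means the ML2 plugin found no alive agent on the target host (openstack-2 here) whose mappings include the 'provider' physical network. A few diagnostic commands worth running (the agent ID below is a placeholder):

```shell
# Is a linuxbridge agent registered and alive on openstack-2?
openstack network agent list --host openstack-2

# Inspect the mappings the agent actually reported to the server
# (<AGENT_ID> is a placeholder for the linuxbridge agent's ID):
openstack network agent show <AGENT_ID> -f value -c configuration
```

If the agent is missing, dead, or its reported mappings do not mention "provider", binding on that host will fail exactly like this.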

I use network option 1, here are the changes I have applied in neutron.conf:

[DEFAULT]
...
core_plugin = ml2
service_plugins =
transport_url = rabbit://openstack:xxxx@10.106.1.90,openstack:xxxx@10.106.1.99
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
dhcp_agents_per_network = 2
[database]
...
connection = mysql+pymysql://neutron:xxxx@10.106.1.101/neutron
[keystone_authtoken]
auth_uri = http://10.106.1.101:5000
auth_url = http://10.106.1.101:35357
memcached_servers = 10.106.1.90:11211,10.106.1.99:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = xxxxx
[nova]
auth_url = http://10.106.1.101:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = xxxx
[oslo_concurrency]
...
lock_path = /var/lib/neutron/tmp
...
[oslo_messaging_rabbit]
rabbit_retry_interval=1
rabbit_retry_backoff=2
rabbit_max_retries=0
rabbit_durable_queues=true
rabbit_ha_queues=true

changes done on /etc/neutron/plugins/ml2/ml2_conf.ini:

[ml2]
...
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security
[ml2_type_flat]
...
flat_networks = provider
[securitygroup]
...
enable_ipset = true

changes done on /etc/neutron/plugins/ml2/linuxbridge_agent.ini:

[linux_bridge]
physical_interface_mappings = provider:bond0
[vxlan]
enable_vxlan = false
[securitygroup]
...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
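Since port binding depends on the agent running on the target host, it may be worth confirming on both nodes that bond0 exists and that the agent is up (bond0 is taken from the mapping above; service name assumes a systemd-based distro):

```shell
ip link show bond0                            # interface must exist and be UP
systemctl status neutron-linuxbridge-agent    # agent must be active on each node

# After any change to linuxbridge_agent.ini:
systemctl restart neutron-linuxbridge-agent
```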

changes done on /etc/neutron/dhcp_agent.ini:

[DEFAULT]
...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

changes done on /etc/neutron/metadata_agent.ini:

[DEFAULT]
nova_metadata_host = 10.106.1.101
metadata_proxy_shared_secret = xxxx

The symlink /etc/neutron/plugin.ini has been created, net.bridge.bridge-nf-call-iptables and net.bridge.bridge-nf-call-ip6tables are set to "1", and the br_netfilter module is loaded on both controllers.
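These settings are quick to verify on each node, for example:

```shell
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
ls -l /etc/neutron/plugin.ini    # should point at plugins/ml2/ml2_conf.ini
```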

This command has been run on each controller:

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

The network has been created like this:

openstack network create --share --external --provider-physical-network provider --provider-network-type flat provider

The subnet has been created like this:

openstack subnet create --network provider --allocation-pool start=10.106.1.102,end=10.106.1.110 --dns-nameserver 10.106.1.253 --gateway 10.106.1.254 --subnet-range 10.106.1.0/24 provider
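To double-check that the network really carries the flat 'provider' segment the error message refers to, something like:

```shell
openstack network show provider -c "provider:network_type" -c "provider:physical_network"
openstack subnet show provider -c cidr -c allocation_pools
```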

Did I miss something?

Thanks a lot in advance


2 answers


answered 2018-04-12 14:18:26 -0600

Can you please run crm_mon -1 or crm_mon -2 to see more details? Also, neutron agent-list might help us.
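For example (assuming the standard pacemaker and OpenStack CLIs are available on a controller):

```shell
# One-shot view of pacemaker resource/VIP status:
crm_mon -1

# All neutron agents should report alive on both hosts:
openstack network agent list
```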

0

answered 2018-04-06 04:55:32 -0600

Rupesh Chowdary

Check the l3_agent.log file and capture the info while you are creating the instance.


Comments

Hi, I don't have any L3 service or log running. Isn't the l3_agent only present when network option 2 is used? Thx

vercetty92 (2018-04-06 08:10:35 -0600)
