Another "No valid host was found. There are not enough hosts available" question

Please don't mark this as a duplicate. I've searched all the other questions, but none of them solved my problem.

I have a Fuel / OpenStack environment. I can start instances on the "admin_internal_net" network, but I can't do the same with "admin_floating_net".

I get the classic error "No valid host was found. There are not enough hosts available". The nova log points me to the neutron log, and the neutron log shows "neutron port blahblahblah not present in bridge br_int" for the floating IP.

I've run "nova hypervisor-list" and got just "node-2.domain.tld" (the only compute node I have). "nova list" shows my instance in the error state. The Fuel connection test is OK, and the Fuel Health Check CAN start instances with a floating IP.

What can I do?


I found an explanation here: https://ask.openstack.org/en/question/94660/port-not-present-in-bridge-br-int/

but since this is an automatic Fuel installation, I don't know how to apply the fix.


UPDATE 1

Output of "neutron agent-list":

+--------------------+-------------------+-------+-----------+---------------------------+
| agent_type         | host              | alive | admin_s_u | binary                    |
+--------------------+-------------------+-------+-----------+---------------------------+
| Open vSwitch agent | node-2.domain.tld | :-)   | True      | neutron-openvswitch-agent |
| DHCP agent         | node-1.domain.tld | :-)   | True      | neutron-dhcp-agent        |
| Open vSwitch agent | node-1.domain.tld | :-)   | True      | neutron-openvswitch-agent |
| L3 agent           | node-1.domain.tld | :-)   | True      | neutron-l3-agent          |
| Metadata agent     | node-1.domain.tld | :-)   | True      | neutron-metadata-agent    |
+--------------------+-------------------+-------+-----------+---------------------------+
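To summarize the table: node-1 (the controller) runs the DHCP, L3, metadata, and OVS agents, while node-2 (the compute node) runs only the OVS agent, and all agents report alive. A quick sketch grouping the rows above (data copied from the table):

```python
from collections import defaultdict

# Rows copied from the neutron agent-list output above: (agent_type, host).
agents = [
    ("Open vSwitch agent", "node-2.domain.tld"),
    ("DHCP agent", "node-1.domain.tld"),
    ("Open vSwitch agent", "node-1.domain.tld"),
    ("L3 agent", "node-1.domain.tld"),
    ("Metadata agent", "node-1.domain.tld"),
]

by_host = defaultdict(list)
for agent_type, host in agents:
    by_host[host].append(agent_type)

for host, types in sorted(by_host.items()):
    print(host, "->", ", ".join(types))
```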

ml2_conf.ini (controller):

    [ml2]
    # (ListOpt) List of network type driver entrypoints to be loaded from
    # the neutron.ml2.type_drivers namespace.
    #
    # type_drivers = local,flat,vlan,gre,vxlan,geneve
    type_drivers = local,flat,vlan,gre,vxlan
    # Example: type_drivers = flat,vlan,gre,vxlan,geneve

    # (ListOpt) Ordered list of network_types to allocate as tenant
    # networks. The default value 'local' is useful for single-box testing
    # but provides no connectivity between hosts.
    #
    # tenant_network_types = local
    tenant_network_types = flat,vlan
    # Example: tenant_network_types = vlan,gre,vxlan,geneve


    # (ListOpt) Ordered list of networking mechanism driver entrypoints
    # to be loaded from the neutron.ml2.mechanism_drivers namespace.
    # mechanism_drivers =
    mechanism_drivers =openvswitch
    # Example: mechanism_drivers = openvswitch,mlnx
    # Example: mechanism_drivers = arista
    # Example: mechanism_drivers = openvswitch,cisco_nexus,logger
    # Example: mechanism_drivers = openvswitch,brocade
    # Example: mechanism_drivers = linuxbridge,brocade

    # (ListOpt) Ordered list of extension driver entrypoints
    # to be loaded from the neutron.ml2.extension_drivers namespace.
    # extension_drivers =
    extension_drivers =port_security
    # Example: extension_drivers = anewextensiondriver

    # =========== items for MTU selection and advertisement =============
    # (IntOpt) Path MTU.  The maximum permissible size of an unfragmented
    # packet travelling from and to addresses where encapsulated Neutron
    # traffic is sent.  Drivers calculate maximum viable MTU for
    # validating tenant requests based on this value (typically,
    # path_mtu - max encap header size).  If <=0, the path MTU is
    # indeterminate and no calculation takes place.
    # path_mtu = 0
    path_mtu = 1500

    # (IntOpt) Segment MTU.  The maximum permissible size of an
    # unfragmented packet travelling a L2 network segment.  If <=0,
    # the segment MTU is indeterminate and no calculation takes place.
    # segment_mtu = 0

    # (ListOpt) Physical network MTUs.  List of mappings of physical
    # network to MTU value.  The format of the mapping is
    # <physnet>:<mtu val>.  This mapping allows specifying a
    # physical network MTU value that differs from the default
    # segment_mtu value.
    # physical_network_mtus =
    physical_network_mtus =physnet2:1500,physnet1:1500
    # Example: physical_network_mtus = physnet1:1550, physnet2:1500
    # ======== end of items for MTU selection and advertisement =========

    # (StrOpt) Default network type for external networks when no provider
    # attributes are specified. By default it is None, which means that if
    # provider attributes are not specified while creating external networks
    # then they will have the same type as tenant networks.
    # Allowed values for external_network_type config option depend on the
    # network type values configured in type_drivers config option.
    # external_network_type =
    # Example: external_network_type = local

    [ml2_type_flat]
    # (ListOpt) List of physical_network names with which flat networks
    # can be created. Use * to allow flat networks with arbitrary
    # physical_network names.
    #
    # flat_networks =
    flat_networks =*
    # Example:flat_networks = physnet1,physnet2
    # Example:flat_networks = *

    [ml2_type_vlan]
    # (ListOpt) List of <physical_network>[:<vlan_min>:<vlan_max>] tuples
    # specifying physical_network names usable for VLAN provider and
    # tenant networks, as well as ranges of VLAN tags on each
    # physical_network available for allocation as tenant networks.
    #
    # network_vlan_ranges =
    network_vlan_ranges =physnet2:1000:1030,physnet1
    # Example: network_vlan_ranges = physnet1:1000:2999,physnet2

    [ml2_type_gre]
    # (ListOpt) Comma-separated list of <tun_min>:<tun_max> tuples enumerating ranges of GRE tunnel IDs that are available for tenant network allocation
    # tunnel_id_ranges =
    tunnel_id_ranges =

    [ml2_type_vxlan]
    # (ListOpt) Comma-separated list of <vni_min>:<vni_max> tuples enumerating
    # ranges of VXLAN VNI IDs that are available for tenant network allocation.
    #
    # vni_ranges =
    vni_ranges =

    # (StrOpt) Multicast group for the VXLAN interface. When configured, will
    # enable sending all broadcast traffic to this multicast group. When left
    # unconfigured, will disable multicast VXLAN mode.
    #
    # vxlan_group =
    vxlan_group =224.0.0.1
    # Example: vxlan_group = 239.1.1.1

    [ml2_type_geneve]
    # (ListOpt) Comma-separated list of <vni_min>:<vni_max> tuples enumerating
    # ranges of Geneve VNI IDs that are available for tenant network allocation.
    #
    # vni_ranges =

    # (IntOpt) Geneve encapsulation header size is dynamic, this
    # value is used to calculate the maximum MTU for the driver.
    # this is the sum of the sizes of the outer ETH+IP+UDP+GENEVE
    # header sizes.
    # The default size for this field is 50, which is the size of the
    # Geneve header without any additional option headers
    #
    # max_header_size =
    # Example: max_header_size = 50 (Geneve headers with no additional options)

    [securitygroup]
    # Controls if neutron security group is enabled or not.
    # It should be false when you use nova security group.
    # enable_security_group = True
    enable_security_group = True

    # Use ipset to speed-up the iptables security groups. Enabling ipset support
    # requires that ipset is installed on L2 agent node.
    # enable_ipset = True
    firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

    [agent]
    enable_distributed_routing=False
    prevent_arp_spoofing=True
    arp_responder=False
    polling_interval=2
    drop_flows_on_start=False
    l2_population=False

    [ovs]
    enable_tunneling=False
    integration_bridge=br-int
    bridge_mappings=physnet2:br-prv,physnet1:br-floating

ml2_conf.ini (compute):

[ml2]
# (ListOpt) List of network type driver entrypoints to be loaded from
# the neutron.ml2.type_drivers namespace.
#
# type_drivers = local,flat,vlan,gre,vxlan,geneve
# Example: type_drivers = flat,vlan,gre,vxlan,geneve

# (ListOpt) Ordered list of network_types to allocate as tenant
# networks. The default value 'local' is useful for single-box testing
# but provides no connectivity between hosts.
#
# tenant_network_types = local
# Example: tenant_network_types = vlan,gre,vxlan,geneve


# (ListOpt) Ordered list of networking mechanism driver entrypoints
# to be loaded from the neutron.ml2.mechanism_drivers namespace.
# mechanism_drivers =
# Example: mechanism_drivers = openvswitch,mlnx
# Example: mechanism_drivers = arista
# Example: mechanism_drivers = openvswitch,cisco_nexus,logger
# Example: mechanism_drivers = openvswitch,brocade
# Example: mechanism_drivers = linuxbridge,brocade

# (ListOpt) Ordered list of extension driver entrypoints
# to be loaded from the neutron.ml2.extension_drivers namespace.
# extension_drivers =
# Example: extension_drivers = anewextensiondriver

# =========== items for MTU selection and advertisement =============
# (IntOpt) Path MTU.  The maximum permissible size of an unfragmented
# packet travelling from and to addresses where encapsulated Neutron
# traffic is sent.  Drivers calculate maximum viable MTU for
# validating tenant requests based on this value (typically,
# path_mtu - max encap header size).  If <=0, the path MTU is
# indeterminate and no calculation takes place.
# path_mtu = 0

# (IntOpt) Segment MTU.  The maximum permissible size of an
# unfragmented packet travelling a L2 network segment.  If <=0,
# the segment MTU is indeterminate and no calculation takes place.
# segment_mtu = 0

# (ListOpt) Physical network MTUs.  List of mappings of physical
# network to MTU value.  The format of the mapping is
# <physnet>:<mtu val>.  This mapping allows specifying a
# physical network MTU value that differs from the default
# segment_mtu value.
# physical_network_mtus =
# Example: physical_network_mtus = physnet1:1550, physnet2:1500
# ======== end of items for MTU selection and advertisement =========

# (StrOpt) Default network type for external networks when no provider
# attributes are specified. By default it is None, which means that if
# provider attributes are not specified while creating external networks
# then they will have the same type as tenant networks.
# Allowed values for external_network_type config option depend on the
# network type values configured in type_drivers config option.
# external_network_type =
# Example: external_network_type = local

[ml2_type_flat]
# (ListOpt) List of physical_network names with which flat networks
# can be created. Use * to allow flat networks with arbitrary
# physical_network names.
#
# flat_networks =
# Example:flat_networks = physnet1,physnet2
# Example:flat_networks = *

[ml2_type_vlan]
# (ListOpt) List of <physical_network>[:<vlan_min>:<vlan_max>] tuples
# specifying physical_network names usable for VLAN provider and
# tenant networks, as well as ranges of VLAN tags on each
# physical_network available for allocation as tenant networks.
#
# network_vlan_ranges =
# Example: network_vlan_ranges = physnet1:1000:2999,physnet2

[ml2_type_gre]
# (ListOpt) Comma-separated list of <tun_min>:<tun_max> tuples enumerating ranges of GRE tunnel IDs that are available for tenant network allocation
# tunnel_id_ranges =

[ml2_type_vxlan]
# (ListOpt) Comma-separated list of <vni_min>:<vni_max> tuples enumerating
# ranges of VXLAN VNI IDs that are available for tenant network allocation.
#
# vni_ranges =

# (StrOpt) Multicast group for the VXLAN interface. When configured, will
# enable sending all broadcast traffic to this multicast group. When left
# unconfigured, will disable multicast VXLAN mode.
#
# vxlan_group =
# Example: vxlan_group = 239.1.1.1

[ml2_type_geneve]
# (ListOpt) Comma-separated list of <vni_min>:<vni_max> tuples enumerating
# ranges of Geneve VNI IDs that are available for tenant network allocation.
#
# vni_ranges =

# (IntOpt) Geneve encapsulation header size is dynamic, this
# value is used to calculate the maximum MTU for the driver.
# this is the sum of the sizes of the outer ETH+IP+UDP+GENEVE
# header sizes.
# The default size for this field is 50, which is the size of the
# Geneve header without any additional option headers
#
# max_header_size =
# Example: max_header_size = 50 (Geneve headers with no additional options)

[securitygroup]
# Controls if neutron security group is enabled or not.
# It should be false when you use nova security group.
# enable_security_group = True

# Use ipset to speed-up the iptables security groups. Enabling ipset support
# requires that ipset is installed on L2 agent node.
# enable_ipset = True
firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

[agent]
enable_distributed_routing=False
prevent_arp_spoofing=True
arp_responder=False
polling_interval=2
drop_flows_on_start=False
l2_population=False

[ovs]
enable_tunneling=False
integration_bridge=br-int
bridge_mappings=physnet2:br-prv
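Note, for comparison, that the controller's bridge_mappings covers both physnets while the compute's covers only physnet2. A minimal sketch that diffs the two mappings (values copied verbatim from the two files above):

```python
# bridge_mappings values copied from the two ml2_conf.ini files above.
controller = "physnet2:br-prv,physnet1:br-floating"
compute = "physnet2:br-prv"

def parse(mappings):
    """Parse a Neutron bridge_mappings string into {physnet: bridge}."""
    return dict(pair.split(":", 1) for pair in mappings.split(","))

# Physical networks the controller maps but the compute node does not.
missing_on_compute = set(parse(controller)) - set(parse(compute))
print(missing_on_compute)  # -> {'physnet1'}
```

Since physnet1 backs br-floating on the controller, the compute node has no mapping for the floating network's physical network.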

Both openvswitch_agent.ini files (compute and controller) are identical and have no active entries (everything is commented out):

[ovs]
# Do not change this parameter unless you have a good reason to.
# This is the name of the OVS integration bridge. There is one per hypervisor.
# The integration bridge acts as a virtual "patch bay". All VM VIFs are
# attached to this bridge and then "patched" according to their network
# connectivity.
#
# integration_bridge = br-int

# Only used for the agent if tunnel_id_ranges is not empty for
# the server.  In most cases, the default value should be fine.
#
# tunnel_bridge = br-tun

# Peer patch port in integration bridge for tunnel bridge
# int_peer_patch_port = patch-tun

# Peer patch port in tunnel bridge for integration bridge
# tun_peer_patch_port = patch-int

# Uncomment this line for the agent if tunnel_id_ranges is not
# empty for the server. Set local-ip to be the local IP address of
# this hypervisor.
#
# local_ip =

# (ListOpt) Comma-separated list of <physical_network>:<bridge> tuples
# mapping physical network names to the agent's node-specific OVS
# bridge names to be used for flat and VLAN networks. The length of
# bridge names should be no more than 11. Each bridge must
# exist, and should have a physical network interface configured as a
# port. All physical networks configured on the server should have
# mappings to appropriate bridges on each agent.
#
# Note: If you remove a bridge from this mapping, make sure to disconnect it
# from the integration bridge as it won't be managed by the agent anymore.
#
# bridge_mappings =
# Example: bridge_mappings = physnet1:br-eth1

# (BoolOpt) Use veths instead of patch ports to interconnect the integration
# bridge to physical networks. Support kernel without ovs patch port support
# so long as it is set to True.
# use_veth_interconnection = False

# (StrOpt) Which OVSDB backend to use, defaults to 'vsctl'
# vsctl - The backend based on executing ovs-vsctl
# native - The backend based on using native OVSDB
# ovsdb_interface = vsctl

# (StrOpt) The connection string for the native OVSDB backend
# To enable ovsdb-server to listen on port 6640:
#   ovs-vsctl set-manager ptcp:6640:127.0.0.1
# ovsdb_connection = tcp:127.0.0.1:6640

# (StrOpt) OpenFlow interface to use.
# 'ovs-ofctl' or 'native'.
# of_interface = ovs-ofctl
#
# (IPOpt)
# Address to listen on for OpenFlow connections.
# Used only for 'native' driver.
# of_listen_address = 127.0.0.1
#
# (IntOpt)
# Port to listen on for OpenFlow connections.
# Used only for 'native' driver.
# of_listen_port = 6633
#
# (IntOpt)
# Timeout in seconds to wait for the local switch connecting the controller.
# Used only for 'native' driver.
# of_connect_timeout=30
#
# (IntOpt)
# Timeout in seconds to wait for a single OpenFlow request.
# Used only for 'native' driver.
# of_request_timeout=10

# (StrOpt) ovs datapath to use.
# 'system' is the default value and corresponds to the kernel datapath.
# To enable the userspace datapath set this value to 'netdev'
# datapath_type = system

[agent]
# Log agent heartbeats from this OVS agent
# log_agent_heartbeats = False

# Agent's polling interval in seconds
# polling_interval = 2

# Minimize polling by monitoring ovsdb for interface changes
# minimize_polling = True

# When minimize_polling = True, the number of seconds to wait before
# respawning the ovsdb monitor after losing communication with it
# ovsdb_monitor_respawn_interval = 30

# (ListOpt) The types of tenant network tunnels supported by the agent.
# Setting this will enable tunneling support in the agent. This can be set to
# either 'gre' or 'vxlan'. If this is unset, it will default to [] and
# disable tunneling support in the agent.
# You can specify as many values here as your compute hosts supports.
#
# tunnel_types =
# Example: tunnel_types = gre
# Example: tunnel_types = vxlan
# Example: tunnel_types = vxlan, gre

# (IntOpt) The port number to utilize if tunnel_types includes 'vxlan'. By
# default, this will make use of the Open vSwitch default value of '4789' if
# not specified.
#
# vxlan_udp_port =
# Example: vxlan_udp_port = 8472

# (IntOpt) This is the MTU size of veth interfaces.
# Do not change unless you have a good reason to.
# The default MTU size of veth interfaces is 1500.
# This option has no effect if use_veth_interconnection is False
# veth_mtu =
# Example: veth_mtu = 1504

# (BoolOpt) Flag to enable l2-population extension. This option should only be
# used in conjunction with ml2 plugin and l2population mechanism driver. It'll
# enable plugin to populate remote ports macs and IPs (using fdb_add/remove
# RPC calbbacks instead of tunnel_sync/update) on OVS agents in order to
# optimize tunnel management.
#
# l2_population = False

# Enable local ARP responder. Requires OVS 2.1. This is only used by the l2
# population ML2 MechanismDriver.
#
# arp_responder = False

# Enable suppression of ARP responses that don't match an IP address that
# belongs to the port from which they originate.
# Note: This prevents the VMs attached to this agent from spoofing,
# it doesn't protect them from other devices which have the capability to spoof
# (e.g. bare metal or VMs attached to agents without this flag set to True).
# Requires a version of OVS that can match ARP headers.
#
# prevent_arp_spoofing = True

# (BoolOpt) Set or un-set the don't fragment (DF) bit on outgoing IP packet
# carrying GRE/VXLAN tunnel. The default value is True.
#
# dont_fragment = True

# (BoolOpt) Set to True on L2 agents to enable support
# for distributed virtual routing.
#
# enable_distributed_routing = False

# (IntOpt) Set new timeout in seconds for new rpc calls after agent receives
# SIGTERM. If value is set to 0, rpc timeout won't be changed"
#
# quitting_rpc_timeout = 10

# (ListOpt) Extensions list to use
# Example: extensions = qos
#
# extensions =

# (BoolOpt) Set or un-set the checksum on outgoing IP packet
# carrying GRE/VXLAN tunnel. The default value is False.
#
# tunnel_csum = False

# (StrOpt) agent_type to report.
# This config entry allows configuration of the neutron agent type reported
# by the default ovs l2 agent. This allows multiple ovs mechanism drivers
# to share a common ovs agent implementation. NOTE: this value will be
# removed in the mitaka cycle.
#
# agent_type = 'Open vSwitch agent'

[securitygroup]
# Firewall driver for realizing neutron security group function.
# firewall_driver = neutron.agent.firewall.NoopFirewallDriver
# Example: firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

# Controls if neutron security group is enabled or not.
# It should be false when you use nova security group.
# enable_security_group = True

#-----------------------------------------------------------------------------
# Sample Configurations.
#-----------------------------------------------------------------------------
#
# 1. With VLANs on eth1.
# [ovs]
# integration_bridge = br-int
# bridge_mappings = default:br-eth1
#
# 2. With GRE tunneling.
# [ovs]
# integration_bridge = br-int
# tunnel_bridge = br-tun
# local_ip = 10.0.0.3
#
# 3. With VXLAN tunneling.
# [ovs]
# integration_bridge = br-int
# tunnel_bridge = br-tun
# local_ip = 10.0.0.3
# [agent]
# tunnel_types = vxlan

RabbitMQ status (output of "rabbitmqctl status"):

Status of node 'rabbit@messaging-node-1' ...
[{pid,12858},
 {running_applications,
     [{rabbitmq_management,"RabbitMQ Management Console","3.5.6"},
      {rabbitmq_management_agent,"RabbitMQ Management Agent","3.5.6"},
      {rabbit,"RabbitMQ","3.5.6"},
      {os_mon,"CPO  CXC 138 46","2.4"},
      {amqp_client,"RabbitMQ AMQP Client","3.5.6"},
      {rabbitmq_web_dispatch,"RabbitMQ Web Dispatcher","3.5.6"},
      {webmachine,"webmachine","1.10.3-rmq3.5.6-gite9359c7"},
      {mochiweb,"MochiMedia Web Server","2.7.0-rmq3.5.6-git680dba8"},
      {mnesia,"MNESIA  CXC 138 12","4.13.1"},
      {xmerl,"XML parser","1.3.8"},
      {inets,"INETS  CXC 138 49","6.0.1"},
      {sasl,"SASL  CXC 138 11","2.6"},
      {stdlib,"ERTS  CXC 138 10","2.6"},
      {kernel,"ERTS  CXC 138 10","4.1"}]},
 {os,{unix,linux}},
 {erlang_version,
     "Erlang/OTP 18 [erts-7.1] [source] [64-bit] [smp:2:2] [async-threads:2] [kernel-poll:true]\n"},
 {memory,
     [{total,188876976},
      {connection_readers,886568},
      {connection_writers,140000},
      {connection_channels,1113136},
      {connection_other,1232528},
      {queue_procs,4218984},
      {queue_slave_procs,0},
      {plugins,544384},
      {other_proc,13738352},
      {mnesia,481992},
      {mgmt_db,1693360},
      {msg_index,100656},
      {other_ets,1258688},
      {binary,128806312},
      {code,20000577},
      {atom,711569},
      {other_system,13949870}]},
 {alarms,[]},
 {listeners,[{clustering,41055,"::"},{amqp,5673,"192.168.0.3"}]},
 {vm_memory_high_watermark,0.4},
 {vm_memory_limit,1824228966},
 {disk_free_limit,5000000},
 {disk_free,12253487104},
 {file_descriptors,
     [{total_limit,102300},
      {total_used,64},
      {sockets_limit,92068},
      {sockets_used,62}]},
 {processes,[{limit,1048576},{used,1152}]},
 {run_queue,0},
 {uptime,1635}]
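For what it's worth, the broker doesn't look memory-constrained: the figures above put total usage at roughly 10% of the vm_memory_high_watermark limit, and the alarms list is empty. A quick check (values copied from the status output):

```python
# Byte counts copied from the rabbitmqctl status output above.
total_memory = 188876976
vm_memory_limit = 1824228966  # derived from vm_memory_high_watermark 0.4

usage = total_memory / vm_memory_limit
print(f"{usage:.0%} of the memory watermark in use")
```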

The OVS agent log is too big to post here, so I'm sharing it on my website (direct link):

http://cmabreu.com.br/ovs-agent.log

Another "No valid host was found. There are not enough hosts available" question

Please, don't mark as duplicate. I've searched all other questions but none solved my prpoblem.

I have a Fuel / OpenStack environment. I can start instances with "admin_internal_net" network but can't do the same with "admin_floating_net".

I see the classic error "No valid host was found. There are not enough hosts available". The NOVA log sends me to the neutron log. The NEUTRON log shows me "neutron port blahblahblah not present in bridge br_int float IP "

I've tried "nova hypervisor-list" and got just "node-2.domein.tld" (all I have). "nova list" shows my instance in error state. My Fuel connection test is Ok. My Fuel Health Check CAN start instances with floating IP

What can I do?

Found explanation here https://ask.openstack.org/en/question/94660/port-not-present-in-bridge-br-int/

but since it is a Fuel automatic instalation, I don't know how to solve this.

UPDATE 1

neutron agent list:

+--------------------+-------------------+-------+-----------+---------------------------+
| agent_type         | host              | alive | admin_s_u | binary                    |
+--------------------+-------------------+-------+-----------+---------------------------+
| Open vSwitch agent | node-2.domain.tld | :-)   | True      | neutron-openvswitch-agent |
| DHCP agent         | node-1.domain.tld | :-)   | True      | neutron-dhcp-agent        |
| Open vSwitch agent | node-1.domain.tld | :-)   | True      | neutron-openvswitch-agent |
| L3 agent           | node-1.domain.tld | :-)   | True      | neutron-l3-agent          |
| Metadata agent     | node-1.domain.tld | :-)   | True      | neutron-metadata-agent    |
+--------------------+-------------------+-------+-----------+---------------------------+

ml2_conf.ini ( CONTROLLER ):

    [ml2]
    # (ListOpt) List of network type driver entrypoints to be loaded from
    # the neutron.ml2.type_drivers namespace.
    #
    # type_drivers = local,flat,vlan,gre,vxlan,geneve
    type_drivers = local,flat,vlan,gre,vxlan
    # Example: type_drivers = flat,vlan,gre,vxlan,geneve

    # (ListOpt) Ordered list of network_types to allocate as tenant
    # networks. The default value 'local' is useful for single-box testing
    # but provides no connectivity between hosts.
    #
    # tenant_network_types = local
    tenant_network_types = flat,vlan
    # Example: tenant_network_types = vlan,gre,vxlan,geneve


    # (ListOpt) Ordered list of networking mechanism driver entrypoints
    # to be loaded from the neutron.ml2.mechanism_drivers namespace.
    # mechanism_drivers =
    mechanism_drivers =openvswitch
    # Example: mechanism_drivers = openvswitch,mlnx
    # Example: mechanism_drivers = arista
    # Example: mechanism_drivers = openvswitch,cisco_nexus,logger
    # Example: mechanism_drivers = openvswitch,brocade
    # Example: mechanism_drivers = linuxbridge,brocade

    # (ListOpt) Ordered list of extension driver entrypoints
    # to be loaded from the neutron.ml2.extension_drivers namespace.
    # extension_drivers =
    extension_drivers =port_security
    # Example: extension_drivers = anewextensiondriver

    # =========== items for MTU selection and advertisement =============
    # (IntOpt) Path MTU.  The maximum permissible size of an unfragmented
    # packet travelling from and to addresses where encapsulated Neutron
    # traffic is sent.  Drivers calculate maximum viable MTU for
    # validating tenant requests based on this value (typically,
    # path_mtu - max encap header size).  If <=0, the path MTU is
    # indeterminate and no calculation takes place.
    # path_mtu = 0
    path_mtu = 1500

    # (IntOpt) Segment MTU.  The maximum permissible size of an
    # unfragmented packet travelling a L2 network segment.  If <=0,
    # the segment MTU is indeterminate and no calculation takes place.
    # segment_mtu = 0

    # (ListOpt) Physical network MTUs.  List of mappings of physical
    # network to MTU value.  The format of the mapping is
    # <physnet>:<mtu val>.  This mapping allows specifying a
    # physical network MTU value that differs from the default
    # segment_mtu value.
    # physical_network_mtus =
    physical_network_mtus =physnet2:1500,physnet1:1500
    # Example: physical_network_mtus = physnet1:1550, physnet2:1500
    # ======== end of items for MTU selection and advertisement =========

    # (StrOpt) Default network type for external networks when no provider
    # attributes are specified. By default it is None, which means that if
    # provider attributes are not specified while creating external networks
    # then they will have the same type as tenant networks.
    # Allowed values for external_network_type config option depend on the
    # network type values configured in type_drivers config option.
    # external_network_type =
    # Example: external_network_type = local

    [ml2_type_flat]
    # (ListOpt) List of physical_network names with which flat networks
    # can be created. Use * to allow flat networks with arbitrary
    # physical_network names.
    #
    # flat_networks =
    flat_networks =*
    # Example:flat_networks = physnet1,physnet2
    # Example:flat_networks = *

    [ml2_type_vlan]
    # (ListOpt) List of <physical_network>[:<vlan_min>:<vlan_max>] tuples
    # specifying physical_network names usable for VLAN provider and
    # tenant networks, as well as ranges of VLAN tags on each
    # physical_network available for allocation as tenant networks.
    #
    # network_vlan_ranges =
    network_vlan_ranges =physnet2:1000:1030,physnet1
    # Example: network_vlan_ranges = physnet1:1000:2999,physnet2

    [ml2_type_gre]
    # (ListOpt) Comma-separated list of <tun_min>:<tun_max> tuples enumerating ranges of GRE tunnel IDs that are available for tenant network allocation
    # tunnel_id_ranges =
    tunnel_id_ranges =

    [ml2_type_vxlan]
    # (ListOpt) Comma-separated list of <vni_min>:<vni_max> tuples enumerating
    # ranges of VXLAN VNI IDs that are available for tenant network allocation.
    #
    # vni_ranges =
    vni_ranges =

    # (StrOpt) Multicast group for the VXLAN interface. When configured, will
    # enable sending all broadcast traffic to this multicast group. When left
    # unconfigured, will disable multicast VXLAN mode.
    #
    # vxlan_group =
    vxlan_group =224.0.0.1
    # Example: vxlan_group = 239.1.1.1

    [ml2_type_geneve]
    # (ListOpt) Comma-separated list of <vni_min>:<vni_max> tuples enumerating
    # ranges of Geneve VNI IDs that are available for tenant network allocation.
    #
    # vni_ranges =

    # (IntOpt) Geneve encapsulation header size is dynamic, this
    # value is used to calculate the maximum MTU for the driver.
    # this is the sum of the sizes of the outer ETH+IP+UDP+GENEVE
    # header sizes.
    # The default size for this field is 50, which is the size of the
    # Geneve header without any additional option headers
    #
    # max_header_size =
    # Example: max_header_size = 50 (Geneve headers with no additional options)

    [securitygroup]
    # Controls if neutron security group is enabled or not.
    # It should be false when you use nova security group.
    # enable_security_group = True
    enable_security_group = True

    # Use ipset to speed-up the iptables security groups. Enabling ipset support
    # requires that ipset is installed on L2 agent node.
    # enable_ipset = True
    firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

    [agent]
    enable_distributed_routing=False
    prevent_arp_spoofing=True
    arp_responder=False
    polling_interval=2
    drop_flows_on_start=False
    l2_population=False

    [ovs]
    enable_tunneling=False
    integration_bridge=br-int
    bridge_mappings=physnet2:br-prv,physnet1:br-floating

ml2_conf.ini ( COMPUTE ):

[ml2]
# (ListOpt) List of network type driver entrypoints to be loaded from
# the neutron.ml2.type_drivers namespace.
#
# type_drivers = local,flat,vlan,gre,vxlan,geneve
# Example: type_drivers = flat,vlan,gre,vxlan,geneve

# (ListOpt) Ordered list of network_types to allocate as tenant
# networks. The default value 'local' is useful for single-box testing
# but provides no connectivity between hosts.
#
# tenant_network_types = local
# Example: tenant_network_types = vlan,gre,vxlan,geneve


# (ListOpt) Ordered list of networking mechanism driver entrypoints
# to be loaded from the neutron.ml2.mechanism_drivers namespace.
# mechanism_drivers =
# Example: mechanism_drivers = openvswitch,mlnx
# Example: mechanism_drivers = arista
# Example: mechanism_drivers = openvswitch,cisco_nexus,logger
# Example: mechanism_drivers = openvswitch,brocade
# Example: mechanism_drivers = linuxbridge,brocade

# (ListOpt) Ordered list of extension driver entrypoints
# to be loaded from the neutron.ml2.extension_drivers namespace.
# extension_drivers =
# Example: extension_drivers = anewextensiondriver

# =========== items for MTU selection and advertisement =============
# (IntOpt) Path MTU.  The maximum permissible size of an unfragmented
# packet travelling from and to addresses where encapsulated Neutron
# traffic is sent.  Drivers calculate maximum viable MTU for
# validating tenant requests based on this value (typically,
# path_mtu - max encap header size).  If <=0, the path MTU is
# indeterminate and no calculation takes place.
# path_mtu = 0

# (IntOpt) Segment MTU.  The maximum permissible size of an
# unfragmented packet travelling a L2 network segment.  If <=0,
# the segment MTU is indeterminate and no calculation takes place.
# segment_mtu = 0

# (ListOpt) Physical network MTUs.  List of mappings of physical
# network to MTU value.  The format of the mapping is
# <physnet>:<mtu val>.  This mapping allows specifying a
# physical network MTU value that differs from the default
# segment_mtu value.
# physical_network_mtus =
# Example: physical_network_mtus = physnet1:1550, physnet2:1500
# ======== end of items for MTU selection and advertisement =========

# (StrOpt) Default network type for external networks when no provider
# attributes are specified. By default it is None, which means that if
# provider attributes are not specified while creating external networks
# then they will have the same type as tenant networks.
# Allowed values for external_network_type config option depend on the
# network type values configured in type_drivers config option.
# external_network_type =
# Example: external_network_type = local

[ml2_type_flat]
# (ListOpt) List of physical_network names with which flat networks
# can be created. Use * to allow flat networks with arbitrary
# physical_network names.
#
# flat_networks =
# Example:flat_networks = physnet1,physnet2
# Example:flat_networks = *

[ml2_type_vlan]
# (ListOpt) List of <physical_network>[:<vlan_min>:<vlan_max>] tuples
# specifying physical_network names usable for VLAN provider and
# tenant networks, as well as ranges of VLAN tags on each
# physical_network available for allocation as tenant networks.
#
# network_vlan_ranges =
# Example: network_vlan_ranges = physnet1:1000:2999,physnet2

[ml2_type_gre]
# (ListOpt) Comma-separated list of <tun_min>:<tun_max> tuples enumerating ranges of GRE tunnel IDs that are available for tenant network allocation
# tunnel_id_ranges =

[ml2_type_vxlan]
# (ListOpt) Comma-separated list of <vni_min>:<vni_max> tuples enumerating
# ranges of VXLAN VNI IDs that are available for tenant network allocation.
#
# vni_ranges =

# (StrOpt) Multicast group for the VXLAN interface. When configured, will
# enable sending all broadcast traffic to this multicast group. When left
# unconfigured, will disable multicast VXLAN mode.
#
# vxlan_group =
# Example: vxlan_group = 239.1.1.1

[ml2_type_geneve]
# (ListOpt) Comma-separated list of <vni_min>:<vni_max> tuples enumerating
# ranges of Geneve VNI IDs that are available for tenant network allocation.
#
# vni_ranges =

# (IntOpt) Geneve encapsulation header size is dynamic, this
# value is used to calculate the maximum MTU for the driver.
# this is the sum of the sizes of the outer ETH+IP+UDP+GENEVE
# header sizes.
# The default size for this field is 50, which is the size of the
# Geneve header without any additional option headers
#
# max_header_size =
# Example: max_header_size = 50 (Geneve headers with no additional options)

[securitygroup]
# Controls if neutron security group is enabled or not.
# It should be false when you use nova security group.
# enable_security_group = True

# Use ipset to speed-up the iptables security groups. Enabling ipset support
# requires that ipset is installed on L2 agent node.
# enable_ipset = True
firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

[agent]
enable_distributed_routing=False
prevent_arp_spoofing=True
arp_responder=False
polling_interval=2
drop_flows_on_start=False
l2_population=False

[ovs]
enable_tunneling=False
integration_bridge=br-int
bridge_mappings=physnet2:br-prv

Both openvswitch_agent.ini files (compute and controller) are identical, with no active entries (everything is commented out):

[ovs]
# Do not change this parameter unless you have a good reason to.
# This is the name of the OVS integration bridge. There is one per hypervisor.
# The integration bridge acts as a virtual "patch bay". All VM VIFs are
# attached to this bridge and then "patched" according to their network
# connectivity.
#
# integration_bridge = br-int

# Only used for the agent if tunnel_id_ranges is not empty for
# the server.  In most cases, the default value should be fine.
#
# tunnel_bridge = br-tun

# Peer patch port in integration bridge for tunnel bridge
# int_peer_patch_port = patch-tun

# Peer patch port in tunnel bridge for integration bridge
# tun_peer_patch_port = patch-int

# Uncomment this line for the agent if tunnel_id_ranges is not
# empty for the server. Set local-ip to be the local IP address of
# this hypervisor.
#
# local_ip =

# (ListOpt) Comma-separated list of <physical_network>:<bridge> tuples
# mapping physical network names to the agent's node-specific OVS
# bridge names to be used for flat and VLAN networks. The length of
# bridge names should be no more than 11. Each bridge must
# exist, and should have a physical network interface configured as a
# port. All physical networks configured on the server should have
# mappings to appropriate bridges on each agent.
#
# Note: If you remove a bridge from this mapping, make sure to disconnect it
# from the integration bridge as it won't be managed by the agent anymore.
#
# bridge_mappings =
# Example: bridge_mappings = physnet1:br-eth1

# (BoolOpt) Use veths instead of patch ports to interconnect the integration
# bridge to physical networks. Support kernel without ovs patch port support
# so long as it is set to True.
# use_veth_interconnection = False

# (StrOpt) Which OVSDB backend to use, defaults to 'vsctl'
# vsctl - The backend based on executing ovs-vsctl
# native - The backend based on using native OVSDB
# ovsdb_interface = vsctl

# (StrOpt) The connection string for the native OVSDB backend
# To enable ovsdb-server to listen on port 6640:
#   ovs-vsctl set-manager ptcp:6640:127.0.0.1
# ovsdb_connection = tcp:127.0.0.1:6640

# (StrOpt) OpenFlow interface to use.
# 'ovs-ofctl' or 'native'.
# of_interface = ovs-ofctl
#
# (IPOpt)
# Address to listen on for OpenFlow connections.
# Used only for 'native' driver.
# of_listen_address = 127.0.0.1
#
# (IntOpt)
# Port to listen on for OpenFlow connections.
# Used only for 'native' driver.
# of_listen_port = 6633
#
# (IntOpt)
# Timeout in seconds to wait for the local switch connecting the controller.
# Used only for 'native' driver.
# of_connect_timeout=30
#
# (IntOpt)
# Timeout in seconds to wait for a single OpenFlow request.
# Used only for 'native' driver.
# of_request_timeout=10

# (StrOpt) ovs datapath to use.
# 'system' is the default value and corresponds to the kernel datapath.
# To enable the userspace datapath set this value to 'netdev'
# datapath_type = system

[agent]
# Log agent heartbeats from this OVS agent
# log_agent_heartbeats = False

# Agent's polling interval in seconds
# polling_interval = 2

# Minimize polling by monitoring ovsdb for interface changes
# minimize_polling = True

# When minimize_polling = True, the number of seconds to wait before
# respawning the ovsdb monitor after losing communication with it
# ovsdb_monitor_respawn_interval = 30

# (ListOpt) The types of tenant network tunnels supported by the agent.
# Setting this will enable tunneling support in the agent. This can be set to
# either 'gre' or 'vxlan'. If this is unset, it will default to [] and
# disable tunneling support in the agent.
# You can specify as many values here as your compute hosts supports.
#
# tunnel_types =
# Example: tunnel_types = gre
# Example: tunnel_types = vxlan
# Example: tunnel_types = vxlan, gre

# (IntOpt) The port number to utilize if tunnel_types includes 'vxlan'. By
# default, this will make use of the Open vSwitch default value of '4789' if
# not specified.
#
# vxlan_udp_port =
# Example: vxlan_udp_port = 8472

# (IntOpt) This is the MTU size of veth interfaces.
# Do not change unless you have a good reason to.
# The default MTU size of veth interfaces is 1500.
# This option has no effect if use_veth_interconnection is False
# veth_mtu =
# Example: veth_mtu = 1504

# (BoolOpt) Flag to enable l2-population extension. This option should only be
# used in conjunction with ml2 plugin and l2population mechanism driver. It'll
# enable plugin to populate remote ports macs and IPs (using fdb_add/remove
# RPC callbacks instead of tunnel_sync/update) on OVS agents in order to
# optimize tunnel management.
#
# l2_population = False

# Enable local ARP responder. Requires OVS 2.1. This is only used by the l2
# population ML2 MechanismDriver.
#
# arp_responder = False

# Enable suppression of ARP responses that don't match an IP address that
# belongs to the port from which they originate.
# Note: This prevents the VMs attached to this agent from spoofing,
# it doesn't protect them from other devices which have the capability to spoof
# (e.g. bare metal or VMs attached to agents without this flag set to True).
# Requires a version of OVS that can match ARP headers.
#
# prevent_arp_spoofing = True

# (BoolOpt) Set or un-set the don't fragment (DF) bit on outgoing IP packet
# carrying GRE/VXLAN tunnel. The default value is True.
#
# dont_fragment = True

# (BoolOpt) Set to True on L2 agents to enable support
# for distributed virtual routing.
#
# enable_distributed_routing = False

# (IntOpt) Set new timeout in seconds for new rpc calls after agent receives
# SIGTERM. If value is set to 0, rpc timeout won't be changed"
#
# quitting_rpc_timeout = 10

# (ListOpt) Extensions list to use
# Example: extensions = qos
#
# extensions =

# (BoolOpt) Set or un-set the checksum on outgoing IP packet
# carrying GRE/VXLAN tunnel. The default value is False.
#
# tunnel_csum = False

# (StrOpt) agent_type to report.
# This config entry allows configuration of the neutron agent type reported
# by the default ovs l2 agent. This allows multiple ovs mechanism drivers
# to share a common ovs agent implementation. NOTE: this value will be
# removed in the mitaka cycle.
#
# agent_type = 'Open vSwitch agent'

[securitygroup]
# Firewall driver for realizing neutron security group function.
# firewall_driver = neutron.agent.firewall.NoopFirewallDriver
# Example: firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

# Controls if neutron security group is enabled or not.
# It should be false when you use nova security group.
# enable_security_group = True

#-----------------------------------------------------------------------------
# Sample Configurations.
#-----------------------------------------------------------------------------
#
# 1. With VLANs on eth1.
# [ovs]
# integration_bridge = br-int
# bridge_mappings = default:br-eth1
#
# 2. With GRE tunneling.
# [ovs]
# integration_bridge = br-int
# tunnel_bridge = br-tun
# local_ip = 10.0.0.3
#
# 3. With VXLAN tunneling.
# [ovs]
# integration_bridge = br-int
# tunnel_bridge = br-tun
# local_ip = 10.0.0.3
# [agent]
# tunnel_types = vxlan

RabbitMQ status:

Status of node 'rabbit@messaging-node-1' ...
[{pid,12858},
 {running_applications,
     [{rabbitmq_management,"RabbitMQ Management Console","3.5.6"},
      {rabbitmq_management_agent,"RabbitMQ Management Agent","3.5.6"},
      {rabbit,"RabbitMQ","3.5.6"},
      {os_mon,"CPO  CXC 138 46","2.4"},
      {amqp_client,"RabbitMQ AMQP Client","3.5.6"},
      {rabbitmq_web_dispatch,"RabbitMQ Web Dispatcher","3.5.6"},
      {webmachine,"webmachine","1.10.3-rmq3.5.6-gite9359c7"},
      {mochiweb,"MochiMedia Web Server","2.7.0-rmq3.5.6-git680dba8"},
      {mnesia,"MNESIA  CXC 138 12","4.13.1"},
      {xmerl,"XML parser","1.3.8"},
      {inets,"INETS  CXC 138 49","6.0.1"},
      {sasl,"SASL  CXC 138 11","2.6"},
      {stdlib,"ERTS  CXC 138 10","2.6"},
      {kernel,"ERTS  CXC 138 10","4.1"}]},
 {os,{unix,linux}},
 {erlang_version,
     "Erlang/OTP 18 [erts-7.1] [source] [64-bit] [smp:2:2] [async-threads:2] [kernel-poll:true]\n"},
 {memory,
     [{total,188876976},
      {connection_readers,886568},
      {connection_writers,140000},
      {connection_channels,1113136},
      {connection_other,1232528},
      {queue_procs,4218984},
      {queue_slave_procs,0},
      {plugins,544384},
      {other_proc,13738352},
      {mnesia,481992},
      {mgmt_db,1693360},
      {msg_index,100656},
      {other_ets,1258688},
      {binary,128806312},
      {code,20000577},
      {atom,711569},
      {other_system,13949870}]},
 {alarms,[]},
 {listeners,[{clustering,41055,"::"},{amqp,5673,"192.168.0.3"}]},
 {vm_memory_high_watermark,0.4},
 {vm_memory_limit,1824228966},
 {disk_free_limit,5000000},
 {disk_free,12253487104},
 {file_descriptors,
     [{total_limit,102300},
      {total_used,64},
      {sockets_limit,92068},
      {sockets_used,62}]},
 {processes,[{limit,1048576},{used,1152}]},
 {run_queue,0},
 {uptime,1635}]

The OVS agent log is too big to post here, so I'm sharing it on my website (direct link):

http://cmabreu.com.br/ovs-agent.log
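If the full log is too long to read through, the relevant lines can be filtered first. The sample lines below are hypothetical placeholders just to show the filter; run the grep against the real ovs-agent.log:

```shell
# Hypothetical sample lines standing in for the real ovs-agent.log.
cat > /tmp/ovs-agent.sample.log <<'EOF'
2016-09-01 10:00:01.000 INFO neutron.plugins.ml2.drivers.openvswitch.agent rpc loop iteration completed
2016-09-01 10:00:02.000 ERROR neutron.plugins.ml2.drivers.openvswitch.agent port abc123 not present in bridge br-int
EOF

# Keep only errors and warnings.
grep -E 'ERROR|WARNING' /tmp/ovs-agent.sample.log
```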

And I also see this error on the Fuel master:

(screenshot: error shown on the Fuel master)