
net_uuid None not managed by VLAN manager

asked 2018-03-01 10:16:47 -0500 by jonH

updated 2018-03-05 12:13:30 -0500

We are trying to set up an instance of OpenStack, but after doing the install with the help of packstack --allinone on our CentOS controller and cloudbase.it on our Windows 2016 Hyper-V hypervisor, I am unable to provision an instance. The instance seems to spawn, but then fails with "net_uuid None not managed by VLAN manager". Here are the relevant configs and logs. I'm sure I've made a mistake in my configs; please help me find it.

Thanks in advance

Controller

  • CentOS Linux release 7.4.1708 (Core)
  • openstack --version: openstack 3.12.0
  • nova --version: 9.1.1
  • neutron --version: 6.5.0

Hypervisor

  • Windows Version: Microsoft Windows Server 2016 Datacenter
  • Hyper-V Version: 10.0.14393.0
  • Nova Version: HyperVNovaCompute_Pike_16_0_0_1
  • neutron.exe --version: 6.5.0
  • nova.exe --version: 9.1.0
  • Cloudbase Open vSwitch Extension: 2.7.4

controller: /etc/neutron/neutron.conf:

[DEFAULT]
bind_host=0.0.0.0
auth_strategy=keystone
core_plugin=neutron.plugins.ml2.plugin.Ml2Plugin
service_plugins=router,metering
allow_overlapping_ips=True
notify_nova_on_port_status_changes=True
notify_nova_on_port_data_changes=True
api_workers=4
rpc_workers=4
dhcp_agents_per_network = 2
router_scheduler_driver=neutron.scheduler.l3_agent_scheduler.ChanceScheduler
l3_ha=False
max_l3_agents_per_router=3
debug=False
log_dir=/var/log/neutron
transport_url=rabbit://guest:guest@{CONTROLLER_IP}:5672/
control_exchange=neutron
[agent]
root_helper=sudo neutron-rootwrap /etc/neutron/rootwrap.conf
[cors]
[database]
min_pool_size = 5
max_pool_size = 50
max_overflow = 50
connection=mysql+pymysql://neutron:3fbab8c2642c435b@{CONTROLLER_IP}/neutron
[keystone_authtoken]
auth_uri=http://{CONTROLLER_IP}:5000/
auth_type=password
auth_url=http://{CONTROLLER_IP}:35357
username=neutron
password=f3d2459d3f03496e
user_domain_name=Default
project_name=services
project_domain_name=Default
[matchmaker_redis]
[nova]
region_name=RegionOne
auth_url=http://{CONTROLLER_IP}:35357
auth_type=password
password=9cdcd2dcc4ee4286
project_domain_id=default
project_domain_name=Default
project_name=services
tenant_name=services
user_domain_id=default
user_domain_name=Default
username=nova
[oslo_concurrency]
lock_path=$state_path/lock
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
ssl=False
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
policy_file=/etc/neutron/policy.json
[quotas]
[ssl]
[service_providers]

controller: /etc/neutron/plugins/ml2/ml2_conf.ini

[DEFAULT]
[l2pop]
[ml2]
type_drivers=vxlan,flat
tenant_network_types=vxlan
mechanism_drivers=openvswitch
extension_drivers=port_security
path_mtu=0
[ml2_type_flat]
flat_networks=*
[ml2_type_geneve]
[ml2_type_gre]
[ml2_type_vlan]
network_vlan_ranges = physnet1:1000:2999
[ml2_type_vxlan]
vni_ranges=10:100
vxlan_group=224.0.0.1
[securitygroup]
firewall_driver=neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group=True

controller: /etc/nova/nova.conf

[DEFAULT]
instance_usage_audit_period=hour
rootwrap_config=/etc/nova/rootwrap.conf
compute_driver=libvirt.LibvirtDriver
allow_resize_to_same_host=True
vif_plugging_is_fatal=True
vif_plugging_timeout=300
force_raw_images=True
reserved_host_memory_mb=512
cpu_allocation_ratio=16.0
ram_allocation_ratio=1.5
instance_usage_audit=True
block_device_allocate_retries=900
heal_instance_info_cache_interval=60
force_snat_range=0.0.0.0/0
metadata_host={CONTROLLER_IP}
dhcp_domain=novalocal
firewall_driver=nova.virt.firewall.NoopFirewallDriver
state_path=/var/lib/nova
report_interval=10
service_down_time=60
enabled_apis=osapi_compute,metadata
osapi_compute_listen=0.0.0.0
osapi_compute_listen_port=8774
osapi_compute_workers=4
metadata_listen=0.0.0.0
metadata_listen_port=8775
metadata_workers=4
debug=true
log_dir=/var/log/nova
transport_url=rabbit://guest:guest@{CONTROLLER_IP}:5672/
image_service=nova.image.glance.GlanceImageService
osapi_volume_listen=0.0.0.0
volume_api_class=nova.volume.cinder.API
[api]
auth_strategy=keystone
use_forwarded_for=False
fping_path=/usr/sbin/fping
[api_database]
connection=mysql+pymysql://nova_api:7cec2b9eae684374@{CONTROLLER_IP}/nova_api
[barbican]
[cache]
[cells]
[cinder]
catalog_info=volumev2:cinderv2:publicURL
[compute]
[conductor]
workers=4
[console]
[consoleauth]
[cors]
[crypto]
[database]
connection=mysql+pymysql://nova:7cec2b9eae684374@{CONTROLLER_IP}/nova
[ephemeral_storage_encryption]
[filter_scheduler]
host_subset_size=1
max_io_ops_per_host=8
max_instances_per_host=50
available_filters=nova.scheduler.filters ...

Comments

Perhaps ML2 is confused by the line network_vlan_ranges = physnet1:1000:2999 in its config file. After all, there are no VLANs.

Bernd Bausch ( 2018-03-06 03:47:35 -0500 )

And perhaps the neutron server log has more info, too.

Note that the compute log error is before the openvswitch agent message about the missing port - are they really related?

Bernd Bausch ( 2018-03-06 03:53:34 -0500 )

Thanks for the idea. I tried removing the "network_vlan_ranges" line from the ML2 config, but it didn't change anything.

jonH ( 2018-03-06 10:05:30 -0500 )

I looked in the Neutron logs; there is nothing interesting in server.log, but in openvswitch-agent.log I see:

{truncated} Port f1f234eb-25a4-4f01-946c-69e7e60ae77d not present in bridge br-int
{truncated} port_unbound(): net_uuid None not managed by VLAN manager

jonH ( 2018-03-06 10:07:11 -0500 )

Was this ever resolved? I am having the same issue.

johnwc ( 2019-02-07 21:42:13 -0500 )

2 answers


answered 2018-03-05 11:06:31 -0500 by SamYaple

Is the network you are attaching to vlan, vxlan, gre, or flat?

If possible, please provide the output of openstack network show <network>.
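If it helps, the client can show just the provider attributes that determine how the port gets bound (the -c flags limit the output to the named columns; substitute your own network ID):

```shell
# Show only the fields relevant to port binding
openstack network show <network> \
    -c provider:network_type \
    -c provider:physical_network \
    -c provider:segmentation_id
```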


Comments

Thanks for the reply. The network type is vxlan.

openstack network show 0d3820ca-3882-4ea3-b2fc-824725f3f873 : https://pastebin.com/raw/KacvsyMK

jonH ( 2018-03-05 12:12:22 -0500 )

What version of OpenStack are you running?

The main issue appears to be Port 663ed468-78ab-4bec-9371-e5f2d98b2755 not present in bridge br-int

I would bet there are some additional errors somewhere about why this broke (perhaps openvswitch logs or even syslog/dmesg).

SamYaple ( 2018-03-05 13:25:36 -0500 )

Controller

openstack --version: openstack 3.12.0

How do I add it to the bridge? Don't forget this is Windows (well, the hypervisor side is Windows; the controller is Linux). I don't see anything helpful in the event log.

jonH ( 2018-03-05 14:22:53 -0500 )
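For what it's worth, any node running Open vSwitch (including the Cloudbase build on Windows, which ships the same CLI) can show whether the port ever made it into br-int using the standard OVS commands; these are generic ovs-vsctl invocations, not specific to this deployment:

```shell
# List every port currently attached to the integration bridge
ovs-vsctl list-ports br-int

# Full bridge/port/interface layout for the whole switch
ovs-vsctl show
```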

That is the version of the openstack CLI tool, not the version of the OpenStack services. Are you deploying Mitaka/Pike/Queens?

SamYaple ( 2018-03-06 09:16:52 -0500 )

Sorry... The versions of all the components are at the top of the post, but it's Pike:

  • nova --version: 9.1.1
  • neutron --version: 6.5.0
  • Cloudbase Open vSwitch Extension: 2.7.4

jonH ( 2018-03-06 09:51:49 -0500 )

answered 2019-05-30 04:05:43 -0500

Hi,

The OP may have already solved this, but here are some notes.

The error being pointed out ("net_uuid None not managed by VLAN manager") is a bit misleading: it occurs when Neutron tries to unbind the port as part of cleaning up the failed instance. If the port never actually got created, such errors may show up, but the main goal should be to figure out why the port could not be bound in the first place.

In this case, you should check the Neutron controller logs; the controller is refusing to bind the port to this compute host. That can happen for multiple reasons, such as:

  • When using the Neutron Hyper-V Agent (different from the OVS one - it uses Hyper-V switches directly), you need networking-hyperv installed on the Neutron controller side, as it provides the "hyperv" ML2 mechanism driver, which would also have to be enabled in the ML2 config. That's not the case here, as you're using the OVS agent.
  • Invalid bridge mappings - not the case here.
  • An unavailable Neutron agent.
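To illustrate checking the controller side: ML2 records binding refusals in the Neutron server log with a "Failed to bind port" message. The snippet below greps a fabricated sample log (the sample path and log lines are made up for demonstration; on a real controller you would grep /var/log/neutron/server.log, and the port ID here is the one from this thread):

```shell
# Fabricated sample of a neutron server log, for demonstration only
cat > /tmp/sample-neutron-server.log <<'EOF'
2018-03-05 12:00:01.123 ERROR neutron.plugins.ml2.managers [-] Failed to bind port f1f234eb-25a4-4f01-946c-69e7e60ae77d on host hyperv-01
2018-03-05 12:00:02.456 INFO neutron.wsgi [-] "GET /v2.0/ports HTTP/1.1" status: 200
EOF

# Surface only the binding refusals
grep 'Failed to bind port' /tmp/sample-neutron-server.log
```

Whether the compute node's agent is seen as alive by the controller can be checked with openstack network agent list.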

Regards, Lucian Petrut



Stats

Seen: 1,838 times

Last updated: May 30 '19