Instances fail to start: failed to allocate network

asked 2018-11-29 03:47:41 -0500 by apdibbo

Hi,

All our instance creations are currently failing with the following error:

Message: Exceeded maximum number of retries. Exhausted all hosts available for retrying build failures for instance 23e172ea-a13f-4bb4-9ae4-5f694644d9c1.
Code: 500
Details: File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 580, in build_instances raise exception.MaxRetriesExceeded(reason=msg)
Created: Nov. 29, 2018, 9:12 a.m.

We are running Queens on CentOS 7 with KVM and Open vSwitch. We have both VLAN and VXLAN tenant networks, and neither is working.
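
For reference, the fault recorded against one of the failed instances can be pulled with the standard client (UUID taken from the error above); the "failed to allocate network" detail usually shows up there:

openstack server show 23e172ea-a13f-4bb4-9ae4-5f694644d9c1 -c fault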

Config files are as follows.

neutron.conf (controller):

[DEFAULT]
allow_automatic_l3agent_failover=True
allow_overlapping_ips=True
auth_strategy=keystone
core_plugin=ml2
debug=True
dhcp_agents_per_network=5
dhcp_domain=
dvr_base_mac=FA:CA:FF:02:00:00
memcached_servers=keystone1.nubes.rl.ac.uk:11211,keystone2.nubes.rl.ac.uk:11211,keystone3.nubes.rl.ac.uk:11211
notification_driver=messagingv2
notify_nova_on_port_data_changes=True
notify_nova_on_port_status_changes=True
nova_url=https://openstack.nubes.rl.ac.uk:8774/v2
router_distributed=True
rpc_backend=rabbit
rpc_response_timeout=60
service_plugins=router
transport_url=rabbit://openstack:<rabbitpass>@rabbit1.nubes.rl.ac.uk:5672,openstack:<rabbitpass>@rabbit2.nubes.rl.ac.uk:5672,openstack:<rabbitpass>@rabbit3.nubes.rl.ac.uk:5672/
verbose=True

[cache]
backend=oslo_cache.memcache_pool
enabled=True
memcache_servers=keystone1.nubes.rl.ac.uk:11211,keystone2.nubes.rl.ac.uk:11211,keystone3.nubes.rl.ac.uk:11211

[database]
connection=mysql+pymysql://neutron:<neutrondbpass>@OpenStack.nubes.rl.ac.uk:3306/neutron
max_overflow=200
max_pool_size=200

[keystone_authtoken]
auth_plugin=password
auth_type=password
auth_uri=https://openstack.nubes.rl.ac.uk:5000
auth_url=https://openstack.nubes.rl.ac.uk:35357
password=<neutronpassword>
project_domain_name=default
project_name=service
user_domain_name=default
username=neutron

[nova]
auth_plugin=password
auth_type=password
auth_url=https://openstack.nubes.rl.ac.uk:35357
password=<novapassword>
project_domain_name=default
project_name=service
region_name=RegionOne
user_domain_name=default
username=nova

[oslo_concurrency]
lock_path=/var/lib/neutron/tmp
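
The ml2 plugin configuration is not shown here; purely as a sketch, the sections that matter for vlan/vxlan tenant networks would look something like the following (placeholder values, not a copy of our actual ml2_conf.ini):

[ml2]
type_drivers=flat,vlan,vxlan
tenant_network_types=vxlan,vlan
mechanism_drivers=openvswitch,l2population
extension_drivers=port_security

[ml2_type_vlan]
network_vlan_ranges=provider:<vlan_min>:<vlan_max>

[ml2_type_vxlan]
vni_ranges=<vni_min>:<vni_max>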

neutron.conf (compute):

[DEFAULT]
auth_strategy=keystone
debug=True
rpc_backend=rabbit
rpc_response_timeout=600
transport_url=rabbit://openstack:<rabbitpass>@rabbit1.nubes.rl.ac.uk:5672,openstack:<rabbitpass>@rabbit2.nubes.rl.ac.uk:5672,openstack:<rabbitpass>@rabbit3.nubes.rl.ac.uk:5672/
verbose=True

[keystone_authtoken]
auth_plugin=password
auth_type=password
auth_uri=https://openstack.nubes.rl.ac.uk:5000
auth_url=https://openstack.nubes.rl.ac.uk:35357
password=<neutronpassword>
project_domain_name=default
project_name=service
user_domain_name=default
username=neutron

[oslo_concurrency]
lock_path=/var/lib/neutron/tmp

openvswitch_agent.ini (compute):

[agent]
arp_responder=True
enable_distributed_routing=True
l2_population=True
prevent_arp_spoofing=False
tunnel_types=vxlan
debug=True

[ovs]
bridge_mappings=provider:br0
local_ip=130.246.208.89

[securitygroup]
enable_security_group=True
firewall_driver=iptables_hybrid
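
The bridge_mappings entry assumes the br0 OVS bridge already exists on the compute node; that, along with the vxlan tunnel ports on br-tun (the tunnel bridge the agent creates), can be confirmed with:

ovs-vsctl show
ovs-vsctl list-ports br-tun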

Log files are as follows:

nova-compute.log (compute):

2018-11-29 09:11:39.252 4860 DEBUG nova.scheduler.client.report [req-a81f40e4-8a44-452e-82c0-99d1d11dab5d - - - - -] Instance 23e172ea-a13f-4bb4-9ae4-5f694644d9c1 allocations are unchanged: VCPU=1,MEMORY_MB=1024,DISK_GB=10 _allocate_for_instance /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:1419
2018-11-29 09:11:39.261 4860 DEBUG nova.compute.resource_tracker [req-a81f40e4-8a44-452e-82c0-99d1d11dab5d - - - - -] Auto-correcting allocations. _update_usage_from_instance /usr/lib/python2.7/site-packages/nova/compute/resource_tracker.py:1121
2018-11-29 09:11:39.340 4860 DEBUG nova.scheduler.client.report [req-a81f40e4-8a44-452e-82c0-99d1d11dab5d - - - - -] Instance 373ecd6a-8b10-4486-b3af-b6a4953dfb1c allocations are unchanged: VCPU=4,MEMORY_MB=16384,DISK_GB=100 _allocate_for_instance /usr/lib/python2.7/site-packages/nova/scheduler/client/report.py:1419
2018-11-29 09:11:39.536 ...
(more)
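
The nova-compute log above only shows the resource tracker; the actual cause of a "failed to allocate network" build failure usually surfaces in the neutron server and nova-conductor logs on the controller, e.g. (assuming the default CentOS/RDO log locations):

grep -i "failed to bind port" /var/log/neutron/server.log
grep 23e172ea-a13f-4bb4-9ae4-5f694644d9c1 /var/log/nova/nova-conductor.log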

1 answer


answered 2018-12-03 15:39:32 -0500 by imrankhan-099

Could you please check and share the output of:

openstack hypervisor list

Check whether all of your compute nodes appear in the hypervisor list.
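
Since the error points at network allocation, it may also be worth confirming that the Neutron agents and Nova services on the affected compute nodes report as up, for example:

openstack network agent list
openstack compute service list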

