
Spawning new instances fails to allocate the network

asked 2014-12-31 04:19:35 -0500 by AlonFre

updated 2014-12-31 10:54:33 -0500 by smaffulli

Hi,

We are working on a new lab according to the "OpenStack Installation Guide for Ubuntu 14.04 - Juno". When trying to create a new instance:

root@ubuntuServer2:~# nova boot --flavor m1.tiny --image cirros-0.3.3-x86_64 --nic net-id=852ad951-c30c-490d-9bc7-983732b72a0b --security-group default --key-name demo-key demo-instance1

we get the following error (after 2-3 minutes):

root@ubuntuServer2:~# nova list
+--------------------------------------+----------------+--------+------------+-------------+----------+
| ID                                   | Name           | Status | Task State | Power State | Networks |
+--------------------------------------+----------------+--------+------------+-------------+----------+
| e2e5d915-cc9e-4309-802f-b03632ffd572 | demo-instance1 | ERROR  | spawning   | NOSTATE     |          |
+--------------------------------------+----------------+--------+------------+-------------+----------+

The error on the Horizon web interface:

Fault


Message
    Build of instance e2e5d915-cc9e-4309-802f-b03632ffd572 aborted: Failed to allocate the network(s), not rescheduling.
Code
    500
Details
    File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2014, in do_build_and_run_instance filter_properties) File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2136, in _build_and_run_instance reason=msg) 
Created
    Dec. 31, 2014, 12:41 p.m.

We tried to find the problem by looking at the log files (I paste the relevant parts here). nova-compute.log:

root@ubuntuServer3:~# tail -40 /var/log/nova/nova-compute.log
2014-12-31 14:41:56.844 31313 INFO nova.compute.manager [-] [instance: e2e5d915-cc9e-4309-802f-b03632ffd572] VM Stopped (Lifecycle Event)
2014-12-31 14:41:56.945 31313 INFO nova.compute.manager [-] [instance: e2e5d915-cc9e-4309-802f-b03632ffd572] During sync_power_state the instance has a pending task (spawning). Skip.
2014-12-31 14:41:57.226 31313 INFO nova.virt.libvirt.driver [req-723bc64e-3a76-4a95-9686-a512a898c7ab None] [instance: e2e5d915-cc9e-4309-802f-b03632ffd572] Deleting instance files /var/lib/nova/instances/e2e5d915-cc9e-4309-802f-b03632ffd572_del
2014-12-31 14:41:57.227 31313 INFO nova.virt.libvirt.driver [req-723bc64e-3a76-4a95-9686-a512a898c7ab None] [instance: e2e5d915-cc9e-4309-802f-b03632ffd572] Deletion of /var/lib/nova/instances/e2e5d915-cc9e-4309-802f-b03632ffd572_del complete
2014-12-31 14:41:57.299 31313 ERROR nova.compute.manager [req-723bc64e-3a76-4a95-9686-a512a898c7ab None] [instance: e2e5d915-cc9e-4309-802f-b03632ffd572] Instance failed to spawn
2014-12-31 14:41:57.299 31313 TRACE nova.compute.manager [instance: e2e5d915-cc9e-4309-802f-b03632ffd572] Traceback (most recent call last):
2014-12-31 14:41:57.299 31313 TRACE nova.compute.manager [instance: e2e5d915-cc9e-4309-802f-b03632ffd572]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2231, in _build_resources
2014-12-31 14:41:57.299 31313 TRACE nova.compute.manager [instance: e2e5d915-cc9e-4309-802f-b03632ffd572]     yield resources
2014-12-31 14:41:57.299 31313 TRACE nova.compute.manager [instance: e2e5d915-cc9e-4309-802f-b03632ffd572]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 2101, in _build_and_run_instance
2014-12-31 14:41:57.299 31313 TRACE nova.compute.manager [instance: e2e5d915-cc9e-4309-802f-b03632ffd572]     block_device_info=block_device_info)
2014-12-31 14:41:57.299 31313 TRACE nova.compute.manager [instance: e2e5d915-cc9e-4309-802f-b03632ffd572]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2621, in spawn
2014-12-31 14:41:57.299 31313 TRACE nova.compute.manager [instance: e2e5d915-cc9e-4309-802f-b03632ffd572]     block_device_info, disk_info=disk_info)
2014-12-31 14:41:57.299 31313 TRACE nova.compute.manager [instance: e2e5d915-cc9e-4309-802f-b03632ffd572]   File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 4435, in _create_domain_and_network
2014-12-31 14:41:57.299 31313 TRACE nova.compute.manager [instance: e2e5d915-cc9e-4309-802f-b03632ffd572]     raise exception.VirtualInterfaceCreateException()
2014-12-31 14:41:57.299 31313 TRACE nova.compute.manager [instance: e2e5d915-cc9e-4309-802f-b03632ffd572] VirtualInterfaceCreateException: Virtual Interface creation failed
2014-12-31 14:41:57.299 31313 TRACE nova.compute.manager [instance: e2e5d915-cc9e-4309-802f-b03632ffd572] 
2014-12-31 14:41:57.302 31313 AUDIT nova.compute.manager [req-723bc64e-3a76-4a95-9686-a512a898c7ab None] [instance: e2e5d915-cc9e-4309-802f-b03632ffd572] Terminating instance
2014-12-31 14:41:57.315 31313 WARNING nova.virt.libvirt.driver ...

Comments

Could you paste the nova.conf from the compute node to paste.openstack.org?

9lives ( 2014-12-31 08:24:12 -0500 )

Thanks 9lives, here is the nova.conf file: http://paste.openstack.org/show/155277/

I have added the line "vif_plugging_timeout: 0" to nova.conf on the compute and the controller node, and that fixed the problem. But I don't know what "vif_plugging_timeout" means; I just saw it in some other post.

AlonFre ( 2014-12-31 09:51:50 -0500 )

4 answers


answered 2014-12-31 10:09:10 -0500 by 9lives

This issue is caused by the vif_plugging settings in nova.conf. Configure the following to resolve it; see the comments for each option:

#Fail instance boot if vif plugging fails
vif_plugging_is_fatal = False

#Number of seconds to wait for neutron vif
#plugging events to arrive before continuing or failing
#(see vif_plugging_is_fatal). If this is set to zero and
#vif_plugging_is_fatal is False, events should not be expected to arrive at all.
vif_plugging_timeout = 0
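
For reference, these options belong in the [DEFAULT] section of /etc/nova/nova.conf and only take effect after the service is restarted. A minimal sketch, assuming the Ubuntu 14.04 / Juno package service names:

# on the compute node, after editing /etc/nova/nova.conf
service nova-compute restart
# if nova.conf was also changed on the controller node
service nova-api restart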

Hope that helps!

Vic


Comments

Thanks! As I said, I only added "vif_plugging_timeout: 0" and it worked. But what is the reason for the problem, and what is the drawback of these "vif_plugging" settings?

AlonFre ( 2014-12-31 10:20:14 -0500 )

One more thing: the attached device of the new instance is "compute:None". What does that mean?

AlonFre ( 2014-12-31 10:23:09 -0500 )

Hello, it's great to have answers! But do we have to restart the nova service? In that case, it seems that on devstack the services run under "screen" and we have to use Ctrl+A 9 for nova. The problem is that I run devstack in a VM and the keyboard is not easy to use. Maybe a command can do the trick? Thanks!

devostack ( 2016-11-21 11:59:17 -0500 )
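
For the devstack case above, you can usually attach straight to the right screen window from a shell instead of cycling through windows with Ctrl+A 9. A rough sketch, assuming the devstack screen session is named "stack" and the nova-compute window is named "n-cpu" (the historical defaults; newer devstack runs services as systemd units instead of screen):

screen -ls                  # list running screen sessions; devstack's is typically "stack"
screen -x stack -p n-cpu    # attach directly to the nova-compute window
# inside the window: Ctrl+C stops the service, then Up arrow + Enter re-runs its command line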

answered 2018-05-22 17:02:27 -0500

I came across this error myself and was able to fix the root cause. A couple of questions:

  1. Are you using an IPA server for certificates?
  2. Did you check /var/log/neutron/server.log?

I was seeing SSL errors in that log file when neutron tried to make a call to the nova API. The issue was that my IPA server did not have the CA cert required to make the API calls.
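
To check for the same symptom, something like the following can surface the relevant lines; the grep pattern is only an illustration, not an exact match for every deployment:

grep -iE 'ssl|certificate' /var/log/neutron/server.log | tail -n 20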


answered 2015-12-20 23:39:17 -0500 by Satender Yadav

Open the file /etc/nova/nova.conf and add the lines below in [DEFAULT]:

vif_plugging_timeout=10
vif_plugging_is_fatal=False


answered 2015-09-15 04:31:59 -0500

I also resolved this error by putting

vif_plugging_timeout=10
vif_plugging_is_fatal=False

in the nova.conf file in the compute node.

