
Instance failed to spawn (BIOS Virtualisation enabled ?)

asked 2011-05-18 07:53:51 -0500

asly

I installed nova on a single node, and when I try to launch an instance I get: Instance '11' failed to spawn. Is virtualization enabled in the BIOS? But the VT extensions are enabled in the BIOS, and I can launch a VM manually with KVM.

Thank you for your help.
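For anyone else who hits this message: the VT check can be scripted instead of rebooting into the BIOS repeatedly. A minimal sketch, assuming a Linux host; the `check_vt` helper is mine, not part of nova:

```shell
# check_vt: report whether a /proc/cpuinfo "flags" line advertises
# hardware virtualization (vmx = Intel VT-x, svm = AMD-V).
check_vt() {
    if printf '%s\n' "$1" | grep -Eq '(^| )(vmx|svm)( |$)'; then
        echo enabled
    else
        echo disabled
    fi
}

# Run it against the real host, and confirm the KVM device node exists:
check_vt "$(grep -m1 '^flags' /proc/cpuinfo)"
[ -e /dev/kvm ] && echo "/dev/kvm present" || echo "/dev/kvm missing"
```

As the rest of this thread shows, nova prints the "Is virtualization enabled in the BIOS?" hint for spawn failures generally, so both checks can pass and the instance can still fail for an unrelated reason.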

nova-compute log :

2011-05-18 09:40:50,792 DEBUG nova.rpc [-] received {'_context_request_id': 'I0STKANX3ZERKYNV6ZPJ', '_context_read_deleted': False, 'args': {'instance_id': 11, 'injected_files': None, 'availability_zone': None}, '_context_is_admin': True, '_context_timestamp': '2011-05-18T07:40:50Z', '_context_user': 'test', 'method': 'run_instance', '_context_project': 'cloudsii', '_context_remote_address': '10.6.214.2'} from (pid=3041) _receive /usr/lib/pymodules/python2.6/nova/rpc.py:167
2011-05-18 09:40:50,793 DEBUG nova.rpc [-] unpacked context: {'timestamp': '2011-05-18T07:40:50Z', 'remote_address': '10.6.214.2', 'project': 'cloudsii', 'is_admin': True, 'user': 'test', 'request_id': 'I0STKANX3ZERKYNV6ZPJ', 'read_deleted': False} from (pid=3041) _unpack_context /usr/lib/pymodules/python2.6/nova/rpc.py:331
2011-05-18 09:40:50,861 AUDIT nova.compute.manager [I0STKANX3ZERKYNV6ZPJ test cloudsii] instance 11: starting...
2011-05-18 09:40:51,002 DEBUG nova.rpc [-] Making asynchronous call on network.sf14-rennes ... from (pid=3041) call /usr/lib/pymodules/python2.6/nova/rpc.py:350
2011-05-18 09:40:51,002 DEBUG nova.rpc [-] MSG_ID is be507b0cb4c64d2c8db96936ebd89056 from (pid=3041) call /usr/lib/pymodules/python2.6/nova/rpc.py:353
2011-05-18 09:40:51,465 DEBUG nova.utils [-] Running cmd (subprocess): ip link show dev vlan100 from (pid=3041) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-05-18 09:40:51,471 DEBUG nova.utils [-] Attempting to grab semaphore "ensure_bridge" for method "ensure_bridge"... from (pid=3041) inner /usr/lib/pymodules/python2.6/nova/utils.py:594
2011-05-18 09:40:51,471 DEBUG nova.utils [-] Attempting to grab file lock "ensure_bridge" for method "ensure_bridge"... from (pid=3041) inner /usr/lib/pymodules/python2.6/nova/utils.py:599
2011-05-18 09:40:51,472 DEBUG nova.utils [-] Running cmd (subprocess): ip link show dev br100 from (pid=3041) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-05-18 09:40:51,478 DEBUG nova.utils [-] Running cmd (subprocess): sudo route -n from (pid=3041) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-05-18 09:40:51,487 DEBUG nova.utils [-] Running cmd (subprocess): sudo ip addr show dev vlan100 scope global from (pid=3041) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-05-18 09:40:51,497 DEBUG nova.utils [-] Running cmd (subprocess): sudo brctl addif br100 vlan100 from (pid=3041) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-05-18 09:40:51,506 DEBUG nova.utils [-] Result was 1 from (pid=3041) execute /usr/lib/pymodules/python2.6/nova/utils.py:166
2011-05-18 09:40:51,583 DEBUG nova.virt.libvirt_conn [-] instance instance-0000000b: starting toXML method from (pid=3041) to_xml /usr/lib/pymodules/python2.6/nova/virt/libvirt_conn.py:996
2011-05-18 09:40:51,649 DEBUG nova.virt.libvirt_conn [-] instance instance-0000000b: finished toXML method from (pid=3041) to_xml /usr/lib/pymodules/python2.6/nova/virt/libvirt_conn ... (more)


8 answers


answered 2011-08-05 20:26:32 -0500

I have been using the --glance_api_servers flag. There were some zero-sized items in _base, so I nuked them all.
Yet when I try to create an instance I still get the same problem (a zero-sized item gets created in _base and no kernel file appears at all). I think the problem is with glance. I am seeing this in the glance log:

2011-08-05 20:19:16 DEBUG [eventlet.wsgi.server] 15.184.103.107 - - [05/Aug/2011 20:19:16] "GET /v1/images/27 HTTP/1.1" 200 0 1.031617
2011-08-05 20:23:38 DEBUG [glance.api.middleware.version_negotiation] Processing request: HEAD /v1/images/28 Accept:
2011-08-05 20:23:38 DEBUG [glance.api.middleware.version_negotiation] Matched versioned URI. Version: 1.0
2011-08-05 20:23:38 DEBUG [routes.middleware] Matched HEAD /images/28
2011-08-05 20:23:38 DEBUG [routes.middleware] Route path: '/images/{id}', defaults: {'action': u'meta', 'controller': }
2011-08-05 20:23:38 DEBUG [routes.middleware] Match dict: {'action': u'meta', 'controller': , 'id': u'28'}
2011-08-05 20:23:38 DEBUG [eventlet.wsgi.server] 15.184.103.106 - - [05/Aug/2011 20:23:38] "HEAD /v1/images/28 HTTP/1.1" 200 948 0.010306
2011-08-05 20:23:38 DEBUG [glance.api.middleware.version_negotiation] Processing request: HEAD /v1/images/28 Accept:
2011-08-05 20:23:38 DEBUG [glance.api.middleware.version_negotiation] Matched versioned URI. Version: 1.0
2011-08-05 20:23:38 DEBUG [routes.middleware] Matched HEAD /images/28
2011-08-05 20:23:38 DEBUG [routes.middleware] Route path: '/images/{id}', defaults: {'action': u'meta', 'controller': }
2011-08-05 20:23:38 DEBUG [routes.middleware] Match dict: {'action': u'meta', 'controller': , 'id': u'28'}
2011-08-05 20:23:38 DEBUG [eventlet.wsgi.server] 15.184.103.106 - - [05/Aug/2011 20:23:38] "HEAD /v1/images/28 HTTP/1.1" 200 948 0.009156
2011-08-05 20:23:38 DEBUG [glance.api.middleware.version_negotiation] Processing request: HEAD /v1/images/28 Accept:
2011-08-05 20:23:38 DEBUG [glance.api.middleware.version_negotiation] Matched versioned URI. Version: 1.0
2011-08-05 20:23:38 DEBUG [routes.middleware] Matched HEAD /images/28
2011-08-05 20:23:38 DEBUG [routes.middleware] Route path: '/images/{id}', defaults: {'action': u'meta', 'controller': }
2011-08-05 20:23:38 DEBUG [routes.middleware] Match dict: {'action': u'meta', 'controller': , 'id': u'28'}
2011-08-05 20:23:38 DEBUG [eventlet.wsgi.server] 15.184.103.106 - - [05/Aug/2011 20:23:38] "HEAD /v1/images/28 HTTP/1.1" 200 948 0.009843
2011-08-05 20:23:38 DEBUG [glance.api.middleware.version_negotiation] Processing request: HEAD /v1/images/27 Accept:
2011-08-05 20:23:38 DEBUG [glance.api.middleware.version_negotiation] Matched versioned URI. Version: 1.0
2011-08-05 20:23:38 DEBUG [routes.middleware] Matched HEAD /images/27
2011-08-05 20:23:38 DEBUG [routes.middleware] Route path: '/images/{id}', defaults: {'action': u'meta', 'controller': }
2011-08-05 20:23:38 DEBUG [routes.middleware] Match dict: {'action': u'meta', 'controller': , 'id': u'27'}
2011-08-05 20:23:38 DEBUG [eventlet.wsgi.server] 15.184.103.106 - - [05/Aug/2011 20:23:38] "HEAD /v1/images/27 HTTP/1 ... (more)


answered 2011-08-05 20:37:36 -0500

vishvananda

Do you have the same version of glance on both machines? dpkg -l | grep glance
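For the record, the comparison doesn't have to be done by eyeballing dpkg output. A sketch; `same_version` is just a throwaway helper, and the two version strings are the ones reported later in this thread:

```shell
# same_version: succeed only if two dpkg version strings are identical.
same_version() { [ "$1" = "$2" ]; }

# On each node, print the installed version:
#   dpkg-query -W -f='${Version}\n' python-glance
# Then compare the two results:
same_version '2011.3~d1~20110531.139-0ubuntu0ppa1~natty1' '2011.3-d2-hp1' \
    && echo 'versions match' || echo 'version mismatch'
# prints: version mismatch
```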


answered 2011-05-18 09:27:47 -0500

asly

I solved the first error by enabling nbd, but I get a new error:

nova-compute log :

2011-05-18 10:58:37,830 DEBUG nova.utils [-] Running cmd (subprocess): sudo tune2fs -c 0 -i 0 /dev/nbd14 from (pid=2189) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-05-18 10:58:37,859 DEBUG nova.utils [-] Result was 1 from (pid=2189) execute /usr/lib/pymodules/python2.6/nova/utils.py:166
2011-05-18 10:58:37,860 DEBUG nova.utils [-] Running cmd (subprocess): sudo qemu-nbd -d /dev/nbd14 from (pid=2189) execute /usr/lib/pymodules/python2.6/nova/utils.py:150
2011-05-18 10:58:37,871 WARNING nova.virt.libvirt_conn [-] instance instance-0000000e: ignoring error injecting data into image 117357892 (Unexpected error while running command.
Command: sudo tune2fs -c 0 -i 0 /dev/nbd14
Exit code: 1
Stdout: 'tune2fs 1.41.12 (17-May-2010)\n'
Stderr: "tune2fs: Argument invalide lors de la tentative d'ouverture de /dev/nbd14\nImpossible de trouver un superbloc de système de fichiers valide.\n")
2011-05-18 10:58:38,434 ERROR nova.exception [-] Uncaught exception
(nova.exception): TRACE: Traceback (most recent call last):
(nova.exception): TRACE:   File "/usr/lib/pymodules/python2.6/nova/exception.py", line 120, in _wrap
(nova.exception): TRACE:     return f(*args, **kw)
(nova.exception): TRACE:   File "/usr/lib/pymodules/python2.6/nova/virt/libvirt_conn.py", line 617, in spawn
(nova.exception): TRACE:     domain = self._create_new_domain(xml)
(nova.exception): TRACE:   File "/usr/lib/pymodules/python2.6/nova/virt/libvirt_conn.py", line 1079, in _create_new_domain
(nova.exception): TRACE:     domain.createWithFlags(launch_flags)
(nova.exception): TRACE:   File "/usr/lib/python2.6/dist-packages/libvirt.py", line 369, in createWithFlags
(nova.exception): TRACE:     if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed', dom=self)
(nova.exception): TRACE: libvirtError: internal error Process exited while reading console log output: char device redirected to /dev/pts/4
(nova.exception): TRACE: qemu: could not load kernel '/var/lib/nova/instances/instance-0000000e/kernel': Success
(nova.exception): TRACE:
(nova.exception): TRACE:

Translation of the French message: tune2fs: Invalid argument while trying to open /dev/nbd14. Unable to find a valid filesystem superblock.
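For what it's worth, the tune2fs behaviour can be reproduced outside nova, which helps separate a broken nbd attach from a broken image. A sketch using a throwaway file instead of /dev/nbd14 (tune2fs and mkfs.ext2 accept plain files, so no root is needed; the path is arbitrary):

```shell
PATH="$PATH:/sbin:/usr/sbin"    # tune2fs/mkfs often live in sbin

# A zero-filled file contains no filesystem, so tune2fs fails with the
# same "can't find valid filesystem superblock" complaint as in the log.
img=$(mktemp /tmp/superblock-demo.XXXXXX)
dd if=/dev/zero of="$img" bs=1M count=8 2>/dev/null

tune2fs -l "$img" >/dev/null 2>&1 && echo 'superblock found' || echo 'no valid superblock'
# prints: no valid superblock

mkfs.ext2 -F -q "$img"          # put a real filesystem in the file
tune2fs -l "$img" >/dev/null 2>&1 && echo 'superblock found' || echo 'no valid superblock'
# prints: superblock found

rm -f "$img"
```

Seeing the same failure against /dev/nbd14 means tune2fs wasn't looking at a valid ext filesystem at that offset. Note that nova only logs a warning and carries on; the fatal error in the trace above is the empty kernel file, not the injection.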


answered 2011-05-20 08:00:56 -0500

guanxiaohua2k6

I think it failed to download the kernel to /var/lib/nova/instances/instance-0000000e/kernel. Please check whether the file exists and whether its size is correct.
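A quick way to run that check over the whole instance directory at once; `zero_sized` is just an illustrative helper, and the real path comes from the traceback:

```shell
# zero_sized: list regular files with size 0 under a directory.
zero_sized() { find "$1" -type f -size 0; }

# Against the real instance directory this would be:
#   zero_sized /var/lib/nova/instances/instance-0000000e
# Demo on a scratch directory standing in for it:
d=$(mktemp -d)
touch "$d/kernel"                                       # empty, like the failed download
dd if=/dev/zero of="$d/disk" bs=1k count=4 2>/dev/null  # non-empty
zero_sized "$d"   # prints only the empty kernel file
```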


answered 2011-05-20 12:29:34 -0500

asly

The file exists, but it's empty:

# ls -l /var/lib/nova/instances/instance-0000000e/
-rw-r----- 1 root root        0 18 mai 10:58 console.log
-rw-r--r-- 1 root root  6291456 18 mai 10:58 disk
-rw-r--r-- 1 root root  8388608 18 mai 10:58 disk.local
-rw-r--r-- 1 root root        0 18 mai 10:58 kernel
-rw-r--r-- 1 nova nogroup  1917 18 mai 10:58 libvirt.xml
-rw-r--r-- 1 root root        0 18 mai 10:58 ramdisk


answered 2011-08-05 19:30:58 -0500

I am seeing this same problem (zero-sized kernel). This works on one compute node but not on a second node. I am using Glance as my image store. Has anybody identified a root cause for this?


answered 2011-08-05 19:37:25 -0500

vishvananda

Generally it is due to missing settings for glance or mismatched versions. --glance_api_servers=host:port needs to be set, or it will default to localhost and will only find images on the machine that is running glance. You should also clear out the zero-sized images in _base.
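On a cactus-era flag-file install that means a line like the following on every compute node (the path is the one used by the Ubuntu packages, and the host:port value is a placeholder for wherever glance-api actually runs):

```
# /etc/nova/nova.conf -- without this flag nova defaults to localhost and
# can only fetch images on the box that runs glance itself.
--glance_api_servers=192.168.1.10:9292
```

Clearing the cache then means deleting the zero-byte files under /var/lib/nova/instances/_base before retrying the launch.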

Vish



answered 2011-08-05 20:50:07 -0500

There is only one glance server (and two compute nodes). But the python-glance package on the compute nodes is different:

Node that works has:         ii python-glance 2011.3~d1~20110531.139-0ubuntu0ppa1~natty1
Node that does not work has: ii python-glance 2011.3-d2-hp1

I'll run this one to ground and post an update (probably Monday).



Stats

Asked: 2011-05-18 07:53:51 -0500

Seen: 304 times

Last updated: Aug 05 '11