raphael-glon's profile - activity

2016-12-18 08:19:01 -0500 received badge  Notable Question (source)
2016-12-18 08:19:01 -0500 received badge  Famous Question (source)
2016-12-18 08:19:01 -0500 received badge  Popular Question (source)
2016-04-25 01:32:09 -0500 received badge  Notable Question (source)
2015-11-27 06:50:54 -0500 received badge  Popular Question (source)
2015-11-25 11:00:07 -0500 received badge  Student (source)
2015-11-25 04:18:54 -0500 received badge  Editor (source)
2015-11-25 03:09:21 -0500 answered a question easiest way to recover lost credentials

curl -X GET <metadata-service password URL>

gives a password within the guest that is also available through the nova API extension os-server-password. So if a script within the guest sets the root password from this data, that would do the job.

I'll use this one. (Note: any user inside the guest can get the info with a simple curl request, so a template with a password injected in this way must be reserved for specific purposes such as rescue only.)
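A minimal cloud-config sketch of the idea above, for illustration only: it assumes cloud-init runs in the rescue image and that the deployment's metadata service serves the password at a `password` endpoint under the well-known metadata IP (the exact URL is an assumption; adjust it to your setup).

```yaml
#cloud-config
# Sketch: set the root password from the value served by the metadata
# service. The password URL below is an assumption about the deployment.
runcmd:
  - pw="$(curl -sf http://169.254.169.254/openstack/latest/password)"; [ -n "$pw" ] && echo "root:$pw" | chpasswd
```

The guard on an empty value avoids locking root with a blank password if the metadata request fails.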

2015-11-24 14:45:02 -0500 asked a question easiest way to recover lost credentials

If a user has lost the ssh key he was using to connect to a virtual machine (data injected using metadata-server/cloudinit), is there some way to:

  • Update the ssh keys of a nova server so that cloud-init reinjects them (or, equivalently, have cloud-init inject them into the rescue instance), or
  • Inject the adminpass returned by nova rescue using cloud-init (assuming local hypervisor data injection is disabled)?

Otherwise this forces rescuing the instance with an image whose credentials are known (or snapshotting the instance, downloading the snapshot, and reinjecting the key).

Thank you

2015-11-24 14:45:02 -0500 asked a question libvirt driver, rescue: dead code ?


-> the flag rescue_image_id is dead code (see def rescue): it is still there, but I think there is no case where the driver uses it, because image_meta is never None (it is filled by nova.compute.manager.Manager._get_rescue_image)

-> meanwhile, CONF.libvirt.rescue_kernel_id and CONF.libvirt.rescue_ramdisk_id are still taken into account, which causes consistency problems

Can someone confirm?

Even if the API offers the possibility to specify the rescue image, having this flag to specify a default rescue image when none was specified (instead of taking the rescued VM's base image) allowed bypassing problems such as the one reported here: for example, with a dedicated rescue image, and by playing with the kernel_id, ramdisk_id and os_command_line='root=UUID=<>...' properties, we can make sure that the filesystem mounted on / is always the rescue one, no matter whether the rescue disk ends up as vda, vdb or something else.
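For reference, the rescue-related options mentioned above live under the [libvirt] section of nova.conf. A sketch of the kind of setup described, with placeholder UUIDs (the actual image, kernel and ramdisk IDs depend on the deployment):

```ini
[libvirt]
# Dedicated rescue image with a matching kernel/ramdisk pair.
# All three values below are placeholders, not real UUIDs.
rescue_image_id = <rescue-image-uuid>
rescue_kernel_id = <rescue-kernel-uuid>
rescue_ramdisk_id = <rescue-ramdisk-uuid>
```

The os_command_line value (e.g. root=UUID=<>...) is set as a property on the image itself rather than in nova.conf, which is what pins / to the rescue filesystem regardless of the device name the rescue disk receives.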

Any reason why this flag is not usable anymore?

Thank you