Can I delete a snapshot an instance was made from?

asked 2017-12-05 16:20:07 -0600 by jamesopst, updated 2017-12-05 17:25:55 -0600

hi all,

Migration in my Newton environment wasn't working quite right, so I shut down all the instances I needed to migrate and created a snapshot of each one. These snapshots then existed as images in Glance (on a separate storage node). I recreated the instances on different compute nodes, using those snapshots as the images.

Once the new instances were up and running well, I deleted those snapshots from Images. Everything kept running fine. A few days later the instances went into an Error status and became inaccessible (the user may have rebooted the instances by that point; I don't know for sure).

When I try to restart these instances, the error message I get is:

Message
    Image 0894d367-1132-47c4-944d-e80bbeb2088e could not be found.
Code
    404

Indeed, image 0894xxxx... does not exist. I can only assume it was the snapshot I had originally created in Glance. I had expected that the image was no longer needed, since it was copied to the compute node when the instance was created.

I am confused: can I not delete a snapshot an instance was made from?
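For reference, here is roughly how to confirm which image the instance still references and whether Glance still has it (a sketch; it assumes admin credentials are sourced and simply reuses the UUIDs from the error and log below):

# which image does nova think the instance was booted from?
openstack server show f853abdf-5503-4608-ab07-a713fa1ec086 -c image -c status
# does that image still exist in glance?
openstack image show 0894d367-1132-47c4-944d-e80bbeb2088e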

Thanks. Below is the nova-compute log from a failed restart. (Sorry it is so long; is this the best way to post logs here?)

2017-12-05T22:26:56.937761+00:00 node-103 nova-compute: 2017-12-05 22:26:56.937 40391 DEBUG oslo_messaging._drivers.amqpdriver [-] received message with unique_id: 57db88ac23824e448f6190b91e34e8ca __call__ /usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:196
2017-12-05T22:26:56.943086+00:00 node-103 nova-compute: 2017-12-05 22:26:56.942 40391 DEBUG oslo_messaging._drivers.amqpdriver [req-382118bd-fead-415f-989c-d6d6b2705489 b531c96d3755453f9fb02480aeca9554 d7974c32475b435ca967225dee799360 - - -] CALL msg_id: 142b4f66ddb34056a474f9a5bea411fc exchange 'nova' topic 'conductor' _send /usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:448
2017-12-05T22:26:56.953473+00:00 node-103 nova-compute: 2017-12-05 22:26:56.953 40391 DEBUG oslo_messaging._drivers.amqpdriver [-] received reply msg_id: 142b4f66ddb34056a474f9a5bea411fc __call__ /usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:296
2017-12-05T22:26:56.954380+00:00 node-103 nova-compute: 2017-12-05 22:26:56.953 40391 INFO nova.compute.manager [req-382118bd-fead-415f-989c-d6d6b2705489 b531c96d3755453f9fb02480aeca9554 d7974c32475b435ca967225dee799360 - - -] [instance: f853abdf-5503-4608-ab07-a713fa1ec086] Rebooting instance
2017-12-05T22:26:56.955506+00:00 node-103 nova-compute: 2017-12-05 22:26:56.955 40391 DEBUG oslo_messaging._drivers.amqpdriver [req-382118bd-fead-415f-989c-d6d6b2705489 b531c96d3755453f9fb02480aeca9554 d7974c32475b435ca967225dee799360 - - -] CALL msg_id: 64e9f8a6b3374464b484d335da7a9a15 exchange 'nova' topic 'conductor' _send /usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:448
2017-12-05T22:26:56.971545+00:00 node-103 nova-compute: 2017-12-05 22:26:56.971 40391 DEBUG oslo_messaging._drivers.amqpdriver [-] received reply msg_id: 64e9f8a6b3374464b484d335da7a9a15 __call__ /usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:296
2017-12-05T22:26:56.973509+00:00 node-103 nova-compute: 2017-12-05 22:26:56.973 40391 DEBUG oslo_concurrency.lockutils [req-382118bd-fead-415f-989c-d6d6b2705489 b531c96d3755453f9fb02480aeca9554 d7974c32475b435ca967225dee799360 - - -] Acquired semaphore "refresh_cache-f853abdf-5503-4608-ab07-a713fa1ec086" lock /usr/lib/python2.7/dist-packages/oslo_concurrency/lockutils.py:212
2017-12-05T22:26:56.973945+00:00 node-103 nova-compute: 2017-12-05 22:26:56.973 40391 DEBUG nova.network.neutronv2.api [req-382118bd-fead-415f-989c-d6d6b2705489 b531c96d3755453f9fb02480aeca9554 d7974c32475b435ca967225dee799360 - - -] [instance: f853abdf-5503-4608-ab07-a713fa1ec086] _get_instance_nw_info() _get_instance_nw_info /usr/lib/python2.7/dist-packages/nova/network/neutronv2/api.py:1276
2017-12-05T22:26:56.974880+00:00 node-103 nova-compute: 2017-12-05 22:26:56.974 40391 DEBUG oslo_messaging ...
(more)

Comments

Obviously, the instance was in a state that required restarting it from the image.

How did you restart the instance? Are there any log messages about the error condition?
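(If it helps, on the compute node hosting the instance the relevant messages usually land in nova-compute's log; the path below is a common default and may differ per distro:)

grep -iE 'ERROR|ImageNotFound' /var/log/nova/nova-compute.log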

Bernd Bausch ( 2017-12-05 16:54:53 -0600 )

Thanks Bernd. I added the nova log from a failed restart. (Sorry it is so long; is this the best way to post logs here?)

Also, could you clarify what you mean in your first line? I'm not following exactly. Are you saying OpenStack automatically restarted the instance from the initial image for some reason?

jamesopst ( 2017-12-05 17:28:15 -0600 )

I don't know for sure how the user initially restarted the instance; probably from the Linux OS's command line. The instance was in an Error state when I found it earlier this week. I've found that a command-line hard reboot works well in that case, so I've been running:

nova reboot --hard instance-name
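(The unified client equivalent, if preferred; "instance-name" is the same placeholder as above:)

openstack server reboot --hard instance-name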
jamesopst ( 2017-12-06 08:20:50 -0600 )

1 answer

answered 2017-12-06 00:25:24 -0600 by Deepa

Is the instance created from the snapshot volume-backed, or was it just booted from the snapshot as an image?
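One way to check (a sketch; assumes the openstack CLI is configured and <server> is the instance name or UUID): if the image field is empty and a volume is attached, the instance is volume-backed; if it shows the snapshot's UUID, it is image-backed.

openstack server show <server>    # look at the "image" and attached-volumes fields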


Comments

Thanks for the input. The instance was booted from the snapshot; the snapshot wasn't made into a volume. In that case, isn't the image copied to the compute node rather than accessed from Glance? Thanks.
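(To see what the compute node actually keeps locally, assuming the default libvirt qcow2 backend and default instances_path, something like this shows the instance's overlay disk and the cached base images it can back onto:)

# run on the compute node hosting the instance
qemu-img info /var/lib/nova/instances/f853abdf-5503-4608-ab07-a713fa1ec086/disk
ls -l /var/lib/nova/instances/_base/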

jamesopst ( 2017-12-06 08:17:58 -0600 )
