nova ceph copy-on-write images don't work even though they are configured to

asked 2018-05-13 12:45:16 -0500 by Damian Dąbrowski

Hello,

I have a problem.

I have an OpenStack + Ceph cluster and I'm using RAW images.

Unfortunately, only Cinder volumes are copy-on-write clones. If I create a Nova instance, its disk is flattened.

During instance creation I see this (it's fine, I can see the parent):

# rbd info compute/e9414caf-f60a-43bf-929f-45bab96550a9_disk
rbd image 'e9414caf-f60a-43bf-929f-45bab96550a9_disk':
        size 2252 MB in 282 objects
        order 23 (8192 kB objects)
        block_name_prefix: rbd_data.3271361285c77
        format: 2
        features: layering, striping
        flags:
        parent: images/83f1ae0a-864a-4130-a4a3-0fd492cc100e@snap
        overlap: 2252 MB
        stripe unit: 8192 kB
        stripe count: 1

Once the instance is fully created, I no longer see the parent image:

# rbd info compute/e9414caf-f60a-43bf-929f-45bab96550a9_disk
rbd image 'e9414caf-f60a-43bf-929f-45bab96550a9_disk':
        size 40960 MB in 5120 objects
        order 23 (8192 kB objects)
        block_name_prefix: rbd_data.3271361285c77
        format: 2
        features: layering, striping
        flags:
        stripe unit: 8192 kB
        stripe count: 1

My glance-api.conf seems to be fine (pasting the important lines):

show_image_direct_url = True
show_multiple_locations = True
enable_v2_api = true

Does anyone have an idea why it isn't working, or how I can debug this?
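
In case it helps, this is roughly how I'm observing the flattening (a sketch, using the image and instance IDs from above):

# check the image really is raw in Glance
glance image-show 83f1ae0a-864a-4130-a4a3-0fd492cc100e | grep disk_format

# watch the instance disk while it boots; the "parent:" line
# disappears once the disk gets flattened
watch -n 2 'rbd info compute/e9414caf-f60a-43bf-929f-45bab96550a9_disk'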


Comments

I'm not sure, but this could depend on the default image features you have configured in your ceph cluster. My nova disks are CoW, but in the info output they show the same size as the base image. But the missing parent is strange indeed. Can you paste your versions (ceph, openstack, ceph.conf)?

eblock (2018-05-14 03:30:42 -0500)

3 answers


answered 2018-05-14 05:05:07 -0500 by tze (updated 2018-05-14 05:06:14 -0500)

Your configuration in glance-api.conf is not correct. show_multiple_locations = True is for the file store; you are using the rbd store, so only show_image_direct_url = True is needed.

The v2 API is enabled by default (at least in the Ocata release that I am running), so in my config I have only disabled the v1 API and registry with:

enable_v1_api = false
enable_v1_registry = false
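
So, following this suggestion, a minimal glance-api.conf would look like this (a sketch assuming Ocata-like defaults, where the v2 API is on by default):

# glance-api.conf (relevant lines only)
show_image_direct_url = True
enable_v1_api = false
enable_v1_registry = false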

Comments

My Mitaka conf:

show_image_direct_url = true
show_multiple_locations = false
enable_v1_api = true
enable_v1_registry = true
enable_v2_api = true
enable_v2_registry = true

Probably not a feature problem; raw images are correct.

eblock (2018-05-15 03:14:53 -0500)

Found a similar description here.

eblock (2018-05-15 03:15:17 -0500)

Damian, my stack is more up to date than yours, sorry; Mitaka is two releases behind mine, so maybe my suggestion about the API and registry settings is not correct. Leave them as they are. I had the same error; I enabled debug in the compute node's nova.conf, and from the log I understood that show_multiple_locations must be 0.
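
For reference, turning on that debug logging looks roughly like this (a sketch; paths and service names assume a standard Ubuntu Mitaka-era install):

# /etc/nova/nova.conf on the compute node
[DEFAULT]
debug = True

# then restart the compute service and watch the log
service nova-compute restart
tail -f /var/log/nova/nova-compute.log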

tze (2018-05-15 03:37:49 -0500)

answered 2018-05-14 09:42:02 -0500 by Damian Dąbrowski

Hello,

Thanks for your replies. I'm sending my software versions below. I'm using Mirantis Fuel 9.2 (based on Mitaka).

@eblock

nova-common: 13.1.4-7~u14.04+mos54
cinder-common: 8.1.1-6~u14.04+mos30
glance-common: 12.0.0-5~u14.04+mos26
keystone: 9.3.0-4~u14.04+mos10
ceph: 0.94.9-1~u14.04+mos3

@tze The Ceph documentation says that I need both of these lines on Mitaka. Is that not true?

I've tried forcing off v1_api and v1_registry like you said, but then Horizon and python-openstackclient stopped working properly: for example, openstack image list worked fine, but openstack server create couldn't find my image and returned the error Could not find resource <image_id>.


Comments


Did you resolve this in the meantime? I stumbled upon a similar problem: only some of my ephemeral disks were real CoW clones, sometimes the images were flattened. If the image's disk_format is not raw, Nova will download the image to the compute node, convert it to raw and re-upload it to Ceph.

eblock (2018-10-10 04:03:34 -0500)

I couldn't reproduce it reliably yet, but one time I wanted to upload a volume to Glance and saw that it was uploaded with disk_format "qcow2", resulting in the described behavior. Maybe you can check your images with:

glance --os-image-api-version 1 image-list

and look for non-raw images.
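
If any show up as something other than raw, converting them before (re-)uploading should avoid the flattening; roughly like this (the image name and file paths are made up for illustration):

# convert a qcow2 image to raw locally
qemu-img convert -f qcow2 -O raw myimage.qcow2 myimage.raw

# and upload it as raw
glance image-create --name myimage-raw --disk-format raw \
    --container-format bare --file myimage.raw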

eblock (2018-10-10 04:06:45 -0500)

Mh, I just saw that for a short period of time you're seeing the parent image, which I don't in my case, so probably it's a different issue.

eblock (2018-10-10 04:19:03 -0500)

Hi eblock, I apologize for the long response time. Yeah, all my images are raw and it still doesn't work. I've just abandoned this case because in a few months I'll switch from Mitaka to the latest release, and I hope it will start working properly then.

Damian Dąbrowski (2018-12-01 13:44:07 -0500)

answered 2018-11-29 04:37:13 -0500 by eblock (updated 2019-02-01 09:00:08 -0500)

The ceph docs state:

show_multiple_locations = True
show_image_direct_url = True

This only seems to be relevant for Mitaka; for other releases only show_image_direct_url = true is necessary. But I'm running Ocata, and just recently I changed show_multiple_locations to false, since the config description reports it as deprecated. But then I got error messages from Nova during (live) snapshot creation:

ImageNotAuthorized: Not authorized for <IMAGE>

and had to switch that option back to true. Now the snapshots are working again. I'm not sure if this is a bug in Ocata or if I missed something during the upgrades from Mitaka to Ocata.

EDIT: I found the problem in my environment; the solution is here. Setting cache_image = none on all compute nodes (/etc/nova/nova.conf) disables the local image cache in the compute nodes' filesystem (/var/lib/nova/instances/_base).

I believe my issue was the following: I had launched a couple of instances from an image with a wrong disk_format property (not raw), so the compute node(s) created a base file locally and uploaded the flat image back to Ceph. After fixing the raw format issue nothing changed for the compute nodes; for some reason they still needed a local copy, maybe because of some metadata somewhere, I couldn't really find out. Anyway, after I found the mentioned thread (actually, I'm wondering why it took me so long, it seems pretty obvious now), I disabled the image cache option and restarted all compute services. Launching a new instance from the same base image as before doesn't result in a flat image anymore; it's a copy-on-write clone now, just like it's supposed to be.
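
For completeness, the change amounts to this (a sketch; the option name is as reported above, so verify it against your release, and I'm assuming it sits in the [DEFAULT] section):

# /etc/nova/nova.conf on every compute node
[DEFAULT]
cache_image = none

# restart compute, then verify a fresh instance disk keeps its parent
service nova-compute restart
rbd info compute/<instance_uuid>_disk | grep parent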

