
Issue with creating a volume from an image via Cinder

asked 2019-05-01 08:25:43 -0600

M. Piscaer

updated 2019-05-02 18:35:38 -0600

Hi,

I have an issue with creating volumes from an image. Both Glance and Cinder use Ceph as their backend.

I have a raw image that I uploaded via Horizon; Horizon places this image on the Ceph cluster.
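
For reference, the CLI equivalent of that Horizon upload would be something like this (the file name is just a placeholder):

# openstack image create --disk-format raw --container-format bare --file bionic.raw image-bionic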

From this image I create a volume, and after that I create an image from that volume.

If this is all done on the same cinder-volume host, there is no problem.

The OpenStack cluster is HA via HAProxy, load-balanced across three controllers. The installed OpenStack version is Rocky (3.16.2).

root@controller3-utility-container-db19e661:~# openstack volume service list
+------------------+-------------------------------------------+------+---------+-------+----------------------------+
| Binary           | Host                                      | Zone | Status  | State | Updated At                 |
+------------------+-------------------------------------------+------+---------+-------+----------------------------+
| cinder-volume    | controller3@RBD                           | nova | enabled | up    | 2019-05-02T08:45:08.000000 |
| cinder-backup    | controller1                               | nova | enabled | up    | 2019-05-02T08:44:59.000000 |
| cinder-backup    | controller2                               | nova | enabled | up    | 2019-05-02T08:45:04.000000 |
| cinder-backup    | controller3                               | nova | enabled | up    | 2019-05-02T08:45:00.000000 |
| cinder-scheduler | controller1-cinder-api-container-729fc731 | nova | enabled | up    | 2019-05-02T08:44:59.000000 |
| cinder-volume    | controller1@RBD                           | nova | enabled | up    | 2019-05-02T08:44:59.000000 |
| cinder-volume    | controller2@RBD                           | nova | enabled | up    | 2019-05-02T08:44:58.000000 |
| cinder-scheduler | controller3-cinder-api-container-d4b77c12 | nova | enabled | up    | 2019-05-02T08:45:05.000000 |
| cinder-scheduler | controller2-cinder-api-container-42579f3d | nova | enabled | up    | 2019-05-02T08:45:02.000000 |
+------------------+-------------------------------------------+------+---------+-------+----------------------------+

To reproduce the issue I do the following steps:

I have uploaded an image named image-bionic with the following ID and direct_url:

id: ceea8588-a95c-40b2-a7e2-310413725b8d
direct_url: rbd://956c320d-b32f-4851-a5e7-2e7728df33b1/images/ceea8588-a95c-40b2-a7e2-310413725b8d/snap
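
Those two fields can be read back with the openstack client (direct_url is only exposed when show_image_direct_url is enabled in glance-api.conf, which it apparently is here):

# openstack image show ceea8588-a95c-40b2-a7e2-310413725b8d -c id -c direct_url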

1- Stop cinder-volume on controller1 and controller3
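
On this openstack-ansible deployment that means stopping the systemd unit on both nodes; the unit name may differ per deployment, but roughly:

root@controller1:~# systemctl stop cinder-volume
root@controller3:~# systemctl stop cinder-volume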

2- Create a volume from image-bionic and name it bionic-controller2

# cinder create --image ceea8588-a95c-40b2-a7e2-310413725b8d --name bionic-controller2 30

This creates volume ID: 7fe4a203-13db-4369-bdfe-fd9f5481b8a9

3- Create an image from the volume bionic-controller2 and name it bionic-controller2-image

# cinder upload-to-image --force True --container-format bare --disk-format raw 7fe4a203-13db-4369-bdfe-fd9f5481b8a9 bionic-controller2-image

This creates image ID: 2f71e414-cba5-475f-a7bb-f2c10755878c
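
Its status and location can be verified the same way as before:

# openstack image show 2f71e414-cba5-475f-a7bb-f2c10755878c -c status -c direct_url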

4- Create a volume from the image bionic-controller2-image

# cinder create --image 2f71e414-cba5-475f-a7bb-f2c10755878c --name bionic-controller2-image--bionic-controller2 30

5- Stop cinder-volume on controller2 and start cinder-volume on controller3

6- Create an image from bionic-controller2; the image gets stuck in status "uploading" until controller2 is back up.

# cinder upload-to-image --force True --container-format bare --disk-format raw 7fe4a203-13db-4369-bdfe-fd9f5481b8a9 bionic-controller3-image

7- Create a volume from bionic-controller2-image; the volume ends up in an error state.

# cinder create --image 2f71e414-cba5-475f-a7bb-f2c10755878c --name bionic-controller2-image--bionic-controller3 30

systemctl status cinder-volume on controller3 shows that it wants to do an rbd import, but I thought it should clone the image instead.
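
To illustrate what I mean, at the rbd level the expected fast path is a copy-on-write clone of the Glance snapshot, roughly (volume-<uuid> stands for Cinder's volume naming convention; the pool names are taken from the direct_url above):

# rbd clone images/ceea8588-a95c-40b2-a7e2-310413725b8d@snap volumes/volume-<uuid>

whereas the import fallback downloads the image first and then writes it back:

# rbd import <downloaded-image-file> volumes/volume-<uuid>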

8- If I create a volume from image-bionic and then create a new image from that volume, that succeeds.

In /etc/cinder/cinder.conf I have the setting:

image_upload_use_cinder_backend = True
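
For context, that setting sits in the backend section, which looks roughly like this (the option names are real RBD driver options, but the values shown here are a sketch rather than a verbatim copy of our config):

[RBD]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
image_upload_use_cinder_backend = True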

Is this setting related, or do we have a different issue?

Edit 2:

I deleted the old volume, so I first needed to create a new one:

root@controller3-utility-container-db19e661:~# cinder create --image ceea8588-a95c-40b2-a7e2-310413725b8d --name bionic-controller2 30
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2019-05-02T14:14:21.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | cc5692ec-fb7f-4848-b45f-943131a08254 |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | bionic-controller2                   |
| os-vol-host-attr:host          | None                                 |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | 19b2a69ef0504e858e4cccd84201a364     |
| replication_status             | None                                 |
| size                           | 30                                   |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | creating                             |
| updated_at                     | None                                 |
| user_id                        | c11d00fd2bcc40019b744d1cef81268b     |
| volume_type                    | RBD                                  |
+--------------------------------+--------------------------------------+

root@controller3-utility-container-db19e661:~# openstack volume show cc5692ec-fb7f-4848-b45f-943131a08254
+--------------------------------+--------------------------------------+
| Field                          | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | true                                 |
| consistencygroup_id            | None                                 |
| created_at                     | 2019-05-02T14:14 ...                 |
(output truncated)
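
The field that matters for the discussion below is os-vol-host-attr:host; it can also be queried on its own with column selection:

# openstack volume show cc5692ec-fb7f-4848-b45f-943131a08254 -c os-vol-host-attr:host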

Comments

Could you please reformat your question and use the code tags (there's a button for that)? Are you using just three control nodes, or are they in an HA cluster? Could you also add the output of openstack volume service list to the question?

eblock (2019-05-02 03:26:14 -0600)

I just updated the question

M. Piscaer (2019-05-02 06:33:10 -0600)

Please paste openstack volume show 7fe4a203-13db-4369-bdfe-fd9f5481b8a9. I'm not sure if this is actually the reason, but a volume is linked to a specific backend host, in your case controller2@RBD. If controller2 is down, the volume can't be used, even though there is really only one backend (Ceph) behind all of them.

eblock (2019-05-02 08:29:53 -0600)

I added the openstack volume show output.

M. Piscaer (2019-05-02 18:36:27 -0600)

I'm not very familiar with a correct HA setup, but I would expect only one cinder backend, pointing to the VIP instead of three different control nodes. Can you show which guide for HA cinder deployment you used?

eblock (2019-05-03 02:08:28 -0600)

1 answer


answered 2019-05-20 01:58:02 -0600

M. Piscaer

On the cinder-volume and cinder-backup nodes we set the following variable in cinder.conf: host = ceph-backend.
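
In cinder.conf that presumably means the [DEFAULT] section, roughly:

[DEFAULT]
host = ceph-backend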

After that we restarted the Cinder services and removed the old cinder-volume and cinder-backup service entries:

/openstack/venvs/cinder-18.1.4/bin/cinder-manage service remove cinder-backup controller1
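
The same removal is presumably needed for the other stale entries from the old service list, along the lines of:

/openstack/venvs/cinder-18.1.4/bin/cinder-manage service remove cinder-backup controller2
/openstack/venvs/cinder-18.1.4/bin/cinder-manage service remove cinder-backup controller3
/openstack/venvs/cinder-18.1.4/bin/cinder-manage service remove cinder-volume controller1@RBD
/openstack/venvs/cinder-18.1.4/bin/cinder-manage service remove cinder-volume controller2@RBD
/openstack/venvs/cinder-18.1.4/bin/cinder-manage service remove cinder-volume controller3@RBD

Volumes that still reference one of the old hosts can, as far as I know, be repointed with cinder-manage volume update_host --currenthost <old-host> --newhost <new-host>; treat that as a suggestion rather than something we verified.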

This will result in:

# openstack volume service list
+------------------+-------------------------------------------+------+---------+-------+----------------------------+
| Binary           | Host                                      | Zone | Status  | State | Updated At                 |
+------------------+-------------------------------------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller1-cinder-api-container-729fc731 | nova | enabled | up    | 2019-05-20T06:50:13.000000 |
| cinder-scheduler | controller3-cinder-api-container-d4b77c12 | nova | enabled | up    | 2019-05-20T06:50:13.000000 |
| cinder-scheduler | controller2-cinder-api-container-42579f3d | nova | enabled | up    | 2019-05-20T06:50:07.000000 |
| cinder-backup    | ceph-backend                              | nova | enabled | up    | 2019-05-20T06:50:12.000000 |
| cinder-volume    | ceph-backend@RBD                          | nova | enabled | up    | 2019-05-20T06:50:12.000000 |
+------------------+-------------------------------------------+------+---------+-------+----------------------------+
