How to migrate/retype volumes with Cinder?

asked 2019-08-26 16:11:20 -0500

sacha-m

updated 2019-08-27 21:55:21 -0500

I'm trying to move a volume from Host A to Host B using Cinder. I'm not using Ceph or anything for cinder-backup (cinder-backup is not working, but cinder-volume is).

  • In host A I have several volume types: lvm-a-sdc, lvm-a-sdd, etc.
  • In host B I have these volume types: lvm-b-sdc, lvm-b-sdd, etc.
  • In hosts A and B I have these PVs: /dev/sdc, /dev/sdd, etc.
  • In host A I have these VGs: cinder-vol-a-sdc, cinder-vol-a-sdd, etc.
  • In host B I have these VGs: cinder-vol-b-sdc, cinder-vol-b-sdd, etc.
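Before retyping anything, it is worth confirming that the target volume group exists on Host B and has enough free space for the volume. A quick check, using the VG names listed above:

vgs cinder-vol-b-sdc cinder-vol-b-sdd
# VFree must be at least the size of the volume being moved (820 GB here)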

I linked the VGs to the volume types with:

openstack volume type create lvm-a-sdc

and:

openstack volume type set --property volume_backend_name=lvm-a-2 lvm-a-sdc
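The commands for the target host should be analogous. A sketch for the lvm-b-sdd type used below (the backend name lvm-b-2 is an assumption here, mirroring the lvm-a-2 naming):

openstack volume type create lvm-b-sdd
openstack volume type set --property volume_backend_name=lvm-b-2 lvm-b-sdd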

And finally in cinder.conf:

[lvm-a-2]
volume_group = cinder-vol-a-sdc
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = lvm-a-2
iscsi_helper = tgtadm
iscsi_protocol = iscsi

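The matching stanza belongs in the cinder.conf on Host B, and the backend also has to appear in enabled_backends there. A sketch under the naming assumed above:

[DEFAULT]
enabled_backends = lvm-b-2

[lvm-b-2]
volume_group = cinder-vol-b-sdd
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name = lvm-b-2
iscsi_helper = tgtadm
iscsi_protocol = iscsi
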
Now I have a volume on /dev/sdc of Host A (so its type is lvm-a-sdc), and I want it on /dev/sdd of Host B. I think what I have to do is retype it. So, in Horizon, as a user, I unmounted the volume inside the instance, then detached it from the instance, and then selected "Change Volume Type":

  • I selected type lvm-b-sdd
  • I selected Migration Policy "On Demand", and also tried "None"

In both cases, the same result: nothing.
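Horizon's "Change Volume Type" maps to the retype API, so the same operation can be driven from the CLI. A sketch using the volume ID from below ("on-demand" is what allows Cinder to migrate the data to the new backend):

openstack volume set --type lvm-b-sdd --retype-policy on-demand f0d03bf9-0d03-4ecf-bc09-a28d794861d3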

# cinder show f0d03bf9-0d03-4ecf-bc09-a28d794861d3
+--------------------------------+------------------------------------------------+
| Property                       | Value                                          |
+--------------------------------+------------------------------------------------+
| attached_servers               | []                                             |
| attachment_ids                 | []                                             |
| availability_zone              | nova                                           |
| bootable                       | false                                          |
| consistencygroup_id            | None                                           |
| created_at                     | 2019-07-09T18:49:50.000000                     |
| description                    |                                                |
| encrypted                      | False                                          |
| group_id                       | None                                           |
| id                             | f0d03bf9-0d03-4ecf-bc09-a28d794861d3           |
| metadata                       |                                                |
| migration_status               | error                                          |
| multiattach                    | False                                          |
| name                           | vol_es02                                       |
| os-vol-host-attr:host          | host1.domain.local@lvm-a-2#lvm-a-2             |
| os-vol-mig-status-attr:migstat | error                                          |
| os-vol-mig-status-attr:name_id | None                                           |
| os-vol-tenant-attr:tenant_id   | 8e6ef80317344d8699d2862a74a2f567               |
| provider_id                    | None                                           |
| replication_status             | None                                           |
| service_uuid                   | ea45fd29-8446-433a-b3b5-b59328de46af           |
| shared_targets                 | False                                          |
| size                           | 820                                            |
| snapshot_id                    | None                                           |
| source_volid                   | None                                           |
| status                         | retyping                                       |
| updated_at                     | 2019-08-26T18:30:33.000000                     |
| user_id                        | dfff51be035147dd9a0d60f6ad01312f               |
| volume_type                    | lvm-a-sdc                                      |
+--------------------------------+------------------------------------------------+

What do I have to do to make it work?

UPDATE: I have these logs from cinder-volume.log on Host B:

2019-08-27 22:29:14.798 37 INFO cinder.volume.flows.manager.create_volume [req-0d873fce-e167-48e8-a5ce-12903634d96b dfff51be035147dd9a0d60f6ad01312f 8e6ef80317344d8699d2862a74a2f567 - default default] Volume 7afb3752-c557-435d-936d-c9aedf848b27: being created as raw with specification: {'status': u'creating', 'volume_size': 820, 'volume_name': u'volume-7afb3752-c557-435d-936d-c9aedf848b27'}
2019-08-27 22:29:15.705 37 INFO cinder.volume.flows.manager.create_volume [req-0d873fce-e167-48e8-a5ce-12903634d96b dfff51be035147dd9a0d60f6ad01312f 8e6ef80317344d8699d2862a74a2f567 - default default] Volume volume-7afb3752-c557-435d-936d-c9aedf848b27 (7afb3752-c557-435d-936d-c9aedf848b27): created successfully
2019-08-27 22:29:15.772 37 INFO cinder.volume.manager [req-0d873fce-e167-48e8-a5ce-12903634d96b dfff51be035147dd9a0d60f6ad01312f 8e6ef80317344d8699d2862a74a2f567 - default default] Created volume successfully.
2019-08-27 22:29:25.810 37 INFO cinder.volume.manager [req-0d873fce-e167-48e8-a5ce-12903634d96b dfff51be035147dd9a0d60f6ad01312f 8e6ef80317344d8699d2862a74a2f567 - default default] Initialize volume connection completed successfully.
2019-08-27 22:29:54.871 37 INFO cinder.volume.targets.tgt [req-0d873fce-e167-48e8-a5ce-12903634d96b dfff51be035147dd9a0d60f6ad01312f 8e6ef80317344d8699d2862a74a2f567 - default default] Removing iscsi_target for Volume ID: 7afb3752-c557-435d-936d-c9aedf848b27
2019-08-27 22:29:59.702 37 INFO cinder.volume.drivers.lvm [req-0d873fce-e167-48e8-a5ce-12903634d96b dfff51be035147dd9a0d60f6ad01312f 8e6ef80317344d8699d2862a74a2f567 - default default] Successfully deleted volume: 7afb3752-c557-435d-936d-c9aedf848b27
2019-08-27 22:29:59.758 37 INFO cinder.volume.manager [req-0d873fce-e167-48e8-a5ce-12903634d96b dfff51be035147dd9a0d60f6ad01312f 8e6ef80317344d8699d2862a74a2f567 - default default] Deleted volume successfully.

And here is the same thing, but with DEBUG logging (starting at the "Initialize volume connection completed successfully" part of the log):

2019-08-27 22:48:53.945 37 INFO cinder.volume.manager [req-d82ac28a-30d1-443c-8324-757d8ca93b35 dfff51be035147dd9a0d60f6ad01312f 8e6ef80317344d8699d2862a74a2f567 ...
(log truncated)
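For reference, DEBUG output like the above comes from enabling debug logging in the cinder.conf of the storage host and restarting cinder-volume; this is the standard oslo.log option:

[DEFAULT]
debug = True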

Comments

While I have never migrated volumes, I would

  1. try the same action on the command line, also with --debug to see the API calls and their results (see the sketch below)
  2. look into the Cinder API log, and the Cinder Volume logs on both storage hosts
Bernd Bausch ( 2019-08-26 20:13:17 -0500 )
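For reference, running the retype from the command line with client-side debugging, as suggested above, might look like this (a sketch; the volume ID and type name are taken from the question):

cinder --debug retype --migration-policy on-demand f0d03bf9-0d03-4ecf-bc09-a28d794861d3 lvm-b-sdd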

@Bernd I'm starting to think Cinder doesn't like the fact that the volume group (I don't know if the original or the target one) is not named "cinder-volumes" (I have "cinder-vol-b-sdc", etc.). Please look at my update.

sacha-m ( 2019-08-27 21:56:08 -0500 )

I also created a 1 GB volume and moved it (successfully) from Host A to Host B. After the "Initialize volume connection completed successfully" log line, another one appears saying "Terminate volume connection completed successfully".

sacha-m ( 2019-08-27 22:10:01 -0500 )

I checked the documentation, and it seems that to migrate a volume you should use openstack volume migrate.

Bernd Bausch ( 2019-08-27 23:07:52 -0500 )
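For an admin-initiated migration, the command takes the target in host@backend#pool form, similar to the os-vol-host-attr:host value shown above. A sketch, where host2.domain.local and lvm-b-2 are assumed names for Host B:

openstack volume migrate --host host2.domain.local@lvm-b-2#lvm-b-2 f0d03bf9-0d03-4ecf-bc09-a28d794861d3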

On the other hand, retyping seems to include migration when necessary.

Bernd Bausch ( 2019-08-27 23:15:22 -0500 )
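Either way, the outcome can be verified afterwards from the fields already shown in the question, e.g.:

cinder show f0d03bf9-0d03-4ecf-bc09-a28d794861d3 | grep -E 'host|migstat|volume_type'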