Instance resizing fails when copying swap volume

Hi there,

I am running Pike in a two-compute-node test installation with Cinder volumes backed by LVM, and resizing instances to a different flavor fails.

When an instance is created, a cinder volume for swap is created at /dev/cinder-volumes/<uuid>_disk.swap. No volume or disk files are created in /var/lib/nova/instances, neither for the root disk nor for swap.
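
For reference, this is how the layout looks on the compute node (volume group name as above; the UUID is that of the instance from the log below):

    # The swap disk exists as a logical volume in the cinder volume group:
    sudo lvs -o lv_name cinder-volumes
    # -> includes 6a1fb13e-9305-4854-b7f6-02d1c5ae3a4b_disk.swap

    # ...while the instance directory on the compute node contains no
    # disk or disk.swap files:
    ls -l /var/lib/nova/instances/6a1fb13e-9305-4854-b7f6-02d1c5ae3a4b/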

When I then try to resize the instance via Horizon, it fails with the following error messages:

2018-02-24 16:05:23.806 1082 DEBUG oslo_concurrency.processutils [req-1cfc5de1-765a-4bf5-b1f4-7c7b550acbde 8287f31cfceb4d2bb0c4fde0e138ed6f 18e26500a539445a9567b1b5790ffb64 - a5b6d92142924b69b18bbde404da80da a5b6d92142924b69b18bbde404da80da] CMD "rm -rf /var/lib/nova/instances/6a1fb13e-9305-4854-b7f6-02d1c5ae3a4b" returned: 0 in 0.019s execute /usr/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:385
2018-02-24 16:05:23.806 1082 DEBUG oslo_concurrency.processutils [req-1cfc5de1-765a-4bf5-b1f4-7c7b550acbde 8287f31cfceb4d2bb0c4fde0e138ed6f 18e26500a539445a9567b1b5790ffb64 - a5b6d92142924b69b18bbde404da80da a5b6d92142924b69b18bbde404da80da] Running cmd (subprocess): mv /var/lib/nova/instances/6a1fb13e-9305-4854-b7f6-02d1c5ae3a4b_resize /var/lib/nova/instances/6a1fb13e-9305-4854-b7f6-02d1c5ae3a4b execute /usr/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:355
2018-02-24 16:05:23.825 1082 DEBUG oslo_concurrency.processutils [req-1cfc5de1-765a-4bf5-b1f4-7c7b550acbde 8287f31cfceb4d2bb0c4fde0e138ed6f 18e26500a539445a9567b1b5790ffb64 - a5b6d92142924b69b18bbde404da80da a5b6d92142924b69b18bbde404da80da] CMD "mv /var/lib/nova/instances/6a1fb13e-9305-4854-b7f6-02d1c5ae3a4b_resize /var/lib/nova/instances/6a1fb13e-9305-4854-b7f6-02d1c5ae3a4b" returned: 0 in 0.019s execute /usr/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:385
2018-02-24 16:05:23.826 1082 ERROR nova.compute.manager [req-1cfc5de1-765a-4bf5-b1f4-7c7b550acbde 8287f31cfceb4d2bb0c4fde0e138ed6f 18e26500a539445a9567b1b5790ffb64 - a5b6d92142924b69b18bbde404da80da a5b6d92142924b69b18bbde404da80da] [instance: 6a1fb13e-9305-4854-b7f6-02d1c5ae3a4b] Setting instance vm_state to ERROR: ProcessExecutionError: Unexpected error while running command.
Command: cp -r /var/lib/nova/instances/6a1fb13e-9305-4854-b7f6-02d1c5ae3a4b_resize/6a1fb13e-9305-4854-b7f6-02d1c5ae3a4b_disk.swap /dev/cinder-volumes/6a1fb13e-9305-4854-b7f6-02d1c5ae3a4b_disk.swap
Exit code: 1
Stdout: u''
Stderr: "cp: Aufruf von stat f\xc3\xbcr '/var/lib/nova/instances/6a1fb13e-9305-4854-b7f6-02d1c5ae3a4b_resize/6a1fb13e-9305-4854-b7f6-02d1c5ae3a4b_disk.swap' nicht m\xc3\xb6glich: Datei oder Verzeichnis nicht gefunden\n"
2018-02-24 16:05:23.826 1082 ERROR nova.compute.manager [instance: 6a1fb13e-9305-4854-b7f6-02d1c5ae3a4b] Traceback (most recent call last):
2018-02-24 16:05:23.826 1082 ERROR nova.compute.manager [instance: 6a1fb13e-9305-4854-b7f6-02d1c5ae3a4b]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 6864, in _error_out_instance_on_exception
2018-02-24 16:05:23.826 1082 ERROR nova.compute.manager [instance: 6a1fb13e-9305-4854-b7f6-02d1c5ae3a4b]     yield

This means: Nova looks for the swap disk somewhere in /var/lib/nova/instances instead of in the Cinder LVM volume group. Since the instance is stopped anyway, why does Nova try to copy the swap volume at all, instead of recreating it on the new node when the instance is started?

I experimented with the nova config options $state_path and $instances_path and the options derived from them, but this had no effect; the error persists.
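
Besides those, the [libvirt] options images_type and images_volume_group seem to be what controls where the libvirt driver keeps instance disks. A quick way to inspect all four (assuming the config lives at /etc/nova/nova.conf):

    # Show the nova paths plus the libvirt image backend options
    # (lines may still be commented out if the defaults are in effect):
    grep -E '^#?(state_path|instances_path|images_type|images_volume_group)' /etc/nova/nova.conf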

What makes nova-compute look for the volumes in /var/lib/nova? How can I configure Nova so that it uses the swap volume assigned to the instance in the cinder-volumes volume group?
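
To make the question concrete: is something along these lines the intended way to point Nova at the volume group? This is only a sketch of what I imagine, assuming crudini is installed (option names taken from the nova.conf reference for the libvirt LVM image backend):

    # Hypothetical: back instance disks with LVM in the cinder-volumes
    # volume group, then restart the compute service to pick it up.
    sudo crudini --set /etc/nova/nova.conf libvirt images_type lvm
    sudo crudini --set /etc/nova/nova.conf libvirt images_volume_group cinder-volumes
    sudo systemctl restart nova-compute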

Any help is very much appreciated.

I am looking forward to your reply.

Best regards

Br