
Instance resizing fails when copying swap volume - config error?

asked 2018-02-25 05:17:05 -0600 by bringha

Hi there,

I am running Pike in a two-compute-node test installation with Cinder volumes and LVM, and resizing instances to a different flavor fails.

When the instance is created, a cinder volume for swap is created at /dev/cinder-volumes/<uuid>_disk.swap. No volume or disk information is created in /var/lib/nova/instances, neither for the root disk nor for swap.
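To make this concrete, a minimal check of where the disks actually end up (just a sketch; the paths and the UUID are taken from my installation and the logs below):

import glob
import os

uuid = '6a1fb13e-9305-4854-b7f6-02d1c5ae3a4b'

# the swap disk exists as a logical volume in the cinder-volumes volume group ...
print(os.path.exists('/dev/cinder-volumes/%s_disk.swap' % uuid))   # True on my setup

# ... while the instance directory contains no disk files at all
print(glob.glob('/var/lib/nova/instances/%s/*disk*' % uuid))       # [] on my setup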

When I then try to resize the instance via Horizon, it fails with the following error messages:

2018-02-24 16:05:23.806 1082 DEBUG oslo_concurrency.processutils [req-1cfc5de1-765a-4bf5-b1f4-7c7b550acbde 8287f31cfceb4d2bb0c4fde0e138ed6f 18e26500a539445a9567b1b5790ffb64 - a5b6d92142924b69b18bbde404da80da a5b6d92142924b69b18bbde404da80da] CMD "rm -rf /var/lib/nova/instances/6a1fb13e-9305-4854-b7f6-02d1c5ae3a4b" returned: 0 in 0.019s execute /usr/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:385
2018-02-24 16:05:23.806 1082 DEBUG oslo_concurrency.processutils [req-1cfc5de1-765a-4bf5-b1f4-7c7b550acbde 8287f31cfceb4d2bb0c4fde0e138ed6f 18e26500a539445a9567b1b5790ffb64 - a5b6d92142924b69b18bbde404da80da a5b6d92142924b69b18bbde404da80da] Running cmd (subprocess): mv /var/lib/nova/instances/6a1fb13e-9305-4854-b7f6-02d1c5ae3a4b_resize /var/lib/nova/instances/6a1fb13e-9305-4854-b7f6-02d1c5ae3a4b execute /usr/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:355
2018-02-24 16:05:23.825 1082 DEBUG oslo_concurrency.processutils [req-1cfc5de1-765a-4bf5-b1f4-7c7b550acbde 8287f31cfceb4d2bb0c4fde0e138ed6f 18e26500a539445a9567b1b5790ffb64 - a5b6d92142924b69b18bbde404da80da a5b6d92142924b69b18bbde404da80da] CMD "mv /var/lib/nova/instances/6a1fb13e-9305-4854-b7f6-02d1c5ae3a4b_resize /var/lib/nova/instances/6a1fb13e-9305-4854-b7f6-02d1c5ae3a4b" returned: 0 in 0.019s execute /usr/lib/python2.7/dist-packages/oslo_concurrency/processutils.py:385
2018-02-24 16:05:23.826 1082 ERROR nova.compute.manager [req-1cfc5de1-765a-4bf5-b1f4-7c7b550acbde 8287f31cfceb4d2bb0c4fde0e138ed6f 18e26500a539445a9567b1b5790ffb64 - a5b6d92142924b69b18bbde404da80da a5b6d92142924b69b18bbde404da80da] [instance: 6a1fb13e-9305-4854-b7f6-02d1c5ae3a4b] Setting instance vm_state to ERROR: ProcessExecutionError: Unexpected error while running command.
Command: cp -r /var/lib/nova/instances/6a1fb13e-9305-4854-b7f6-02d1c5ae3a4b_resize/6a1fb13e-9305-4854-b7f6-02d1c5ae3a4b_disk.swap /dev/cinder-volumes/6a1fb13e-9305-4854-b7f6-02d1c5ae3a4b_disk.swap
Exit code: 1
Stdout: u''
Stderr: "cp: Aufruf von stat f\xc3\xbcr '/var/lib/nova/instances/6a1fb13e-9305-4854-b7f6-02d1c5ae3a4b_resize/6a1fb13e-9305-4854-b7f6-02d1c5ae3a4b_disk.swap' nicht m\xc3\xb6glich: Datei oder Verzeichnis nicht gefunden\n"
2018-02-24 16:05:23.826 1082 ERROR nova.compute.manager [instance: 6a1fb13e-9305-4854-b7f6-02d1c5ae3a4b] Traceback (most recent call last):
2018-02-24 16:05:23.826 1082 ERROR nova.compute.manager [instance: 6a1fb13e-9305-4854-b7f6-02d1c5ae3a4b]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 6864, in _error_out_instance_on_exception
2018-02-24 16:05:23.826 1082 ERROR nova.compute.manager [instance: 6a1fb13e-9305-4854-b7f6-02d1c5ae3a4b]     yield

The German stderr means: "cp: cannot stat '/var/lib/nova/instances/6a1fb13e-9305-4854-b7f6-02d1c5ae3a4b_resize/6a1fb13e-9305-4854-b7f6-02d1c5ae3a4b_disk.swap': No such file or directory".

In other words: Nova looks for the swap disk somewhere in /var/lib/nova/instances instead of in the LVM volume group used for Cinder. Since the instance is stopped anyway, why does Nova try to copy the swap volume at all instead of recreating it on the new node when the instance is started?

I experimented with the Nova config options $state_path and $instances_path and the options underneath them, but this has no effect; the error persists.

What makes nova-compute search for the volumes in /var/lib/nova? How can I configure Nova to use the Cinder volume group and the swap volume assigned in the cinder-volumes volume group?

Any help is very much appreciated

I am looking forward to your reply

Best regards

Br


Comments

Hi, which LVM volume group do you use for Cinder? Also, can you show your cinder.conf file? Which storage device are you using?

TijoV ( 2018-02-26 07:01:21 -0600 )

2 answers


answered 2018-02-27 04:26:08 -0600 by TijoV

Please take a look at this link; it has information on what you are looking for (what makes nova-compute search for the volumes in /var/lib/nova?).

How can I configure Nova to use the Cinder volume group and the swap volume assigned in the cinder-volumes volume group?

Your cinder.conf looks fine.


Comments

Thanks TijoV for the link. Unfortunately it does not contain any detailed information on how to modify the Nova/Cinder configuration so that swap volumes are handled in the desired way. I am still stuck with instance resizing.

bringha ( 2018-03-02 05:19:48 -0600 )

answered 2018-02-26 14:52:38 -0600 by bringha (updated 2018-03-18 08:15:36 -0600)

Hi there,

my cinder.conf looks like this:

[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_config = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
auth_strategy = keystone
transport_url = rabbit://openstack:blabla@controller
my_ip = 10.0.88.31
enabled_backends = lvm
glance_api_servers = http://controller:9292

[database]
connection = mysql+pymysql://cinder:blabla@controller/cinder

#[oslo_messaging_rabbit]
#rabbit_host = controller
#rabbit_userid = openstack
#rabbit_password = blabla

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = blabla

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = tgtadm
lvm_type = default

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

EDIT: After some further testing, the behavior obviously depends on the config items in the [libvirt] section of nova.conf. In my setup, each compute host also runs a cinder-volume service based on LVM in parallel. My [libvirt] section in nova.conf looks like this:

[libvirt]
images_type = lvm
images_volume_group = cinder-volumes 
live_migration_tunnelled = True

i.e. the Cinder volume group is also used for Nova ephemeral disks. This obviously causes Nova to create the swap disk as an LVM volume as well. If this assumption is right, why does Nova still search for the swap volume in /var/lib/nova/instances/<uuid>? I have not found a way to change that yet.
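For illustration, this is how I understand the swap disk path is put together when images_type = lvm (an assumption on my side, not taken from the Nova source, but it matches the path in the error above):

images_volume_group = 'cinder-volumes'                    # from the [libvirt] section above
instance_uuid = '6a1fb13e-9305-4854-b7f6-02d1c5ae3a4b'    # the instance from the logs

# With images_type = lvm the swap disk becomes a logical volume named
# <instance_uuid>_disk.swap inside images_volume_group:
lv_path = '/dev/%s/%s_disk.swap' % (images_volume_group, instance_uuid)
print(lv_path)  # /dev/cinder-volumes/6a1fb13e-9305-4854-b7f6-02d1c5ae3a4b_disk.swap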

EDIT:

Further retesting showed that this only solves the problem for flavors without swap at all. (I overlooked that the chosen instance ran on a flavor without swap when testing - sorry.)

For instances running on a flavor with swap, the problem still persists.

The relevant Python code in /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py is:

for info in disk_info:
    # assume inst_base == dirname(info['path'])
    img_path = info['path']
    fname = os.path.basename(img_path)
    from_path = os.path.join(inst_base_resize, fname)

    # We will not copy over the swap disk here, and rely on
    # finish_migration to re-create it for us. This is ok because
    # the OS is shut down, and as recreating a swap disk is very
    # cheap it is more efficient than copying either locally or
    # over the network. This also means we don't have to resize it.
    if fname == 'disk.swap':
        continue

    compression = info['type'] not in NO_COMPRESSION_TYPES
    libvirt_utils.copy_image(from_path, img_path, host=dest,
                             on_execute=on_execute,
                             on_completion=on_completion,
                             compression=compression)

Obviously, the detection of the swap disk via its file name (fname) does not work when swap lives in the cinder-volumes volume group: the basename of the LVM path is <uuid>_disk.swap rather than disk.swap, so the 'disk.swap' check never matches and Nova tries to copy a file that does not exist in the resize directory.
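A minimal demonstration of that mismatch (just a sketch using the paths from the logs above, not the actual Nova code):

import os

# file backend: the swap image lives under the instance directory, so the
# basename is exactly 'disk.swap' and the skip condition above matches
file_path = '/var/lib/nova/instances/6a1fb13e-9305-4854-b7f6-02d1c5ae3a4b/disk.swap'
print(os.path.basename(file_path) == 'disk.swap')   # True  -> swap disk is not copied

# lvm backend: the disk path is the logical volume, whose name carries the
# instance UUID prefix, so the condition never matches and Nova falls through
# to copy_image - which then fails because no such file exists in *_resize
lvm_path = '/dev/cinder-volumes/6a1fb13e-9305-4854-b7f6-02d1c5ae3a4b_disk.swap'
print(os.path.basename(lvm_path) == 'disk.swap')    # False -> the cp error from the question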

