Hi there,

my cinder.conf looks like

[DEFAULT]
rootwrap_config = /etc/cinder/rootwrap.conf
api_paste_config = /etc/cinder/api-paste.ini
iscsi_helper = tgtadm
volume_name_template = volume-%s
volume_group = cinder-volumes
auth_strategy = keystone
state_path = /var/lib/cinder
lock_path = /var/lock/cinder
volumes_dir = /var/lib/cinder/volumes
transport_url = rabbit://openstack:blabla@controller
my_ip = 10.0.88.31
enabled_backends = lvm
glance_api_servers = http://controller:9292

[database]
connection = mysql+pymysql://cinder:blabla@controller/cinder

#[oslo_messaging_rabbit]
#rabbit_host = controller
#rabbit_userid = openstack
#rabbit_password = blabla

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = blabla

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = tgtadm
lvm_type = default

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
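
As a side note, duplicated or misplaced options are easy to overlook in a config file this long. A minimal sanity-check sketch using Python 3's standard configparser (my own ad-hoc check, not part of Cinder; oslo.config itself is more lenient and, as far as I know, simply lets the last occurrence of a duplicated option win):

import configparser

def check(path):
    # interpolation=None: values like volume-%s would otherwise trip
    # configparser's %-interpolation; strict=True (the default) raises
    # DuplicateOptionError on options repeated within a section
    parser = configparser.ConfigParser(interpolation=None, strict=True)
    try:
        parser.read(path)
    except (configparser.DuplicateOptionError,
            configparser.DuplicateSectionError) as exc:
        print('config problem: %s' % exc)
        return False
    for section in parser.sections():
        # note: the option count includes values inherited from [DEFAULT]
        print('[%s]: %d options' % (section, len(parser[section])))
    return True

if __name__ == '__main__':
    check('/etc/cinder/cinder.conf')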

EDIT: After some further testing, the behavior obviously depends on the config in the [libvirt] section of nova.conf. In my setup, each compute host in parallel also runs a cinder-volume service based on LVM. My [libvirt] section in nova.conf looks like

[libvirt]
images_type = lvm
images_volume_group = cinder-volumes 
live_migration_tunnelled = True

i.e. the Cinder volume group is also used for Nova ephemeral disks. This obviously causes Nova to also create swap as an LVM volume. If this assumption is right, why is Nova then still searching for the swap volume in /var/lib/nova/instances/<uuid>? I still haven't found a way to change that ....
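
Since Nova and Cinder now share the same volume group, it helps to see which logical volumes each service actually created. A quick sketch (the <uuid>_disk.swap naming for Nova's LVM-backed disks is my assumption; check what lvs actually prints on your host):

import subprocess

# list LV names in the shared VG; with images_type = lvm both Cinder
# volumes and Nova instance disks end up in cinder-volumes
out = subprocess.check_output(
    ['lvs', '--noheadings', '-o', 'lv_name', 'cinder-volumes'],
    universal_newlines=True)

for name in (line.strip() for line in out.splitlines()):
    if name.startswith('volume-'):      # volume_name_template above
        print('cinder volume: %s' % name)
    elif name.endswith('_disk.swap'):   # assumed Nova swap disk naming
        print('nova swap disk: %s' % name)
    else:
        print('other LV: %s' % name)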

EDIT AND SOLVED: The issue has been solved by adding the (pretty fundamental) line to nova.conf

compute_driver = libvirt.LibvirtDriver

This, in conjunction with the config in the [libvirt] section, did the trick ....
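
For reference, the nova.conf fragments from this post combined (nothing new here, just both pieces in one place):

[DEFAULT]
compute_driver = libvirt.LibvirtDriver

[libvirt]
images_type = lvm
images_volume_group = cinder-volumes
live_migration_tunnelled = True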

EDIT: Further retesting showed that this solves the problem only for flavors without swap at all. (I overlooked that the chosen instance ran on a flavor without swap when testing - sorry.)

For instances running on a flavor with swap, the problem still persists.

The relevant Python code in /usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py is

for info in disk_info:
    # assume inst_base == dirname(info['path'])
    img_path = info['path']
    fname = os.path.basename(img_path)
    from_path = os.path.join(inst_base_resize, fname)

    # We will not copy over the swap disk here, and rely on
    # finish_migration to re-create it for us. This is ok because
    # the OS is shut down, and as recreating a swap disk is very
    # cheap it is more efficient than copying either locally or
    # over the network. This also means we don't have to resize it.
    if fname == 'disk.swap':
        continue

    compression = info['type'] not in NO_COMPRESSION_TYPES
    libvirt_utils.copy_image(from_path, img_path, host=dest,
                             on_execute=on_execute,
                             on_completion=on_completion,
                             compression=compression)

Obviously, the entire recognition of the swap path (fname) is not working in the case of cinder-volumes for swap.
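
A minimal sketch of why that check can never match here, assuming Nova's LVM image backend exposes the swap disk as /dev/cinder-volumes/<uuid>_disk.swap (an assumption on my side, not confirmed):

import os.path

# with file-backed instances the swap disk's basename is exactly
# 'disk.swap'; with LVM-backed disks the LV name carries the instance
# uuid, so the fname == 'disk.swap' check in driver.py never matches.
# the '<uuid>' placeholders stand in for a real instance uuid.
file_backed = '/var/lib/nova/instances/<uuid>/disk.swap'
lvm_backed = '/dev/cinder-volumes/<uuid>_disk.swap'

for path in (file_backed, lvm_backed):
    fname = os.path.basename(path)
    print('%s -> %s, skipped as swap: %s'
          % (path, fname, fname == 'disk.swap'))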