An OpenStack environment will usually have many compute nodes. To avoid unnecessary network traffic, the first time an image is used on a compute node it is fetched from Glance into the /var/lib/nova/instances/_base directory on that node. The boot disk for your instance is then created as a qcow2 disk that uses the image in _base as a backing file. This allows even large disks to be provisioned very quickly once the image has been downloaded to the compute node, and it means the image only needs to be fetched once per compute node rather than once per instance.
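
To see how this works, here is roughly what such an overlay looks like if you create one by hand with qemu-img (the image hash and instance UUID below are placeholders, not values from a real system):

# qemu-img create -f qcow2 -b /var/lib/nova/instances/_base/<image-hash> \
    /var/lib/nova/instances/<instance-uuid>/disk 20G

The overlay starts out nearly empty: blocks the instance writes are stored in the overlay file, while reads of blocks it has never touched fall through to the shared base image in _base.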

You can use the qemu-img info command to examine a disk file and see what it is using as a backing file. For example, on my system:

# qemu-img info /var/lib/nova/instances/0debe180-ac32-49a4-9caa-5eebe834eef5/disk 
image: /var/lib/nova/instances/0debe180-ac32-49a4-9caa-5eebe834eef5/disk
file format: qcow2
virtual size: 20G (21474836480 bytes)
disk size: 218M
cluster_size: 65536
backing file: /var/lib/nova/instances/_base/7d9817593f5a6f87fafc930737d8a55eb6932b7e

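You can also run qemu-img info on the file in _base itself; it sits at the root of the chain and has no backing file of its own (depending on your Nova configuration it may have been converted to raw when it was fetched from Glance):

# qemu-img info /var/lib/nova/instances/_base/7d9817593f5a6f87fafc930737d8a55eb6932b7e
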
The disk file is the boot disk for the instance; if the flavor includes ephemeral storage, a disk.local file in the same directory holds the "ephemeral" disk.
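
For reference, an instance directory on a libvirt/KVM compute node typically contains something like the following (the exact contents vary by release and by whether the flavor has ephemeral storage or a config drive; <instance-uuid> is a placeholder):

# ls /var/lib/nova/instances/<instance-uuid>/
console.log  disk  disk.local  libvirt.xml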

If you're running OpenStack in a KVM environment, you can see the devices actually attached to your instance with the virsh domblklist command. Run virsh list to get a list of instances:

# virsh list
 Id    Name                           State
----------------------------------------------------
 12    instance-0000003b              running
 13    instance-0000003c              running

Then run virsh domblklist on a specific instance, using either the domain ID or the name:

# virsh domblklist 12
Target     Source
------------------------------------------------
vda        /var/lib/nova/instances/0debe180-ac32-49a4-9caa-5eebe834eef5/disk

This confirms that the disk file is used as the boot image for that instance.
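
If you want to check the whole compute node at once, a small shell loop can print the backing file for every disk of every running instance. This is only a rough sketch: it assumes GNU xargs and that all of the disk sources are local files rather than, say, RBD volumes:

# for dom in $(virsh list --name); do
      echo "== $dom =="
      virsh domblklist "$dom" | awk 'NR > 2 && $2 ~ /^\// {print $2}' \
          | xargs -r -n1 qemu-img info | grep -E '^(image|backing file):'
  done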
