By default, Horizon uses a volume as the root file system. You can, however, configure Horizon so that the default is not a volume but ephemeral storage backed by a file, logical volume, or RBD on the compute node. You can also change the default by clicking a switch when launching the instance.
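For reference (hedged; check your Horizon release), the default position of that switch is controlled by the `LAUNCH_INSTANCE_DEFAULTS` dictionary in Horizon's `local_settings.py`:

```python
# local_settings.py (Horizon). A sketch: if create_volume is False, the
# "Create New Volume" switch in the launch dialog defaults to "No", so
# instances boot from ephemeral storage on the compute node instead of
# a Cinder volume. Users can still flip the switch per instance.
LAUNCH_INSTANCE_DEFAULTS = {
    'create_volume': False,
}
```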

If you use the default, /dev/vda or /dev/sda is indeed created as a block storage volume. In this case, there are no regular files on the compute node that implement the VM’s root device. No compute node storage is used; it’s 100% volume.

Whether it’s cached on the compute node depends on the volume driver and perhaps the hypervisor. Many drivers connect the compute node to the volume backend with iSCSI, for example. In this case, I believe caching takes place on the compute node, unless the hypervisor opens the device file with the O_DIRECT flag. I don’t know what hypervisors usually do and whether that’s configurable.
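For illustration, with KVM/QEMU the effective cache mode is visible in the libvirt domain XML for the disk; `cache='none'` tells QEMU to open the backing device with O_DIRECT, bypassing the compute node’s page cache (a sketch of a typical iSCSI-attached volume disk element; the source path is illustrative):

```xml
<!-- Excerpt of a libvirt domain XML for a volume attached over iSCSI.
     cache='none' means the device is opened with O_DIRECT, so reads and
     writes bypass the compute node's page cache. -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/disk/by-path/...'/>   <!-- illustrative path -->
  <target dev='vda' bus='virtio'/>
</disk>
```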

EDIT: If you use a libvirt hypervisor, it is configurable with disk_cachemodes in nova.conf.
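For example, disk_cachemodes lives in the [libvirt] section and takes one entry per backing type (verify the available modes against your Nova release):

```ini
# nova.conf on the compute node (libvirt driver).
# Format: <source type>=<cache mode>. 'none' opens the backing store
# with O_DIRECT (no compute-node page cache); 'writeback' allows the
# compute node to cache.
[libvirt]
disk_cachemodes = file=writeback,block=none,network=writeback
```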

The RBD driver, on the other hand, connects the VM to the Ceph cluster using its own protocol, which probably doesn’t use the compute node’s cache.

Of course, the VM has its own cache.