openstack tries to use cache=none when using glusterfs, when it should fall back to writethrough
Hi all,
I noticed in the source code for LibvirtDriver that there is already a statement to use cache=writethrough for nova-compute hosts backed by GlusterFS. For some reason, my configuration isn't triggering it, and the resulting cache=none setting causes KVM to fail to start the VM.
root@lab1:/var/lib/nova/instances/648d9178-3099-466e-861e-0476b3e0aa24# cat libvirt.xml
<domain type="kvm">
<uuid>648d9178-3099-466e-861e-0476b3e0aa24</uuid>
<name>instance-0000002c</name>
<memory>2097152</memory>
<vcpu>1</vcpu>
<sysinfo type="smbios">
<system>
<entry name="manufacturer">OpenStack Foundation</entry>
<entry name="product">OpenStack Nova</entry>
<entry name="version">2013.1</entry>
<entry name="serial">0bd50aa7-214c-fe5b-662c-0025902d7f96</entry>
<entry name="uuid">648d9178-3099-466e-861e-0476b3e0aa24</entry>
</system>
</sysinfo>
<os>
<type>hvm</type>
<boot dev="hd"/>
<smbios mode="sysinfo"/>
</os>
<features>
<acpi/>
<apic/>
</features>
<clock offset="utc">
<timer name="pit" tickpolicy="delay"/>
<timer name="rtc" tickpolicy="catchup"/>
</clock>
<cpu mode="host-model" match="exact"/>
<devices>
<disk type="file" device="cdrom">
<driver name="qemu" type="qcow2" cache="none"/>
<source file="/var/lib/nova/instances/648d9178-3099-466e-861e-0476b3e0aa24/disk"/>
<target bus="ide" dev="hda"/>
</disk>
<disk type="file" device="disk">
<driver name="qemu" type="raw" cache="none"/>
<source file="/var/lib/nova/mnt/6408bb2f62fd5c6a9277a4fcfd55f3a6/volume-4bc652a3-97ea-44a2-b12a-1e38993010a3"/>
<target bus="virtio" dev="vda"/>
<serial>4bc652a3-97ea-44a2-b12a-1e38993010a3</serial>
</disk>
<interface type="bridge">
<mac address="fa:16:3e:64:a2:0a"/>
<model type="virtio"/>
<source bridge="br100"/>
<filterref filter="nova-instance-instance-0000002c-fa163e64a20a"/>
</interface>
<serial type="file">
<source path="/var/lib/nova/instances/648d9178-3099-466e-861e-0476b3e0aa24/console.log"/>
</serial>
<serial type="pty"/>
<input type="tablet" bus="usb"/>
<graphics type="vnc" autoport="yes" keymap="en-us" listen="10.16.201.1"/>
</devices>
</domain>
Here is what happens when I manually try to start the VM using that XML file:
root@lab1:/var/lib/nova/instances/648d9178-3099-466e-861e-0476b3e0aa24# virsh define libvirt.xml
Domain instance-0000002c defined from libvirt.xml
root@lab1:/var/lib/nova/instances/648d9178-3099-466e-861e-0476b3e0aa24# virsh start instance-0000002c
error: Failed to start domain instance-0000002c
error: Unable to read from monitor: Connection reset by peer
After I change it to writethrough, it works fine:
root@lab1:/var/lib/nova/instances/648d9178-3099-466e-861e-0476b3e0aa24# sed -i 's/cache="none"/cache="writethrough"/g' libvirt.xml
root@lab1:/var/lib/nova/instances/648d9178-3099-466e-861e-0476b3e0aa24# virsh define libvirt.xml
Domain instance-0000002c defined from libvirt.xml
root@lab1:/var/lib/nova/instances/648d9178-3099-466e-861e-0476b3e0aa24# virsh start instance-0000002c
Domain instance-0000002c started
Does anyone know why it isn't detecting that I'm using GlusterFS, even though the source code clearly says it should?
/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py
    @property
    def disk_cachemode(self):
        if self._disk_cachemode is None:
            # We prefer 'none' for consistent performance, host crash
            # safety & migration correctness by avoiding host page cache.
            # Some filesystems (eg GlusterFS via FUSE) don't support
            # O_DIRECT though. For those we fallback to 'writethrough'
            # which gives host crash safety, and is safe for migration
            # provided the filesystem is cache coherent (cluster filesystems
            # typically are, but things like NFS are not).
            self._disk_cachemode = "none"
            if not self._supports_direct_io(CONF.instances_path):
                self._disk_cachemode = "writethrough"
        return self._disk_cachemode
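The fallback hinges on `_supports_direct_io(CONF.instances_path)`: Nova probes whether the instances path supports O_DIRECT opens, and GlusterFS over FUSE typically does not, returning EINVAL. Note that the check runs against CONF.instances_path only, so if your Gluster mount is elsewhere (e.g. only the Cinder volume mount under /var/lib/nova/mnt/...) the probe can still succeed and cache=none gets used. A minimal standalone sketch of that kind of probe (the helper name and error handling here are my own, not Nova's exact code, which also performs an aligned write):

```python
import errno
import os
import tempfile

def supports_direct_io(dirpath):
    """Return True if files under dirpath can be opened with O_DIRECT.

    FUSE filesystems such as GlusterFS commonly reject O_DIRECT with
    EINVAL, which is what pushes Nova to cache="writethrough".
    """
    testfile = os.path.join(dirpath, ".directio_test")
    try:
        fd = os.open(testfile, os.O_CREAT | os.O_WRONLY | os.O_DIRECT)
        os.close(fd)
        return True
    except OSError as e:
        if e.errno == errno.EINVAL:
            # Filesystem does not support O_DIRECT
            return False
        raise
    finally:
        try:
            os.unlink(testfile)
        except OSError:
            pass
```

Running a probe like this against both /var/lib/nova/instances and the Gluster mount point should show which path Nova is actually basing its decision on.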