
So this isn't weird at all; I'd even say it is expected behavior. I assume that your OpenStack installation is based on QEMU and that you are running the default configuration.

You start a fresh VM. At the very beginning it uses close to 0 MB of memory. During boot, while loading the OS and starting services, it reserves more and more memory: whenever the VM wants more, QEMU allocates another block of N bytes on the host for that particular guest. Let's say that after the first boot your VM is using 400 MB, and that is roughly what QEMU has allocated for it on the host. If you live migrate such a VM back and forth, each migration should take about the same time.

However, restarting the VM can temporarily drive memory demand higher, so QEMU allocates even more. The key point is that, by default, memory handed to the VM is never relinquished back to the host. So after the reboot your VM might again be using 400 MB from the guest's perspective, but on the host it might have, e.g., 800 MB allocated because of the higher demand during the reboot. Live migration has to copy the memory the host holds for the guest, so QEMU now needs to send 800 MB to the other host, not the 400 MB it sent before the reboot. This is, in my opinion, why live migration takes longer after a reboot. You can try to confirm this theory by watching the network traffic and the amount of data transferred during the migration (e.g. with iftop).
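If you want to check the host-side view directly, here is a minimal sketch (Linux only) that reads a process's resident set size from /proc. For a QEMU-backed instance, the qemu process's VmRSS is roughly the amount of memory a live migration has to copy, whatever the guest reports internally. The instance name in the comment is a hypothetical example; find the real PID of your qemu process first.

```python
import os

def rss_kb(pid: int) -> int:
    """Return VmRSS (resident set size) in kB for a PID, from /proc/<pid>/status."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])  # second field is the value in kB
    raise ValueError(f"no VmRSS entry for pid {pid}")

if __name__ == "__main__":
    # Replace os.getpid() with the PID of your qemu process,
    # e.g. the output of `pgrep -f instance-00000001` (name is hypothetical).
    print(rss_kb(os.getpid()))
```

Comparing this value before and after a guest reboot should show the growth described above, even while `free -m` inside the guest reports the same usage.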

If you really need the memory back, you can also try configuring ballooning so that the guest automatically relinquishes unused memory to the host.
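As a sketch of what that can look like with libvirt (which OpenStack uses under QEMU/KVM), the domain XML's memballoon device can be configured for this. The freePageReporting attribute shown here is an assumption about your stack: it requires a reasonably recent libvirt (6.9.0+) and QEMU (5.1+) plus guest kernel support, and older setups would instead deflate the balloon manually, e.g. with `virsh setmem <domain> <size> --live`.

```xml
<!-- Fragment of libvirt domain XML (edit with `virsh edit <domain>`).
     With freePageReporting='on', the guest reports freed pages back to
     the host so they can be reclaimed; <stats period='10'/> makes the
     balloon driver publish memory statistics every 10 seconds. -->
<memballoon model='virtio' freePageReporting='on'>
  <stats period='10'/>
</memballoon>
```

Note that ballooning trades migration time for some guest-side overhead, so only bother with it if the extra migration traffic is actually a problem for you.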