
pawel-koniszewski's profile - activity

2019-03-22 07:18:04 -0600 received badge  Popular Question (source)
2019-03-22 07:18:04 -0600 received badge  Notable Question (source)
2016-08-10 11:59:58 -0600 received badge  Teacher (source)
2016-08-07 06:53:00 -0600 answered a question Weird VM migration behavior

So this isn't weird at all; I'd even say it is expected behavior. I assume that your OpenStack installation is based on QEMU and that you have the default configuration.

You start a fresh VM. At the very beginning it uses 0 MB of memory. During boot, while loading the OS and configuring everything, it reserves more and more memory. Basically, when the VM wants more memory, QEMU calls malloc to allocate a block of N bytes for that particular VM. Let's say that after the first boot your VM is using 400 MB, and that is roughly what QEMU allocated using malloc. If you live migrate such a VM back and forth, each migration should take about the same time. However, if you restart the VM, it might request more memory during the reboot, so QEMU will call malloc again. The point here is that (by default) once memory has been requested by the VM, it is never relinquished to the host. This means that after the reboot your VM might again be using 400 MB of memory (from the VM's perspective), but it could in fact have, e.g., 800 MB allocated, due to the higher demand for memory during the reboot. QEMU then needs to send 800 MB of memory when live migrating the VM to another host, not the 400 MB it sent before the reboot. This is, in my opinion, why live migration after a reboot takes longer. You can also observe the network traffic and the amount of data transferred (e.g., with iftop) to confirm this theory.
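The mechanism above can be sketched as a toy model (illustrative only, with hypothetical numbers; this is not QEMU code): the host-side allocation behaves like a high-water mark of guest memory usage, so it only ever grows.

```python
# Toy model of QEMU's memory high-water mark (illustrative, not real QEMU code).
# Freed guest memory is not returned to the host, so the host-side allocation
# at any point is the peak guest usage seen so far.

def host_allocated(guest_usage_over_time):
    """Return the host-side allocation after each step (MB)."""
    peak = 0
    history = []
    for usage in guest_usage_over_time:
        peak = max(peak, usage)   # allocation only grows, never shrinks
        history.append(peak)
    return history

# First boot: guest settles at 400 MB -> live migration copies ~400 MB.
first_boot = host_allocated([0, 100, 300, 400])
# Reboot spikes to 800 MB, then the guest drops back to 400 MB, but the
# host-side allocation stays at 800 MB -> migration now copies ~800 MB.
after_reboot = host_allocated([400, 800, 400, 400])
print(first_boot[-1], after_reboot[-1])  # 400 800
```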

You can also try configuring memory ballooning to automatically relinquish memory to the host, if you really need that.
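If you go down that route, the guest needs the virtio balloon device in its libvirt domain XML. A minimal fragment might look like the sketch below (element names per the libvirt domain XML format; note that the device alone only enables ballooning, and actually deflating the guest still requires a balloon driver in the guest plus a management-side policy):

```xml
<!-- libvirt domain XML fragment: virtio memory balloon device (sketch) -->
<devices>
  <memballoon model='virtio'>
    <!-- ask the balloon driver to report guest memory stats every 10s -->
    <stats period='10'/>
  </memballoon>
</devices>
```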

2013-09-05 08:08:07 -0600 asked a question Large number of VMs (~1000) and libvirt exception

Hi,

I have a problem with an uncaught exception in nova. I was not able to schedule one thousand VMs: the closer I got to scheduling all the needed VMs, the more VMs threw exceptions. This is what 'nova show' reports:

{u'message': u'AttributeError', u'code': 500, u'details': u"'NoneType' object has no attribute 'lookupByName'
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 224, in decorated_function
    return function(self, context, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1224, in run_instance
    do_run_instance()
  File "/usr/lib/python2.7/dist-packages/nova/openstack/common/lockutils.py", line 242, in inner
    retval = f(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1223, in do_run_instance
    admin_password, is_first_time, node, instance)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 879, in _run_instance
    self._set_instance_error_state(context, instance['uuid'])
  File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
    self.gen.next()
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 790, in _run_instance
    self._check_instance_exists(context, instance)
  File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 987, in _check_instance_exists
    if self.driver.instance_exists(instance['name']):
  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 647, in instance_exists
    self._lookup_by_name(instance_name)
  File "/usr/lib/python2.7/dist-packages/nova/virt/libvirt/driver.py", line 2354, in _lookup_by_name
    return self._conn.lookupByName(instance_name)", u'created': u'2013-09-03T11:57:38Z'}

Any suggestions on that?
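For context: the trace shows `self._conn.lookupByName(...)` failing because `self._conn` is None, i.e. nova-compute's libvirt connection was lost (plausible under the load of a thousand spawns, e.g. if libvirtd hits its client limit). A defensive pattern is to re-validate the connection before use. The sketch below is a hypothetical helper, not nova's actual code, demonstrated with a stub in place of a real libvirt connection:

```python
# Sketch of a connection guard (hypothetical helper, not nova's real API).

class ConnectionLostError(Exception):
    """Raised when no live connection could be obtained."""

def lookup_domain(get_connection, name, retries=3):
    """Look up a domain by name, retrying if the connection was dropped.

    get_connection is a callable that returns a live connection object
    (exposing lookupByName) or None when the connection is down.
    """
    for _ in range(retries):
        conn = get_connection()
        if conn is not None:
            return conn.lookupByName(name)
    raise ConnectionLostError("no libvirt connection after %d tries" % retries)

# Demo with a stub standing in for a real libvirt connection:
class FakeConn:
    def lookupByName(self, name):
        return "domain:" + name

attempts = iter([None, None, FakeConn()])           # drops twice, then recovers
print(lookup_domain(lambda: next(attempts), "instance-0001"))  # domain:instance-0001
```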

2013-03-26 09:52:02 -0600 asked a question Heat checks version each time setup.py is executed

Why does Heat need to check its version (via .git files or PKG-INFO) every time setup.py is executed? I can't push the heat submodule into my repository, and the PKG-INFO file is not provided in the source files. I also don't want to install heat from upstream every time. Is there a good way to avoid this?
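One possible workaround, assuming your Heat tree uses pbr for versioning (older trees used the oslo-incubator openstack.common.setup instead, where this may not apply): pbr can be told to skip the .git/PKG-INFO lookup entirely by pinning the version through an environment variable, as documented by pbr. The version string below is just a placeholder.

```shell
# Pin the package version so pbr does not consult .git or PKG-INFO
# (assumes a pbr-based setup.py; PBR_VERSION is pbr's documented override).
export PBR_VERSION=2013.1.0   # placeholder version string

# Then build as usual; pbr uses the pinned version instead of git metadata:
# python setup.py sdist
```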