cyberang3l's profile - activity

2016-11-08 23:59:06 -0600 received badge  Famous Question (source)
2016-07-14 09:41:59 -0600 received badge  Taxonomist
2015-08-04 05:44:02 -0600 received badge  Popular Question (source)
2015-06-14 11:04:18 -0600 received badge  Notable Question (source)
2015-06-14 11:04:18 -0600 received badge  Popular Question (source)
2014-11-20 12:49:12 -0600 received badge  Famous Question (source)
2014-04-11 20:23:44 -0600 received badge  Notable Question (source)
2014-03-30 23:46:56 -0600 received badge  Popular Question (source)
2014-03-30 23:23:21 -0600 received badge  Famous Question (source)
2014-03-24 05:27:49 -0600 commented answer Any good guide on how to create AMI Ubuntu/Debian images?

Thanks for the reply!

The logilab.org link looks good (I will try it), but do you have any clue whether it is possible to get the same results using debootstrap? It should be possible, but the actual question is how. Debootstrap is convenient when you want to automate the image-building process. I tried this: http://blog.justinsb.com/blog/2012/04... but my VM would not boot when I extracted the kernel and initrd.
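To make the question concrete, the debootstrap flow I have in mind is roughly this (a rough sketch, untested on my side; the suite, mirror, image size, and kernel package are guesses):

    # Create and format an empty root filesystem image
    dd if=/dev/zero of=rootfs.img bs=1M count=2048
    mkfs.ext4 -F rootfs.img
    sudo mount -o loop rootfs.img /mnt/rootfs
    # Bootstrap a minimal Ubuntu into it
    sudo debootstrap --arch amd64 precise /mnt/rootfs http://archive.ubuntu.com/ubuntu
    # Install a kernel inside the chroot, then extract kernel/initrd for glance
    sudo chroot /mnt/rootfs apt-get install -y linux-image-virtual
    cp /mnt/rootfs/boot/vmlinuz-* ./kernel
    cp /mnt/rootfs/boot/initrd.img-* ./initrd
    sudo umount /mnt/rootfs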

My end goal is to create an image and compile a custom driver that needs to be loaded when the system boots.

Can you also post the instructions you have whenever you find it?

2014-03-21 15:25:08 -0600 asked a question Any good guide on how to create AMI Ubuntu/Debian images?

I want to create an AMI Ubuntu image for OpenStack (i.e. one whose kernel and initrd are loaded externally).

I have found lots of material explaining how to create a qcow image (here is one: http://docs.openstack.org/image-guide... ), but I am not aware of any good source explaining how to create an AMI-style image.

Is there any good tutorial accomplishing this task, or can anyone explain the process?
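In case it helps answers: my understanding is that an AMI-style image is registered in glance as three pieces (kernel, ramdisk, machine image), roughly like this (the file names and UUID placeholders are mine):

    # Register the kernel (aki) and ramdisk (ari) first
    glance image-create --name ubuntu-kernel --disk-format aki --container-format aki < kernel
    glance image-create --name ubuntu-ramdisk --disk-format ari --container-format ari < initrd
    # Then the machine image (ami), pointing at the two UUIDs returned above
    glance image-create --name ubuntu-ami --disk-format ami --container-format ami \
        --property kernel_id=<kernel-uuid> --property ramdisk_id=<ramdisk-uuid> < rootfs.img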

2013-12-28 10:27:00 -0600 received badge  Notable Question (source)
2013-12-10 12:34:19 -0600 commented answer How to force nova to boot an updated glance image?

"Another way would be, instead of updating the image, just create a new one pointing to that --location http://xxx.img and start instances based on it." <- This is what I am actually doint at the moment, so I was just looking if there is any way to avoid creating a new image. Since there is not, thanks for the reply and suggestions!!

2013-12-10 11:12:04 -0600 commented answer How to force nova to boot an updated glance image?

image. The problem comes when some virtual machines already use the base image, so I cannot remove it in that case.

2013-12-10 11:11:01 -0600 commented answer How to force nova to boot an updated glance image?

I add the images to a web server and then register them in glance using --location http://xxx.img. So when I want to change the image, I simply change it on the web server. If I remove the cached base image, nova will re-download the image from the "location" and it works. No need to create a new

2013-12-10 11:00:48 -0600 received badge  Popular Question (source)
2013-11-28 15:28:00 -0600 answered a question How to force nova to boot an updated glance image?

Thanks Stuart McLaren, that solved my question.

2013-11-28 15:27:57 -0600 answered a question How to force nova to boot an updated glance image?

Well, it's good to know then :)

Thanks for the reply!

2013-11-28 15:07:52 -0600 asked a question How to force nova to boot an updated glance image?

I have an image, on which I make some modifications (adding some files and packages) and eventually I run the following command to update it:

glance image-update --size <the new size as taken from stat output> <UUID>

This will update the size and the updated_at properties of the image, but the next time I try to boot a VM based on the updated image, it will still boot from the base image cached in the common storage (the previous one which is not updated).

How can I force nova to re-download the latest image from the glance server?
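EDIT: What ended up working for me (also described in my comments above) was removing the cached base image on the compute node, so that nova fetches the image from glance again. A rough sketch, assuming the default cache location (the file name under _base is a hash of the image, so list the directory first):

    # On the compute node, after making sure no running VM still uses the base file
    ls -l /var/lib/nova/instances/_base/
    rm /var/lib/nova/instances/_base/<hash-of-the-stale-image>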

2013-08-19 10:31:39 -0600 received badge  Nice Question (source)
2013-08-19 09:33:01 -0600 commented answer What's the best guide to follow to install Openstack using Xen and Libvirt?

I haven't found any either, and what I have realized so far after extensive research is that Xen+libvirt+OpenStack is pretty much unusable. Even XCP+OpenStack has many advertised but non-working features.

2013-07-16 13:36:16 -0600 answered a question Why live migration when using XCP and openstack has so much downtime?

Hi John,

Can you please elaborate a little more, or direct me to documentation on what you mentioned in your previous post about using local storage instead of shared, with all shared storage managed by Cinder?

2013-07-16 12:58:30 -0600 answered a question Why live migration when using XCP and openstack has so much downtime?

And what fixes the problem for me is this:

    --- nova-orig/virt/xenapi/vmops.py  2013-07-15 14:21:05.532868954 +0200
    +++ nova/virt/xenapi/vmops.py  2013-07-16 14:54:10.865301101 +0200
    @@ -1727,7 +1727,7 @@
                     host_ref = self._get_host_opaque_ref(context,
                                                          destination_hostname)
                     self._session.call_xenapi("VM.pool_migrate", vm_ref,
    -                                          host_ref, {})
    +                                          host_ref, { "live": "true" })
                 post_method(context, instance, destination_hostname,
                             block_migration)
             except Exception:
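For anyone applying this by hand, something along these lines should work (the install path and patch file name are from my setup; adjust to wherever nova lives on your node):

    cd /usr/lib/python2.7/dist-packages/nova
    patch -p1 < /tmp/vmops-live.diff
    service nova-compute restart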

2013-07-16 12:19:24 -0600 answered a question Why live migration when using XCP and openstack has so much downtime?

And I think I just found the problem:

If I initiate the migration from the console like this: "xe vm-migrate vm=instance-0000004a host=xcpcompute1" (notice there is no live=true), then I get the same behaviour as when I migrate using openstack.

So I guess that openstack is not asking for a "live" migration, just a plain migration.

2013-07-16 12:05:11 -0600 answered a question Why live migration when using XCP and openstack has so much downtime?

I used the statistical breakdowns of iperf on the hypervisors to check the number of packets during the migration, and what I realized is that whether I migrate with the xe command or through openstack, the number of 1426-1500+ byte packets increases for the same amount of time.

I guess this is when the contents of the memory are transferred to the other hypervisor, so large TCP packets are sent to finish as quickly as possible.

So the migration process takes the same amount of time no matter how it is initiated (reasonable conclusion).

The difference is that, for this amount of time, the vm is inaccessible when the migration is issued through openstack, while there is no such problem when issuing the migration with the xe command.

2013-07-15 18:51:02 -0600 answered a question Why live migration when using XCP and openstack has so much downtime?

Thanks for the answer, John.

When talking about using local storage, I guess you mean block migration, which needs support for the XenMotion feature, right? If so: as I said, I use xcp from the debian repository, and as far as I have read it has not been updated to XCP 1.6 yet, so I cannot use XenMotion. If not, can you please post some links to documentation on using local storage, with all shared storage managed by Cinder? How can this be configured?

A few more words about my setup:

My first attempt was to set up XCP with Quantum and OVS, but I concluded that this is not supported at the moment and will first be supported in the Havana release. I asked a question for clarification here but didn't get an answer: https://answers.launchpad.net/neutron...

Then I moved on and used Nova Network instead, with FlatDHCP and bridges (no OVS), as described in the official documentation here: http://docs.openstack.org/grizzly/ope...

This works as advertised :)
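The relevant nova.conf bits on my compute VM look roughly like this (the bridge and interface names are from my setup, so treat them as placeholders):

    network_manager=nova.network.manager.FlatDHCPManager
    flat_network_bridge=xenbr0
    flat_interface=eth1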

Next step is the live migration which I just achieved today.

As for helping with debugging/testing, I can do that, since I have the setup and need to work with this for a project. If you have any requests for more specific details, please let me know.

2013-07-15 16:03:44 -0600 asked a question Why live migration when using XCP and openstack has so much downtime?

In my setup I use two XCP servers on top of Debian Wheezy (xcp-xapi package), and the openstack nova-compute VM runs on Ubuntu 12.04 with OpenStack Grizzly.

I configured live migration based on the documentation, and I had to apply some patches to solve these issues:

https://bugs.launchpad.net/nova/+bug/...
https://bugs.launchpad.net/nova/+bug/...
https://bugs.launchpad.net/nova/+bug/...

Eventually the migration works, but I experience very long downtime. I made a simple script running in a while loop inside the VM under migration that simply prints the time and pings another reachable IP.
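The loop was roughly this (reconstructed from memory; the target IP is the other instance in my setup):

    while true; do
        date
        ping -c 1 192.168.30.4 | grep 'bytes from'
        sleep 1
    done

This is the result: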

    Mon Jul 15 09:45:50 MDT 2013
    64 bytes from 192.168.30.4: seq=0 ttl=64 time=0.523 ms
    Mon Jul 15 09:45:51 MDT 2013
    64 bytes from 192.168.30.4: seq=0 ttl=64 time=0.504 ms
    Mon Jul 15 09:45:52 MDT 2013
    64 bytes from 192.168.30.4: seq=0 ttl=64 time=0.520 ms
    Mon Jul 15 09:48:58 MDT 2013
    64 bytes from 192.168.30.4: seq=0 ttl=64 time=0.569 ms
    Mon Jul 15 09:48:59 MDT 2013
    64 bytes from 192.168.30.4: seq=0 ttl=64 time=0.510 ms
    Mon Jul 15 09:49:00 MDT 2013
    64 bytes from 192.168.30.4: seq=0 ttl=64 time=0.484 ms

As you can see, the last ping before the migration starts is at 09:45:52, and the next one comes more than 3 minutes later, at 09:48:58.

    root@controller:~# nova list --fields name,host,instance_name,networks,status
    +--------------------------------------+------+----------------------+-------------------+----------------------+--------+
    | ID                                   | Name | Host                 | Instance Name     | Networks             | Status |
    +--------------------------------------+------+----------------------+-------------------+----------------------+--------+
    | d0958165-767e-425e-a9cd-ff7f501be76d | KVM1 | kvmcompute1          | instance-00000037 | novanet=192.168.30.4 | ACTIVE |
    | b69eeb2d-7737-40fb-a5b8-a71a582d8f73 | XCP1 | openstackxcpcompute2 | instance-00000044 | novanet=192.168.30.2 | ACTIVE |
    +--------------------------------------+------+----------------------+-------------------+----------------------+--------+
    root@controller:~# nova live-migration b69eeb2d-7737-40fb-a5b8-a71a582d8f73 openstackxcpcompute1
    root@controller:~# nova list --fields name,host,instance_name,networks,status
    +--------------------------------------+------+----------------------+-------------------+----------------------+-----------+
    | ID                                   | Name | Host                 | Instance Name     | Networks             | Status    |
    +--------------------------------------+------+----------------------+-------------------+----------------------+-----------+
    | d0958165-767e-425e-a9cd-ff7f501be76d | KVM1 | kvmcompute1          | instance-00000037 | novanet=192.168.30.4 | ACTIVE    |
    | b69eeb2d-7737-40fb-a5b8-a71a582d8f73 | XCP1 | openstackxcpcompute2 | instance-00000044 | novanet=192.168.30.2 | MIGRATING |
    +--------------------------------------+------+----------------------+-------------------+----------------------+-----------+
    root@controller:~# nova list --fields name,host,instance_name,networks,status
    +--------------------------------------+------+----------------------+-------------------+----------------------+--------+
    | ID                                   | Name | Host                 | Instance Name     | Networks             | Status |
    +--------------------------------------+------+----------------------+-------------------+----------------------+--------+
    | d0958165-767e-425e-a9cd-ff7f501be76d | KVM1 | kvmcompute1          | instance-00000037 | novanet=192.168.30.4 | ACTIVE |
    | b69eeb2d-7737-40fb-a5b8-a71a582d8f73 | XCP1 | openstackxcpcompute1 | instance-00000044 | novanet=192.168.30.2 | ACTIVE |
    +--------------------------------------+------+----------------------+-------------------+----------------------+--------+

If I migrate exactly the same VM using the command "xe vm-migrate vm=instance-00000044 host=xcpcompute2 live=true" directly from the hypervisor's console, the downtime is only about 3 seconds, as you can see here:

    Mon Jul 15 09:40:26 MDT 2013
    64 bytes from 192.168.30.4: seq=0 ttl=64 time=0.492 ms
    Mon Jul 15 09:40:27 MDT 2013
    64 bytes from 192.168.30.4: seq=0 ttl=64 time=0.610 ms
    Mon Jul 15 09:40:28 MDT 2013
    64 bytes from 192.168.30.4: seq=0 ttl=64 time=0.753 ms
    Mon Jul 15 ... (more)

2013-07-12 07:04:38 -0600 received badge  Teacher (source)
2013-07-10 05:18:06 -0600 answered a question how to put a local file into instance in openstack?

The metadata service might be able to help you.

I haven't tried it myself, but I guess you will then need to run a startup script (PowerShell?) on the Windows machine to fetch the "user-data".

http://docs.openstack.org/trunk/openstack-compute/admin/content/metadata-service.html
http://docs.openstack.org/trunk/openstack-compute/admin/content/instance-data.html

Also take a look at the answers here if you try this approach: http://serverfault.com/questions/17685/how-to-get-an-ec2-windows-machines-instance-id-into-a-batch-variable
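To illustrate the flow, something along these lines should work (the file and instance names are hypothetical); on Windows, the same metadata URL can be fetched from PowerShell or a batch script instead of curl:

    # When booting, pass the local file as user-data
    nova boot --user-data ./myfile.txt --image <image-id> --flavor m1.small mywin
    # Then, from inside the instance, fetch it back from the metadata service
    curl http://169.254.169.254/latest/user-data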

2013-07-03 22:26:45 -0600 answered a question What is the use of the Dead VLAN tag 4095 on quantum openvswitch plugin?

Thanks Salvatore, it makes sense now.

2013-07-03 22:25:09 -0600 answered a question What is the use of the Dead VLAN tag 4095 on quantum openvswitch plugin?

Thanks Salvatore Orlando, that solved my question.

2013-07-03 14:23:04 -0600 asked a question What is the use of the Dead VLAN tag 4095 on quantum openvswitch plugin?

When I run "ovs-vsctl show", sometimes I see some ports with VLAN tag 4095

After doing some searching I couldn't find many references to it, except the source file "ovs_quantum_agent.py", which looks like this:

    # A placeholder for dead vlans.
    DEAD_VLAN_TAG = "4095"

    ...

    def port_dead(self, port):
        '''Once a port has no binding, put it on the "dead vlan".

        :param port: a ovs_lib.VifPort object.'''
        self.int_br.set_db_attribute("Port", port.port_name, "tag",
                                     DEAD_VLAN_TAG)
        self.int_br.add_flow(priority=2, in_port=port.ofport, actions="drop")

The question is: once a port has no binding, why put it on the "dead vlan" instead of destroying it?

2013-07-01 13:08:20 -0600 received badge  Student (source)
2013-06-28 09:45:30 -0600 answered a question Removing compute node from database

Same question here: any update, or does the row still need to be deleted manually from the nova database?

2013-06-27 08:10:42 -0600 received badge  Famous Question (source)
2013-06-19 13:25:11 -0600 received badge  Notable Question (source)
2013-06-14 03:53:36 -0600 received badge  Popular Question (source)
2013-06-12 15:14:18 -0600 answered a question Libvirt and Xen: No consolelog, no qcow2, no boot, no VNC...

I face the same problem, and I filed a bug with some more information on this here: https://bugs.launchpad.net/nova/+bug/...

2013-06-11 08:24:08 -0600 received badge  Supporter (source)
2013-06-11 07:36:43 -0600 asked a question What's the best guide to follow to install Openstack using Xen and Libvirt?

I am a fairly new user of OpenStack, running Ubuntu 12.04 with the cloud archive enabled to get Grizzly. I have managed to run a distributed-node setup with KVM and libvirt by following this guide, but now that I try to set up a compute node with Xen and libvirt I cannot make it work. I know that the supported and suggested way to use the Xen hypervisor is either XCP or Citrix XenServer, but the documentation says that it is possible to manage Xen using libvirt ("This would be necessary for any Xen-based system that isn't using the XCP toolstack, such as SUSE Linux or Oracle Linux"), and I need to try this solution instead. (I tried XCP as well and didn't manage to make it work properly on top of Ubuntu either, but even if I had, I would eventually try to use libvirt.)

When I try to boot a CirrOS instance on the Xen-based compute node (nova boot --image <id-of-cirros-image> --flavor m1.small --availability-zone nova:xennode1 TestXenHost), the spawn fails and the instance boots on one of the KVM nodes instead. The error I get from the logs is this: error from xen daemon: (xend.err 'Device /dev/xvdp (51952, tap) is already connected.')

I found some people talking about the blktap module, but I installed the blktap-utils and blktap-dkms packages and made sure the blktap module is loaded (checked with lsmod).
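Concretely, this is roughly what I did on the Xen host to rule the module out (package names as I installed them):

    apt-get install blktap-utils blktap-dkms
    lsmod | grep blktap   # confirm the module is loaded
    modprobe blktap       # load it manually if it is not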

Here you can find some pastebin links with the nova configuration files and logs: nova.conf, nova-compute.conf, nova-compute.log, libvirtd.log, xend.log

Can anyone tell what is wrong here, and whether there is any way to fix it?

Thank you!