
vincent's profile - activity

2016-09-20 12:19:45 -0600 received badge  Famous Question (source)
2016-09-20 12:19:45 -0600 received badge  Notable Question (source)
2016-08-03 16:19:28 -0600 received badge  Famous Question (source)
2016-05-25 03:40:30 -0600 received badge  Popular Question (source)
2016-05-03 08:11:50 -0600 received badge  Notable Question (source)
2016-05-03 08:11:50 -0600 received badge  Famous Question (source)
2016-04-26 00:02:24 -0600 asked a question nova libvirt driver instance stuck on "spawning"

This is a Liberty setup that I tried to keep as simple as possible. I'm booting the cirros image; the controller hands it off to the compute node, but I immediately see the following in nova-compute.log:

2016-04-26 12:48:55.128 17437 INFO nova.compute.manager [-] [instance: b1e5b06c-5ba7-410c-982c-ee9769b4a7c9] VM Started (Lifecycle Event)
2016-04-26 12:48:55.128 5906 INFO nova.compute.manager [req-372f2045-39c6-4d44-adc4-0657e1243544 - - - - -] [instance: b1e5b06c-5ba7-410c-982c-ee9769b4a7c9] VM Started (Lifecycle Event)
2016-04-26 12:48:55.190 17437 INFO nova.compute.manager [req-8775b7ea-71d2-4366-97ba-3b0ccb7947b2 - - - - -] [instance: b1e5b06c-5ba7-410c-982c-ee9769b4a7c9] VM Paused (Lifecycle Event)
2016-04-26 12:48:55.190 5906 INFO nova.compute.manager [req-372f2045-39c6-4d44-adc4-0657e1243544 - - - - -] [instance: b1e5b06c-5ba7-410c-982c-ee9769b4a7c9] VM Paused (Lifecycle Event)
2016-04-26 12:48:55.307 17437 INFO nova.compute.manager [req-8775b7ea-71d2-4366-97ba-3b0ccb7947b2 - - - - -] [instance: b1e5b06c-5ba7-410c-982c-ee9769b4a7c9] During sync_power_state the instance has a pending task (spawning). Skip.
2016-04-26 12:48:55.307 5906 INFO nova.compute.manager [req-372f2045-39c6-4d44-adc4-0657e1243544 - - - - -] [instance: b1e5b06c-5ba7-410c-982c-ee9769b4a7c9] During sync_power_state the instance has a pending task (spawning). Skip.

I searched forums for hints; most folks pointed at disk space issues. I have over 300MB free, but nonetheless I created a 10GB tmpfs just for /var/lib/nova/instances, so I don't anticipate space being the issue.
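For reference, this is roughly how I ruled out space (a sketch; the paths assume a default Liberty state_path, and the mount command is printed rather than run since it needs root and a tmpfs vanishes on reboot):

```shell
#!/bin/sh
# Rule out disk space on the compute node (default Liberty paths assumed).
INSTANCES_DIR=/var/lib/nova/instances

# Free space where Nova writes instance disks; fall back to / if the
# directory doesn't exist on this host.
df -h "$INSTANCES_DIR" 2>/dev/null || df -h /

# Throwaway tmpfs to take local storage out of the equation entirely.
# Printed, not executed: run it as root, test setups only.
echo "mount -t tmpfs -o size=10G tmpfs $INSTANCES_DIR"
```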

nova service-list

[root@controller misc_openstack]# nova service-list
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host       | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-conductor   | controller | internal | enabled | up    | 2016-04-26T04:59:41.000000 | -               |
| 2  | nova-consoleauth | controller | internal | enabled | up    | 2016-04-26T04:59:41.000000 | -               |
| 3  | nova-scheduler   | controller | internal | enabled | up    | 2016-04-26T04:59:41.000000 | -               |
| 4  | nova-cert        | controller | internal | enabled | up    | 2016-04-26T04:59:41.000000 | -               |
| 6  | nova-compute     | compute    | nova     | enabled | up    | 2016-04-26T04:59:48.000000 | -               |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+

neutron agent-list

[root@controller misc_openstack]# neutron agent-list
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host       | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
| 5f1e0466-e45e-4513-ad07-93820b979bd1 | Linux bridge agent | controller | :-)   | True           | neutron-linuxbridge-agent |
| 7cd7af80-eb4f-4d49-b95d-991cb6760d73 | Linux bridge agent | compute    | :-)   | True           | neutron-linuxbridge-agent |
| 9c4e4485-5d43-4c4f-984f-f8f92d8f9d35 | Metadata agent     | controller | :-)   | True           | neutron-metadata-agent    |
| b167a59f-c750-4c66-9059-a3b615166b10 | DHCP agent         | controller | :-)   | True           | neutron-dhcp-agent        |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
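Since both lists look healthy, the next thing I plan to check is what libvirt itself sees on the compute node (a sketch; the log path is the usual CentOS default and may differ on other distros):

```shell
#!/bin/sh
# Ask libvirt directly what state the domain is in. An instance that nova
# reports as "spawning" but that libvirt shows as paused or shut off points
# at qemu/libvirt rather than the controller. Run as root on the compute node.
if command -v virsh >/dev/null 2>&1; then
    virsh list --all
    tail -n 50 /var/log/libvirt/libvirtd.log 2>/dev/null
else
    echo "virsh not found on this host"
fi
```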
2016-03-22 10:02:03 -0600 received badge  Popular Question (source)
2016-03-18 06:50:28 -0600 received badge  Notable Question (source)
2016-03-05 15:17:54 -0600 commented answer app-catalog and app-catalog-ui installation instruction

Thanks Eduardo! I'll give this a try soon

2016-03-04 21:28:29 -0600 asked a question app-catalog and app-catalog-ui installation instruction

The app-catalog and app-catalog-ui repos have close to no installation instruction (the only useful ones are for devstack), are there instructions for adding app-catalog-ui components to an existing Liberty Dashboard?

2016-01-26 15:21:53 -0600 received badge  Famous Question (source)
2016-01-19 04:59:24 -0600 received badge  Popular Question (source)
2016-01-18 21:33:42 -0600 commented question Clarification on docs for self service connectivity

Update: oddly enough, I do get connectivity if I create a unicast VXLAN channel between the controller and compute node, which leads me to think the connectivity issue I encountered lies with the obscure l2population and VXLAN drivers. Does anyone have any insight into how the VXLAN net is set up?
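The manual channel I mean looked roughly like this (a sketch: the VNI, eth1, and the peer address are placeholders for your tunnel-network interface and the other node's IP; the commands are printed so they can be run as root on each node, pointing PEER at the opposite node):

```shell
#!/bin/sh
# Point-to-point unicast VXLAN channel between controller and compute node.
# PEER, VNI, and eth1 are assumptions -- adjust to your tunnel network.
PEER=${PEER:-192.168.100.11}
VNI=${VNI:-100}
cat <<EOF
ip link add vxlan$VNI type vxlan id $VNI dev eth1 remote $PEER dstport 4789
ip link set vxlan$VNI up
EOF
```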

2016-01-10 17:01:28 -0600 commented question OpenStack-Docker driver failed

Does the compute service come up when you use a different driver? Do you get anything in nova-compute.log if you set verbose=True in the [DEFAULT] section?
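For reference, both flags live in the [DEFAULT] section of /etc/nova/nova.conf (a stock Liberty layout is assumed; debug is optional but gives even more detail). Restart nova-compute after editing:

```ini
[DEFAULT]
verbose = True
debug = True
```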

2016-01-10 16:57:46 -0600 received badge  Notable Question (source)
2016-01-10 16:57:46 -0600 received badge  Popular Question (source)
2016-01-08 16:07:32 -0600 commented question How to force Neutron DHCP to release IPs?

So far the only workaround I've found to this is to delete the ports manually using

neutron port-delete <port_id>

I'm wondering if there are other, cleaner ways?
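A sketch of scripting that workaround rather than pasting IDs one by one (the -f value / -c id output options are an assumption about this client version; the delete is echoed so the list can be reviewed first, since removing a port an instance still uses will break it):

```shell
#!/bin/sh
# Bulk version of the port-delete workaround. Review the output, then
# drop the echo to actually delete -- filter carefully first.
if ! command -v neutron >/dev/null 2>&1; then
    echo "neutron client not found; run this on the controller" >&2
    exit 0
fi
neutron port-list -f value -c id |
while read -r port; do
    echo "neutron port-delete $port"
done
```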

2016-01-08 14:40:32 -0600 asked a question How to force Neutron DHCP to release IPs?

Any help on a way to release unused IPs so they can be reused would be greatly appreciated. In fact, releasing _all_ of them would be fine, as I currently only have a dev setup, not production.

For some reason the DHCP server isn't automatically releasing addresses when an instance ends (I'm using nova-docker, but that's another story for now), and my allocation pool saturates.

I've tried messing with the files under /var/lib/neutron/dhcp/<network-id>/{leases,hosts,addn_hosts}, but the entries get restored from the database when the service restarts anyway. I'd much prefer not to touch the database directly, but I'll take any hint you may have.

Thanks

2016-01-08 10:39:00 -0600 answered a question linux bridge vs ovs

This might not be the type of answer you were looking for, but performance might be one difference; check out this interesting study for details: http://www.opencloudblog.com/?p=96

2016-01-08 01:02:28 -0600 answered a question Configure Docker in OpenStack

Unfortunately I don't have a quick fix for you, but I can tell you that I was able to run nova-docker on the Liberty release, in case trying that one out is an option for you. It worked pretty much as documented (modulo networking issues that I'm still coping with, but I can launch an instance without problems). Did you set verbose=True in the [DEFAULT] section of /etc/nova/nova.conf?

One thing you could try, if you're adventurous, is to peek at the nova-docker Python code and do some debugging by writing some of the variables/data out to a file for inspection. The main method that kicks off the whole instance is "spawn", in /usr/lib/python2.7/site-packages/novadocker/virt/docker/driver.py (the path will obviously differ for other Python versions).
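A sketch of locating that entry point before adding debug writes (the import fallback covers a non-default install path; the path shown is an assumption from a stock install):

```shell
#!/bin/sh
# Find where the nova-docker driver actually lives on this host, then
# locate the spawn() method to instrument.
DRIVER=$(python -c 'import novadocker.virt.docker.driver as d; print(d.__file__)' 2>/dev/null)
DRIVER=${DRIVER:-/usr/lib/python2.7/site-packages/novadocker/virt/docker/driver.py}
if [ -f "$DRIVER" ]; then
    grep -n "def spawn" "$DRIVER"
else
    echo "driver not found at $DRIVER"
fi
```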

Good luck

2016-01-08 00:52:00 -0600 asked a question Clarification on docs for self service connectivity

I followed the install guide for Liberty on CentOS to the letter and can boot an instance on a self-service network, but I can _not_ communicate with that instance (other than directly through its netns on the compute node itself).

Looking at the self-service connectivity diagram at

http://docs.openstack.org/liberty/ins...

I traced the network connectivity, and all the interfaces I see in my setup match the ones in the docs with one EXCEPTION: what I do not see is the qg-to-public-bridge tap pair going from the qrouter namespace. In my case the qrouter namespace shows only the private vNIC interface connected to the private bridge (see http://paste.openstack.org/show/483298/), so my public and private networks are essentially isolated.
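This is the check behind that observation (a sketch; ROUTER-UUID is a placeholder for the ID from neutron router-list, and the commands are printed for running as root on the network node):

```shell
#!/bin/sh
# A healthy qrouter namespace has two legs: a qr-xxxx port on the tenant
# bridge AND a qg-xxxx port toward the external bridge. Mine shows only qr-.
ROUTER_ID=${ROUTER_ID:-ROUTER-UUID}
cat <<EOF
ip netns list
ip netns exec qrouter-$ROUTER_ID ip addr    # expect both qr-... and qg-...
ip netns exec qrouter-$ROUTER_ID ip route   # default route should leave via qg-...
EOF
```

If the qg- port is absent entirely, one thing worth checking is whether the router actually has an external gateway set (neutron router-gateway-set <router> <external-net>); without a gateway, the L3 agent never plugs the external leg into the namespace.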

Can you shed some light on what may be going on?

None of the neutron logs show relevant issues. I have also tried creating the veth pair manually and inserting it, but somehow that didn't seem to be sufficient to route traffic from the public to the private nets.

I'm also attaching the output of iptables -L -t nat on the controller node as well as inside the router namespace (http://paste.openstack.org/show/483299/). I'd be happy to provide more details, as this has been bugging me for some time now.

Thanks!

2016-01-08 00:16:45 -0600 received badge  Enthusiast
2016-01-01 08:31:44 -0600 commented answer Nova Docker Networking problem

Precisely a year after this post went out, I'm running into the exact same behavior, but with the Liberty release. The install guide no longer directs you to use openvswitch; instead it sets up a VXLAN across the private and public networks. Has anyone on this thread made any "breakthrough" findings?