glenbot's profile - activity

2018-05-30 02:04:50 -0500 received badge  Nice Question (source)
2016-07-29 07:57:49 -0500 received badge  Famous Question (source)
2016-07-29 07:57:49 -0500 received badge  Notable Question (source)
2016-07-29 07:57:49 -0500 received badge  Popular Question (source)
2015-08-14 08:23:32 -0500 marked best answer Unable to create instances on Icehouse due to RabbitMQ timeout

I am trying to create an instance using an Ubuntu 14.04 cloud image, but I am getting the following errors in nova-compute.log on the compute node.

2014-09-12 13:44:28.778 23779 DEBUG nova.compute.manager [-] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python2.7/dist-packages/nova/compute/manager.py:4823
2014-09-12 13:44:28.778 23779 DEBUG nova.compute.manager [-] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python2.7/dist-packages/nova/compute/manager.py:4827
2014-09-12 13:44:28.803 23779 DEBUG nova.compute.manager [-] [instance: 7f30b00f-27d4-4189-b682-feece60f4f14] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python2.7/dist-packages/nova/compute/manager.py:4841
2014-09-12 13:44:28.804 23779 DEBUG nova.compute.manager [-] [instance: a2708a17-f8ea-4cab-9a88-7173e8b2ce74] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python2.7/dist-packages/nova/compute/manager.py:4837
2014-09-12 13:44:28.805 23779 DEBUG nova.objects.instance [-] Lazy-loading `system_metadata' on Instance uuid ac1ea94c-f1af-4767-abb8-e72015454bfe obj_load_attr /usr/lib/python2.7/dist-packages/nova/objects/instance.py:519
2014-09-12 13:45:28.847 23779 ERROR nova.compute.manager [-] [instance: ac1ea94c-f1af-4767-abb8-e72015454bfe] An error occurred while refreshing the network cache.
2014-09-12 13:45:28.847 23779 TRACE nova.compute.manager [instance: ac1ea94c-f1af-4767-abb8-e72015454bfe] Traceback (most recent call last):
2014-09-12 13:45:28.847 23779 TRACE nova.compute.manager [instance: ac1ea94c-f1af-4767-abb8-e72015454bfe]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 4882, in _heal_instance_info_cache
2014-09-12 13:45:28.847 23779 TRACE nova.compute.manager [instance: ac1ea94c-f1af-4767-abb8-e72015454bfe]     self._get_instance_nw_info(context, instance, use_slave=True)
2014-09-12 13:45:28.847 23779 TRACE nova.compute.manager [instance: ac1ea94c-f1af-4767-abb8-e72015454bfe]   File "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1136, in _get_instance_nw_info
2014-09-12 13:45:28.847 23779 TRACE nova.compute.manager [instance: ac1ea94c-f1af-4767-abb8-e72015454bfe]     instance)
2014-09-12 13:45:28.847 23779 TRACE nova.compute.manager [instance: ac1ea94c-f1af-4767-abb8-e72015454bfe]   File "/usr/lib/python2.7/dist-packages/nova/network/api.py", line 95, in wrapped
2014-09-12 13:45:28.847 23779 TRACE nova.compute.manager [instance: ac1ea94c-f1af-4767-abb8-e72015454bfe]     return func(self, context, *args, **kwargs)
2014-09-12 13:45:28.847 23779 TRACE nova.compute.manager [instance: ac1ea94c-f1af-4767-abb8-e72015454bfe]   File "/usr/lib/python2.7/dist-packages/nova/network/api.py", line 390, in get_instance_nw_info
2014-09-12 13:45:28.847 23779 TRACE nova.compute.manager [instance: ac1ea94c-f1af-4767-abb8-e72015454bfe]     result = self._get_instance_nw_info(context, instance)
2014-09-12 13:45:28.847 23779 TRACE nova.compute.manager [instance: ac1ea94c-f1af-4767-abb8-e72015454bfe]   File "/usr/lib/python2.7/dist-packages/nova/network/api.py", line 406, in _get_instance_nw_info
2014-09-12 13:45:28.847 23779 TRACE nova.compute.manager [instance: ac1ea94c-f1af-4767-abb8-e72015454bfe]     nw_info = self.network_rpcapi.get_instance_nw_info(context, **args)
2014-09-12 13:45:28.847 23779 TRACE nova.compute.manager [instance: ac1ea94c-f1af-4767-abb8-e72015454bfe]   File "/usr/lib/python2.7/dist-packages/nova/network/rpcapi.py", line 222, in get_instance_nw_info
2014-09-12 13:45:28.847 23779 TRACE nova.compute.manager [instance: ac1ea94c-f1af-4767-abb8-e72015454bfe]     host=host, project_id=project_id)
2014-09-12 13:45:28.847 23779 TRACE nova.compute.manager [instance: ac1ea94c-f1af-4767-abb8-e72015454bfe]   File "/usr/lib/python2.7/dist-packages/oslo/messaging/rpc/client.py", line 150, in call
2014-09-12 13:45:28 ...
2015-08-04 08:20:35 -0500 commented question Cinder volumes all of a sudden go read-only.

Also, I think the kernel logs from the failure day pretty much sum it up. Can you take a look at this log and confirm? https://pastee.org/knmdh. It looks like the volume is corrupt? I rebooted the controller node, which is where the volumes live, and running fsck shows no errors.

2015-08-04 08:13:31 -0500 commented question Cinder volumes all of a sudden go read-only.

I'm pretty sure I'm using the LVM driver. My Cinder config points to a volume group, cinder-volumes, and all the volumes are attached to it. Here is the output of lsblk: https://pastee.org/4zpw2

2015-08-02 12:28:19 -0500 commented question Cinder volumes all of a sudden go read-only.

Just wanted to bump this. I have checked the integrity of the drives serving up the volumes with smartctl, and the short tests passed. The drives don't seem to be degrading (they are very new). I have put Cinder in DEBUG log mode, although I haven't seen anything unusual.

2015-08-02 12:28:19 -0500 received badge  Commentator
2015-07-27 01:50:56 -0500 asked a question Cinder volumes all of a sudden go read-only.

We had a power outage at work that outlasted the battery backup, so the OpenStack cluster shut down hard. Everything came back up fine, but now, after a week of the cluster running, the Cinder volumes go into read-only mode on their own (this has happened three times already). When I list the volumes in Cinder, they are not shown as read-only. The only way I have been able to resolve the issue is by logging into every VM and remounting the volumes. I checked the Cinder logs and they have no errors or strange output. Has anyone seen this before, or have any idea where I can start debugging this?
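For reference, the per-VM recovery I've been doing looks roughly like this (the mount point /mnt/data is a made-up placeholder; run inside each affected guest):

```shell
# Hypothetical mount point /mnt/data; run inside each affected VM.
# Check whether the kernel has remounted the filesystem read-only:
grep ' ro,' /proc/mounts
# Look for the I/O error that usually triggers such a remount:
dmesg | grep -iE 'remount|i/o error'
# Once the cause is understood, remount read-write:
sudo mount -o remount,rw /mnt/data
```

If dmesg shows I/O errors against the attached device, the problem is probably below the filesystem (the iSCSI/LVM layer) rather than in Cinder itself.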

2014-12-02 07:27:59 -0500 received badge  Nice Question (source)
2014-11-05 06:04:58 -0500 received badge  Famous Question (source)
2014-10-01 22:54:09 -0500 received badge  Notable Question (source)
2014-10-01 22:54:09 -0500 received badge  Popular Question (source)
2014-06-04 13:49:26 -0500 answered a question Partial 401 Unauthorized

We dug and dug, and the more we thought about it, the stranger it seemed that this error appeared exactly one year in. It looked like a cert issue, and it was. See this blog post that gives the steps on how to fix the issue. Out of the box, Keystone's certs expire after one year.
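For anyone landing here later, a rough sketch of the check and fix on an Icehouse-era install, assuming the default PKI token setup and paths (verify against your keystone.conf):

```shell
# Check when the current signing cert expires (default PKI path):
openssl x509 -in /etc/keystone/ssl/certs/signing_cert.pem -noout -enddate
# Regenerate the PKI certs:
sudo keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
# Restart Keystone so the new certs are picked up:
sudo service keystone restart
```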

2014-06-04 12:54:32 -0500 commented question Partial 401 Unauthorized

I am having this same issue. Out of the blue, Nova and all the other services return 401. I have tried re-creating users, creating new users, and debugging tokens (they all seem fine).

2014-04-25 13:19:46 -0500 commented question Resizing of an instance is not happeng and facing the Error:Cannot 'confirmResize' while instance is in vm_state active

I'm having the same issue. I tried following everything here http://openstack.redhat.com/forum/dis... with no luck.

2014-01-23 18:42:23 -0500 marked best answer SSL Traffic from network node to VM

I'm having an issue with SSL traffic from the network node to a VM, and I am not sure where it is getting hung up. I'm trying to download a file from GitHub with wget, but it locks up while acknowledging a packet from GitHub. Bitbucket.org uses the same certificate type from DigiCert, and it works.

On the VM I'm running:

ubuntu@testnetwork:~$ openssl s_client -showcerts -connect github.com:443
CONNECTED(00000003)

Here is the tcpdump from the VM:

16:36:32.071382 IP 172.16.16.12.56210 > github.com.https: Flags [S], seq 3347835015, win 14600,   options [mss 1460,sackOK,TS val 254765 ecr 0,nop,wscale 6], length 0
16:36:32.155526 IP github.com.https > 172.16.16.12.56210: Flags [S.], seq 3139130773, ack 3347835016, win 5792, options [mss 1460,sackOK,TS val 751610489 ecr 254765,nop,wscale 10], length 0
16:36:32.155590 IP 172.16.16.12.56210 > github.com.https: Flags [.], ack 1, win 229, options [nop,nop,TS val 254786 ecr 751610489], length 0
16:36:32.156474 IP 172.16.16.12.56210 > github.com.https: Flags [P.], seq 1:227, ack 1, win 229, options [nop,nop,TS val 254786 ecr 751610489], length 226
16:36:32.204544 IP github.com.https > 172.16.16.12.56210: Flags [.], ack 227, win 7, options [nop,nop,TS val 751610505 ecr 254786], length 0
16:36:32.206941 IP github.com.https > 172.16.16.12.56210: Flags [P.], seq 2897:3691, ack 227, win 7, options [nop,nop,TS val 751610506 ecr 254786], length 794
16:36:32.207115 IP 172.16.16.12.56210 > github.com.https: Flags [.], ack 1, win 229, options [nop,nop,TS val 254799 ecr 751610505,nop,nop,sack 1 {2897:3691}], length 0
16:36:55.057510 IP 172.16.16.12.56208 > github.com.https: Flags [F.], seq 227, ack 1, win 229, options [nop,nop,TS val 260512 ecr 169352456,nop,nop,sack 1 {2897:3692}], length 0
16:37:32.225034 IP github.com.https > 172.16.16.12.56210: Flags [F.], seq 3691, ack 227, win 7, options [nop,nop,TS val 751625505 ecr 254799], length 0
16:37:32.225134 IP 172.16.16.12.56210 > github.com.https: Flags [.], ack 1, win 229, options [nop,nop,TS val 269803 ecr 751610505,nop,nop,sack 1 {2897:3692}], length 0

The tcpdump on the promiscuous-mode Ethernet port on the network node shows:

11:36:32.042296 IP 172.16.1.104.56210 > 204.232.175.90.https: Flags [S], seq 3347835015, win 14600, options [mss 1460,sackOK,TS val 254765 ecr 0,nop,wscale 6], length 0
11:36:32.089359 IP 204.232.175.90.https > 172.16.1.104.56210: Flags [S.], seq 3139130773, ack 3347835016, win 5792, options [mss 1460,sackOK,TS val 751610489 ecr 254765,nop,wscale 10], length 0
11:36:32.107097 IP 172.16.1.104.56210 > 204.232 ...
2013-11-29 06:38:05 -0500 received badge  Famous Question (source)
2013-09-24 12:15:51 -0500 received badge  Nice Question (source)
2013-09-23 09:00:40 -0500 received badge  Famous Question (source)
2013-09-22 10:27:39 -0500 received badge  Notable Question (source)
2013-09-21 11:22:54 -0500 received badge  Popular Question (source)
2013-09-19 08:23:31 -0500 received badge  Student (source)
2013-09-18 10:34:52 -0500 asked a question What items to backup?

I want to back up OpenStack. Specifically, I want to back up all the VMs I have created. We are starting to become dependent on it, and I would hate to lose progress. How do you back up the VMs at a filesystem level and restore those backups? Is there an easy cinder/nova command that will copy the VMs to an external drive for you?
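For illustration, something along these lines is what I am hoping exists (the instance, volume, and file names here are made up; this assumes Glance-backed snapshots are an acceptable backup form):

```shell
# Snapshot a running instance into a Glance image (names are examples):
nova image-create --poll my-instance my-instance-backup
# Download the snapshot image to an external drive:
glance image-download --file /mnt/external/my-instance-backup.img my-instance-backup
# Snapshot an attached Cinder volume as well:
cinder snapshot-create --display-name my-vol-backup <volume-id>
```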

2013-09-18 10:30:31 -0500 commented answer VM internal network broken after compute restart.

Thanks, everything is working now.

2013-09-17 11:14:17 -0500 received badge  Notable Question (source)
2013-09-17 11:06:09 -0500 commented question VM internal network broken after compute restart.

@nishtha are you doing this through the interface or the CLI? What commands do you run?

2013-09-17 10:59:47 -0500 commented answer VM internal network broken after compute restart.

I don't think this was running at boot time. I added it to rc.local but it doesn't seem to have any effect.

2013-09-16 11:53:30 -0500 received badge  Famous Question (source)
2013-09-06 12:50:25 -0500 received badge  Popular Question (source)
2013-09-06 12:27:19 -0500 commented answer VM internal network broken after compute restart.

I'm not sure if it is, but I will double-check this.

2013-09-04 11:28:50 -0500 answered a question VM internal network broken after compute restart.

Restarting all Quantum services on the network box fixed the issue. Strange, though, that a reboot didn't.
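For completeness, the restart sequence was roughly this (Grizzly-era Ubuntu service names; adjust to your packaging):

```shell
# Restart the Quantum agents on the network node:
sudo service quantum-plugin-openvswitch-agent restart
sudo service quantum-dhcp-agent restart
sudo service quantum-l3-agent restart
sudo service quantum-metadata-agent restart
```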

2013-09-04 11:14:02 -0500 received badge  Editor (source)
2013-09-04 11:13:38 -0500 asked a question VM internal network broken after compute restart.

To preface -- I'm on Grizzly.

We had our compute node overheat and shut down due to a cooling issue. The issue was resolved and everything came back online. The VMs were in a "Shutoff" state, so I just ran "nova start uuid-of-machine" on all our machines and everything came online just fine. However, I now cannot ping pre-existing VMs on the internal network from another VM.

If I add a new VM, it works: I can ping it. I just can't access pre-existing VMs, though I can still reach them via their floating IPs. It's like the existing VMs dropped off the internal subnet even though they still have IPs associated with them. This is the first time this has happened.

What am I missing? Does something need to be flushed on the network node? If you need more information, let me know.

2013-08-29 09:54:36 -0500 received badge  Notable Question (source)
2013-06-13 07:50:56 -0500 received badge  Popular Question (source)
2013-06-10 13:43:38 -0500 received badge  Scholar (source)
2013-06-10 13:26:40 -0500 commented answer SSL Traffic from network node to VM

I'm using Quantum with OVS and GRE -- the OVS section of ovs_quantum_plugin.ini has tenant_network_type = gre

2013-06-10 13:13:35 -0500 received badge  Supporter (source)
2013-06-10 13:10:28 -0500 commented answer SSL Traffic from network node to VM

BTW, I am using this config on the network node: https://github.com/mseknibilel/OpenStack-Grizzly-Install-Guide/blob/OVS_MultiNode/OpenStack_Grizzly_Install_Guide.rst#35-quantum and the network and compute nodes are connected via a crossover cable.

2013-06-10 13:05:52 -0500 commented answer SSL Traffic from network node to VM

You were right. It was an MTU issue. Once I ran your command, I was able to wget the file from GitHub. Is there a more permanent way to resolve this?
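For future readers: the commonly suggested permanent fix for GRE setups is to have the DHCP agent advertise a smaller MTU to instances via DHCP option 26. A sketch, assuming Grizzly/Quantum default paths:

```shell
# In /etc/quantum/dhcp_agent.ini, point the DHCP agent at a dnsmasq
# config file:
#   dnsmasq_config_file = /etc/quantum/dnsmasq-quantum.conf
# Have dnsmasq push MTU 1400 to instances (DHCP option 26), leaving
# headroom for the GRE encapsulation overhead:
echo "dhcp-option-force=26,1400" | sudo tee /etc/quantum/dnsmasq-quantum.conf
sudo service quantum-dhcp-agent restart
```

Instances pick up the lower MTU on their next DHCP lease renewal or reboot.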