
spcla1's profile - activity

2015-07-06 08:54:46 -0500 received badge  Famous Question (source)
2015-03-30 11:04:22 -0500 received badge  Notable Question (source)
2015-01-22 07:57:42 -0500 received badge  Popular Question (source)
2015-01-19 16:33:23 -0500 commented answer Sahara floating IP not released after cluster deletion

Thanks for your quick response, but as far as I remember, on the previous Sahara version, when floating IPs were automatically assigned to the instances during cluster creation, they were automatically released when the cluster was successfully deleted (I might be wrong, but that's how I remember it).

2015-01-19 14:41:34 -0500 asked a question Sahara floating IP not released after cluster deletion

I am running CentOS 7 and the Sahara version that ships with it (OpenStack Juno). I noticed that every time I delete a cluster, the floating IPs assigned to the cluster's instances don't get released; I need to delete them manually.

Is there a setting to automatically release them as soon as the cluster is deleted?
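In the meantime I'm releasing them by hand (a sketch of the cleanup, assuming the neutron CLI is available; an orphaned floating IP shows up with an empty fixed_ip_address column):

# List floating IPs; the orphaned ones have an empty fixed_ip_address column.
neutron floatingip-list

# Release each orphaned one by its ID (placeholder below).
neutron floatingip-delete <floatingip-id>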

Thanks for your help!

2014-09-16 22:54:13 -0500 received badge  Nice Answer (source)
2014-06-18 20:50:15 -0500 received badge  Commentator
2014-06-18 20:50:15 -0500 commented answer HPHelion installation failed (Cannot update service status on host: overcloud)

@John: You mentioned that there might be a problem with the virtualization version and AMD hardware; I think that was my problem. Everything went well when I installed it on an Intel system, so hopefully the issue with AMD will be resolved in the next release.

2014-05-22 15:36:41 -0500 commented answer HPHelion installation failed (Cannot update service status on host: overcloud)

I posted a portion of the compute log file above. It's a little long, so I didn't post the whole thing.

2014-05-22 11:49:43 -0500 commented answer HPHelion installation failed (Cannot update service status on host: overcloud)

Hi John, I tried to attach the tar'ed log file, but ask.openstack.org only allows certain file extensions to be attached (.jpg, .jpeg, .gif, .bmp, .png and .tiff). I will rename the file to one of the supported extensions, but I don't know if that will work. Please let me know if there's a better way to do it.

Thanks!

2014-05-20 10:55:24 -0500 received badge  Famous Question (source)
2014-05-20 02:07:05 -0500 received badge  Favorite Question (source)
2014-05-19 14:24:09 -0500 received badge  Notable Question (source)
2014-05-19 00:01:00 -0500 received badge  Popular Question (source)
2014-05-17 23:11:14 -0500 asked a question HPHelion installation failed (Cannot update service status on host: overcloud)

I tried to install the Helion community edition on Ubuntu 14.04; I have tried twice already and got exactly the same error both times. The undercloud seems to install successfully, but the overcloud installation seems to time out. Here's where the installation stops:

Set gateway for router default-router
++ os-adduser -p 8c8b01aa7b8f72ae0a2fd58540e2e47e59a329fe demo demo@example.com
Created user demo with password '8c8b01aa7b8f72ae0a2fd58540e2e47e59a329fe'
++ nova flavor-delete m1.tiny
++ nova flavor-create m1.tiny 1 512 2 1
+----+---------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name    | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+---------+-----------+------+-----------+------+-------+-------------+-----------+
| 1  | m1.tiny | 512       | 2    | 0         |      | 1     | 1.0         | True      |
+----+---------+-----------+------+-----------+------+-------+-------------+-----------+
++ set +e
++ :
++ glance image-create --name user --public --disk-format qcow2 --container-format bare --file /root/tripleo/user.qcow2
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | ba56450f1d7e13e7bbf492e77a1bfdf1     |
| container_format | bare                                 |
| created_at       | 2014-05-18T01:01:47                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | c9d6b9ca-dd44-4c89-a75a-42e6fa5f186f |
| is_public        | True                                 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | user                                 |
| owner            | e57991150b9540cdb4fcb91dc7d68ff4     |
| protected        | False                                |
| size             | 245010944                            |
| status           | active                               |
| updated_at       | 2014-05-18T01:03:02                  |
| virtual_size     | None                                 |
+------------------+--------------------------------------+
++ '[' 0 -eq 0 ']'
++ break
++ set -e
++ wait_for 30 10 nova service-list --binary nova-compute '2>/dev/null' '|' grep 'enabled.*\ up\ '
Timing out after 300 seconds:
COMMAND=nova service-list --binary nova-compute 2>/dev/null | grep enabled.*\ up\
OUTPUT=

Appreciate any help!
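For anyone debugging the same timeout, the failing health check can be re-run by hand after sourcing the undercloud credentials (a sketch; the credentials file name and location depend on the Helion install):

# Re-run the check the installer gave up on after 300 seconds.
nova service-list --binary nova-compute
# A healthy deployment should list the nova-compute service as enabled / up.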

Just additional info: when I checked the host, it showed that 4 VMs were running:

# virsh list
 Id    Name                           State
----------------------------------------------------
 2     seed                           running
 3     baremetal_0                    running
 4     baremetal_1                    running
 6     baremetal_3                    running

I am using the HP bl465cG5 with AMD Opteron Processor 2354

Here's a portion of the compute node's log file: /var/log/upstart/nova-compute.log

libvirtError: internal error: CPU feature `misalignsse' specified more than once
2014-05-18 01:01:39.301 5496 ERROR nova.openstack.common.threadgroup [-] internal error: CPU feature `misalignsse' specified more than once
2014-05-18 01:01:39.301 5496 TRACE nova.openstack.common.threadgroup Traceback (most recent call last):
2014-05-18 01:01:39.301 5496 TRACE nova.openstack.common.threadgroup   File "/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/openstack/common/threadgroup.py", line 125, in wait
2014-05-18 01:01:39.301 5496 TRACE nova.openstack.common.threadgroup     x.wait()
2014-05-18 01:01:39.301 5496 TRACE nova.openstack.common.threadgroup   File "/opt/stack/venvs/nova/local/lib/python2.7/site-packages/nova/openstack/common/threadgroup.py", line 47, in wait
2014-05-18 01:01:39.301 5496 TRACE nova.openstack.common.threadgroup     return self.thread.wait()
2014-05-18 01:01:39.301 5496 TRACE nova.openstack.common.threadgroup   File "/opt/stack/venvs/nova/local/lib/python2.7/site-packages/eventlet/greenthread.py", line 168, in wait
2014-05-18 01:01:39.301 5496 TRACE nova.openstack.common.threadgroup     return self._exit_event.wait()
2014-05-18 01:01:39.301 5496 TRACE nova.openstack.common.threadgroup   File "/opt/stack/venvs/nova/local/lib/python2.7/site-packages/eventlet/event.py", line 116, in wait
2014-05-18 01:01:39.301 5496 TRACE nova.openstack.common.threadgroup     return hubs.get_hub().switch()
2014-05-18 01:01:39.301 5496 TRACE nova.openstack.common.threadgroup   File "/opt/stack/venvs/nova/local/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 187, in switch
2014-05-18 ...
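The "misalignsse specified more than once" error looks like a libvirt CPU-map problem on AMD hosts. One workaround I've seen suggested (untested here, and assuming an Icehouse-style nova.conf layout; on older layouts the option is libvirt_cpu_mode under [DEFAULT]) is to stop nova from requesting a host-model CPU definition from libvirt:

# /etc/nova/nova.conf on the compute node -- assumption: Icehouse-style [libvirt] section
[libvirt]
cpu_mode = none

Then restart nova-compute and re-check nova service-list.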
2014-04-29 07:15:33 -0500 received badge  Famous Question (source)
2014-04-06 10:42:15 -0500 received badge  Notable Question (source)
2014-03-31 01:07:58 -0500 received badge  Popular Question (source)
2014-03-26 20:24:49 -0500 commented answer Savanna/Sahara: How to assign a floating_ip_pool on node group?

Please disregard my comment above. The problem was related to the image: I had to modify some network-related parameters so that when the image boots up, it can connect to the network.
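For reference, these are the kinds of image tweaks I mean (a sketch for a RHEL/CentOS-style guest; the exact files vary per distribution):

# Inside the guest image, before snapshotting:
# let udev regenerate NIC names/MACs on first boot
rm -f /etc/udev/rules.d/70-persistent-net.rules
# drop the hard-coded MAC so eth0 comes up on any host
sed -i '/^HWADDR/d' /etc/sysconfig/network-scripts/ifcfg-eth0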

2014-03-26 13:03:07 -0500 commented answer Savanna/Sahara: How to assign a floating_ip_pool on node group?

Thank you so much for your quick response. I can now see the floating IP pool option, and all nodes are now getting the correct floating IP, but I am still getting an error message. Any idea?

2014-03-26 11:01:24.690 23191 DEBUG savanna.service.instances [-] Can't login to node test1-worker1-003 (xx.xx.114.38), reason AuthenticationException: Authentication failed. _check_if_accessible /usr/lib/python2.6/site-packages/savanna/service/instances.py:375
2014-03-26 11:01:25.325 23191 DEBUG savanna.service.instances [-] Can't login to node test1-master1-001 (xx.xx.114.39), reason AuthenticationException: Authentication failed. _check_if_accessible /usr/lib/python2.6/site-packages/savanna/service/instances.py:375
2014-03-26 11:01:25.986 23191 DEBUG savanna.service.instances [-] Can't login to node test1-worker1-002 (xx.xx.114.40), reason AuthenticationException: Authentication failed. _check_if_accessible /usr/lib/python2.6/site-packages/savanna/service/instances.py:375
2014-03-26 11:01:26.626 23191 DEBUG savanna.service ...
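A quick way to reproduce what the engine sees (a sketch; the keypair path and login user below are placeholders, and the user depends on the image):

ssh -i /path/to/cluster-keypair.pem <image-user>@xx.xx.114.38

If this also fails to authenticate, the image's built-in user probably doesn't match what Savanna expects.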
2014-03-25 19:24:48 -0500 asked a question Savanna/Sahara: How to assign a floating_ip_pool on node group?

I am running Savanna 0.3 and thought I had everything set up, until I started a cluster and got this error message:

Error: Node Group master is missing 'floating_ip_pool' field

The /etc/savanna/savanna.conf file has the following details, but I am a little confused about how to set the floating_ip_pool for node groups.

If set to True, Savanna will use floating IPs to communicate with instances. To make sure that all instances have floating IPs assigned in Nova Network set "auto_assign_floating_ip=True" in nova.conf. If Neutron is used for networking, make sure that all Node Groups have "floating_ip_pool" parameter defined. (boolean value)

I am currently running OpenStack Havana with Neutron. Based on the info above, I have to set the floating_ip_pool for all node groups. I tried setting floating_ip_pool to the name of the existing external network, but that didn't work, so I tried setting it to the external network's ID; that didn't work either.
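For comparison, this is roughly where I understand the field is supposed to live when a node group template is created through the REST API (a sketch; the field values are placeholders, and with Neutron the pool should be the external network's UUID, set per node group template rather than in savanna.conf):

{
    "name": "master",
    "flavor_id": "2",
    "plugin_name": "vanilla",
    "hadoop_version": "1.2.1",
    "node_processes": ["namenode", "jobtracker"],
    "floating_ip_pool": "<external-network-uuid>"
}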

If I disable the use of floating IPs (use_floating_ips=false), then I can start a cluster, but it just stays in the "waiting" state, although all instances are up and running.

2014-03-25 15:52:02.914 21384 DEBUG savanna.service.instances [-] Can't login to node iptest-worker2-001 (172.10.1.4), reason error: [Errno 110] Connection timed out _check_if_accessible /
2014-03-25 15:53:06.408 21384 DEBUG savanna.service.instances [-] Can't login to node iptest-worker2-002 (172.10.1.5), reason error: [Errno 110] Connection timed out _check_if_accessible 
2014-03-25 15:53:06.948 21384 DEBUG savanna.service.instances [-] Can't login to node iptest-worker2-003 (172.10.1.6), reason error: [Errno 111] Connection refused _check_if_accessible 
2014-03-25 15:54:11.409 21384 DEBUG savanna.service.instances [-] Can't login to node iptest-master-001 (172.10.1.2), reason error: [Errno 110] Connection timed out _check_if_accessible

Here's my savanna config (/etc/savanna/savanna.conf)

use_floating_ips=True
floating_ip_pool=ext-net
use_neutron=True
use_namespaces=True

I also have SAVANNA_USE_NEUTRON set to True in /etc/openstack-dashboard/local_settings

Thanks in advance for any help!

2014-03-25 18:56:12 -0500 commented answer Why are nova and neutron services going down from time to time?

Thanks for your response. I have more than 10 nodes. I actually haven't seen the problem in the last 3 weeks.

2014-03-14 07:05:41 -0500 received badge  Taxonomist
2014-03-12 15:35:07 -0500 received badge  Famous Question (source)
2014-03-08 02:24:51 -0500 received badge  Notable Question (source)
2014-03-07 12:29:29 -0500 received badge  Enthusiast
2014-03-07 09:56:23 -0500 received badge  Popular Question (source)
2014-03-06 20:03:16 -0500 asked a question Why are nova and neutron services going down from time to time?
OS: Red Hat 6.5
OpenStack: Havana

I have seen this problem where the nova-compute service goes down: although the node still has the nova-compute service running, the nova service-list command displays its state as down. Sometimes only a few nodes are down, but most of the time they all go down at the same time.

When this happens, the conductor log shows the following error message, and after restarting openstack-nova-conductor everything goes back to normal. It seems to be an issue with qpid.

conductor.log

2014-03-05 17:18:51.896 42263 ERROR root [-] Unexpected exception occurred 1 time(s)... retrying.
2014-03-05 17:18:51.896 42263 TRACE root Traceback (most recent call last):
2014-03-05 17:18:51.896 42263 TRACE root   File "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 78, in inner_func
2014-03-05 17:18:51.896 42263 TRACE root     return infunc(*args, **kwargs)
2014-03-05 17:18:51.896 42263 TRACE root   File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/impl_qpid.py", line 709, in _consumer_thread
2014-03-05 17:18:51.896 42263 TRACE root     self.consume()
2014-03-05 17:18:51.896 42263 TRACE root   File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/impl_qpid.py", line 700, in consume
2014-03-05 17:18:51.896 42263 TRACE root     it.next()
2014-03-05 17:18:51.896 42263 TRACE root   File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/impl_qpid.py", line 617, in iterconsume
2014-03-05 17:18:51.896 42263 TRACE root     yield self.ensure(_error_callback, _consume)
2014-03-05 17:18:51.896 42263 TRACE root   File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/impl_qpid.py", line 551, in ensure
2014-03-05 17:18:51.896 42263 TRACE root     return method(*args, **kwargs)
2014-03-05 17:18:51.896 42263 TRACE root   File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/impl_qpid.py", line 608, in _consume
2014-03-05 17:18:51.896 42263 TRACE root     nxt_receiver = self.session.next_receiver(timeout=timeout)
2014-03-05 17:18:51.896 42263 TRACE root   File "<string>", line 6, in next_receiver
2014-03-05 17:18:51.896 42263 TRACE root   File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 660, in next_receiver
2014-03-05 17:18:51.896 42263 TRACE root     if self._ecwait(lambda: self.incoming, timeout):
2014-03-05 17:18:51.896 42263 TRACE root   File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 50, in _ecwait
2014-03-05 17:18:51.896 42263 TRACE root     result = self._ewait(lambda: self.closed or predicate(), timeout)
2014-03-05 17:18:51.896 42263 TRACE root   File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 566, in _ewait
2014-03-05 17:18:51.896 42263 TRACE root     result = self.connection._ewait(lambda: self.error or predicate(), timeout)
2014-03-05 17:18:51.896 42263 TRACE root   File "/usr/lib/python2.6/site-packages/qpid/messaging/endpoints.py", line 209, in _ewait
2014-03-05 17:18:51.896 42263 TRACE root     self.check_error()
2014-03-05 17:18:51.896 42263 TRACE root   File "/usr/lib/python2.6/site-packages ...
2014-03-06 19:14:54 -0500 received badge  Famous Question (source)
2014-01-23 18:43:47 -0500 marked best answer Which provider:network_type to use when creating external network with GRE network?

Configuration:
Ubuntu 12.04 (1 controller node, 1 network node, 1 compute node) / GRE Tunneling / Per-tenant Routers with Private Networks

I got a little confused by the OpenStack manual when creating an external network using GRE with "Per-tenant Routers with Private Networks". I created the network with "provider:network_type gre", but in the manual's sample configuration it was created with "provider:network_type local". I thought "provider:network_type local" was only used on a single machine; does local also need to be used with "Per-tenant Routers with Private Networks"?

I also noticed that the external subnet was created with enable_dhcp=true. Shouldn't that be false for an external subnet?
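For reference, here is the manual-style version with placeholder addresses (a sketch; as I understand it, network_type local can work for the external network because external traffic leaves through the L3 agent's external bridge rather than a GRE tunnel):

quantum net-create ext-net --router:external=True --provider:network_type local
quantum subnet-create ext-net 203.0.113.0/24 --disable-dhcp \
    --allocation-pool start=203.0.113.10,end=203.0.113.50 --gateway 203.0.113.1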

Please help. Thanks!

2014-01-23 18:42:00 -0500 marked best answer Metadata server cannot be reached: which port is used by quantum service?

I have set up OpenStack Grizzly on multiple machines: 1 controller node, 1 network node (running the DHCP agent, L3 agent and Open vSwitch agent) and multiple compute nodes. Everything seems to be working OK. I can start a VM and assign a floating IP. I can ping the VM using both the private and floating IP, but I can't ssh because the ssh key is not loaded (the metadata server can't be reached). I got the following error from the console:

2013-05-22 23:04:01,665 - util.py[WARNING]: 'http://169.254.169.254/2009-04-04/meta-data/instance-id' failed [9/120s]: http error [500]

I found this link https://bugs.launchpad.net/quantum/+bug/1160955 and thought it was related to my problem, but even after applying the patch I was still having the same issue. One thing I noticed was that I have 2 quantum-ns-metadata-proxy processes running, and they are using 2 different metadata ports; here's the output of "ps -ef | grep quantum-ns" (I removed some info to make the output shorter). One of them is using port 80 and one is using 9697.

python /usr/bin/quantum-ns-metadata-proxy --pid_file=/var/lib/quantum/external/pids/3bb5723a-94d4-41d7-8daf-2139320c5ee9.pid --network_id=3bb5723a-94d4-41d7-8daf-2139320c5ee9 --state_path=/var/lib/quantum --metadata_port=80 

python /usr/bin/quantum-ns-metadata-proxy --pid_file=/var/lib/quantum/external/pids/5e6a567d-c9ae-432c-89ab-bd67eab5692f.pid --router_id=5e6a567d-c9ae-432c-89ab-bd67eab5692f --state_path=/var/lib/quantum --metadata_port=9697

The iptables command:

ip netns exec qrouter-5e6a567d-c9ae-432c-89ab-bd67eab5692f iptables -L -t nat

shows the following, which redirects to port 9697:

REDIRECT   tcp  --  anywhere             169.254.169.254      tcp dpt:http redir ports 9697

I have port 9697 in my l3_agent.ini which I believe is the default port.

Here's what I have in metadata-agent.log; the file /var/log/quantum/quantum-ns-metadata-proxy{{ router-ns-uuid }}.log is empty.

ERROR [quantum.agent.metadata.agent] Unexpected error. Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/quantum/agent/metadata/agent.py", line 88, in __call__
    return self._proxy_request(instance_id, req)
File "/usr/lib/python2.7/dist-packages/quantum/agent/metadata/agent.py", line 137, in _proxy_request
    resp, content = h.request(url, headers=headers)
File "/usr/lib/python2.7/dist-packages/httplib2/__init__.py", line 1444, in request
    (response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
File "/usr/lib/python2.7/dist-packages/httplib2/__init__.py", line 1196, in _request
    (response, content) = self._conn_request(conn, request_uri, method, body, headers)
File "/usr/lib/python2.7/dist-packages/httplib2/__init__.py", line 1132, in _conn_request
    conn.connect()
File "/usr/lib/python2.7/dist-packages/httplib2/__init__.py", line 798, in connect
    raise socket.error, msg
error: [Errno 111] ECONNREFUSED

ip netns exec {{ dhcp-ns-uuid }} netstat -rn shows the following:

Destination Gateway Genmask Flags MSS Window irtt Iface    
169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 ns-468d9253-00    
172.20.1.0 0.0.0.0 255.255.255.0 U 0 0 0 ns-468d9253-00

I have the metadata-agent.ini configured with all the information you mentioned above, and I also have this entry in the nova.conf file:

enabled_apis=ec2,osapi_compute ...