
Jet's profile - activity

2017-04-07 02:39:46 -0500 received badge  Famous Question (source)
2016-07-01 23:12:05 -0500 received badge  Taxonomist
2016-05-18 09:49:35 -0500 received badge  Famous Question (source)
2016-05-18 09:49:35 -0500 received badge  Notable Question (source)
2016-01-18 16:45:33 -0500 received badge  Popular Question (source)
2016-01-15 10:25:00 -0500 received badge  Civic Duty (source)
2016-01-15 09:32:40 -0500 asked a question How do you find what is using a security group of a deleted tenant?

I have a security group hanging around from a tenant that was deleted, and I'm not able to delete it because it says it's in use, i.e.

# neutron security-group-show d6bd5785-1be0-44e4-801d-6aa72d305d22
+----------------------+--------------------------------------+
| Field                | Value                                |
+----------------------+--------------------------------------+
| description          | default                              |
| id                   | d6bd5785-1be0-44e4-801d-6aa72d305d22 |
| name                 | default                              |
| security_group_rules |                                      |
| tenant_id            | 6267b8d549e94bd1806a32f8186f224d     |
+----------------------+--------------------------------------+

# neutron security-group-delete d6bd5785-1be0-44e4-801d-6aa72d305d22
Conflict (HTTP 409) (Request-ID: req-eaa6b9d5-e77e-42d8-a71f-d760640daea9)

# grep req-eaa6b9d5-e77e-42d8-a71f-d760640daea9 /var/log/neutron/server.log
2016-01-15 15:15:24.234 52761 INFO neutron.api.v2.resource [req-eaa6b9d5-e77e-42d8-a71f-d760640daea9 None] delete failed (client error): Security Group d6bd5785-1be0-44e4-801d-6aa72d305d22 in use.
2016-01-15 15:15:24.238 52761 INFO neutron.wsgi [req-eaa6b9d5-e77e-42d8-a71f-d760640daea9 None] 192.168.200.2 - - [15/Jan/2016 15:15:24] "DELETE /v2.0/security-groups/d6bd5785-1be0-44e4-801d-6aa72d305d22.json HTTP/1.1" 409 351 0.419005

I've looped through the ports, networks, and instances looking for that tenant_id, trying to find what could be using the group, but I don't see anything with that tenant_id.

So how do I find what is using it, so that I can delete this security group?
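Is checking the neutron database directly the only remaining option? E.g. something like the lookup below (table names guessed from the Juno neutron schema, so they may differ in other releases):

```sql
-- Hypothetical lookup: which ports are still bound to the group?
SELECT b.port_id, p.device_id, p.device_owner
  FROM securitygroupportbindings b
  LEFT JOIN ports p ON p.id = b.port_id
 WHERE b.security_group_id = 'd6bd5785-1be0-44e4-801d-6aa72d305d22';
```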

2015-11-12 09:49:03 -0500 asked a question [Juno] Are there any audit logs/info to trace events back to the tenant/user that initiated them?

Are there any audit logs/info that can be used to show which user in which tenant/project initiated an event? We are looking for this for all the services: Nova, Neutron, Heat, etc.
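I know each service's oslo-formatted log lines carry a request context like [req-<id> <user> <tenant>] that could be parsed and correlated across services; is that really the only route? A rough sketch of what I mean (the bracket format here is an assumption and depends on each service's logging_context_format_string):

```python
import re

# Hypothetical context block "[req-<uuid> <user_id> <tenant_id>]"; adjust the
# pattern to whatever logging_context_format_string produces in your deployment.
CTX = re.compile(r"\[(req-[0-9a-f-]+)\s+(\S+)\s+(\S+)\]")

def parse_context(line):
    """Return (request_id, user_id, tenant_id) from an oslo log line, or None."""
    m = CTX.search(line)
    return m.groups() if m else None

sample = ("2016-01-15 15:15:24.234 52761 INFO neutron.api.v2.resource "
          "[req-eaa6b9d5-e77e-42d8-a71f-d760640daea9 6267b8d549e94bd1 admin] "
          "delete failed")
print(parse_context(sample))
```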

2015-11-12 05:00:53 -0500 received badge  Famous Question (source)
2015-08-30 09:20:16 -0500 received badge  Popular Question (source)
2015-08-30 09:20:16 -0500 received badge  Notable Question (source)
2015-08-23 08:41:11 -0500 received badge  Nice Question (source)
2015-08-05 19:41:48 -0500 received badge  Famous Question (source)
2015-07-21 12:59:46 -0500 received badge  Necromancer (source)
2015-07-15 14:12:37 -0500 received badge  Nice Question (source)
2015-07-08 09:24:00 -0500 answered a question What is the MYSQL default credential for Mirantis openstack?

You can find the password in

 /etc/mysql/conf.d/password.cnf

on any of the controllers
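It is a standard my.cnf fragment, so the mysql client can consume it directly with --defaults-extra-file, or you can read it programmatically. A sketch with made-up contents (the real file's section and key names may differ):

```python
import configparser

# Made-up stand-in for /etc/mysql/conf.d/password.cnf; the real file's
# section and key names may differ on your controllers.
sample = """
[client]
user = root
password = s3cret
"""

cfg = configparser.ConfigParser()
cfg.read_string(sample)
print(cfg["client"]["password"])  # → s3cret
```

With the real file you would instead run e.g. mysql --defaults-extra-file=/etc/mysql/conf.d/password.cnf.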

2015-07-08 08:33:08 -0500 answered a question Heat Stack Stuck in "Delete in Progress"

Had to use an answer for more space. I have this same issue with a stack stuck in DELETE_IN_PROGRESS; however, mine is a little different in that I'm not able to list the resources, and some of the events show CREATE_IN_PROGRESS. This stack has been like this for weeks, so I don't have the original log of the delete, and executing stack-delete again only shows

<134>Jul  8 09:30:15 node-2 heat-engine 2015-07-08 09:30:15.510 24009 INFO heat.engine.service [req-e31fe15d-0076-4ec8-a88a-965e7d57f584 None] Deleting stack OSHeatAsgMultiple
<134>Jul  8 09:30:15 node-2 heat-engine 2015-07-08 09:30:15.536 24009 INFO heat.engine.stack [-] Stack DELETE IN_PROGRESS (OSHeatAsgMultiple): Stack DELETE started

in the logs. Here is what I'm seeing

$ heat stack-list; heat stack-show OSHeatAsgMultiple; heat resource-list OSHeatAsgMultiple; heat event-list OSHeatAsgMultiple
+--------------------------------------+----------------------------+--------------------+----------------------+
| id                                   | stack_name                 | stack_status       | creation_time        |
+--------------------------------------+----------------------------+--------------------+----------------------+
| 13c54853-d2cd-41f9-815c-28fee2a3880e | OSHeatAsgMultiple          | DELETE_IN_PROGRESS | 2015-06-23T23:09:50Z |
+--------------------------------------+----------------------------+--------------------+----------------------+
+----------------------+-----------------------------------------------------------------------------------------------------------------------------------+
| Property             | Value                                                                                                                             |
+----------------------+-----------------------------------------------------------------------------------------------------------------------------------+
| capabilities         | []                                                                                                                                |
| creation_time        | 2015-06-23T23:09:50Z                                                                                                              |
| description          | OSTest--template--20150608                                                                                                        |
| disable_rollback     | True                                                                                                                              |
| id                   | 13c54853-d2cd-41f9-815c-28fee2a3880e                                                                                              |
| links                | http://ip.ad.d.ress:8004/v1/0ccffcafddc64202be2a83dfa04456f5/stacks/OSHeatAsgMultiple/13c54853-d2cd-41f9-815c-28fee2a3880e (self) |
| notification_topics  | []                                                                                                                                |
| parameters           | {                                                                                                                                 |
|                      |   "OS::stack_id": "13c54853-d2cd-41f9-815c-28fee2a3880e",                                                                         |
|                      |   "OS::stack_name": "OSHeatAsgMultiple"                                                                                           |
|                      | }                                                                                                                                 |
| parent               | None                                                                                                                              |
| stack_name           | OSHeatAsgMultiple                                                                                                                 |
| stack_owner          | USER                                                                                                                            |
| stack_status         | DELETE_IN_PROGRESS                                                                                                                |
| stack_status_reason  | Stack DELETE started                                                                                                              |
| template_description | OSTest--template--20150608                                                                                                        |
| timeout_mins         | None                                                                                                                              |
| updated_time         | None                                                                                                                              |
+----------------------+-----------------------------------------------------------------------------------------------------------------------------------+
ERROR: The specified reference "15973b917c96459b9c9de435de1e2c2e" (in cpu_alarm_high.Properties.alarm_actions[0]) is incorrect.
+--------------------------------------+--------------------------------------+------------------------+--------------------+----------------------+
| resource_name                        | id                                   | resource_status_reason | resource_status    | event_time           |
+--------------------------------------+--------------------------------------+------------------------+--------------------+----------------------+
| scale_up_policy                      | 6e3b7d51-c1d2-41f4-887e-be714961f9f1 | state changed          | CREATE_COMPLETE    | 2015-06-23T23:10:53Z |
| scale_down_policy                    | 7708a499-b237-40d6-8c19-ed7b8b4a0b77 | state changed          | CREATE_COMPLETE    | 2015-06-23T23:10:53Z |
| scale_up_policy                      | e283a6b9-852a-478b-a884-39fc55a47644 | state changed          | CREATE_IN_PROGRESS | 2015-06-23T23:10:52Z |
| scale_down_policy                    | 92e28452-a07a-4ab1-8b7a-81cdea7ab0fd | state changed          | CREATE_IN_PROGRESS | 2015-06-23T23:10:51Z |
| OS_ASG_Test_1                        | 4e30838e-8202-4565-a31e-b71ea5edb5cd | state changed          | CREATE_COMPLETE    | 2015-06-23T23:10:51Z |
| OS_ASG_Test_1                        | 7b483680-8c9a-4890-8fad-a0902c1c79ad | state changed          | CREATE_IN_PROGRESS | 2015-06-23T23:10:08Z |
| 2d45d8db-89b0-49e3-82f3-fe14fe10b322 | 132ab8c2-f5b7-47d5-8564-51660150d3bb | state changed          | CREATE_COMPLETE    | 2015-06-23T23:10:08Z |
| 2d45d8db-89b0-49e3-82f3-fe14fe10b322 | 21cd8b88-9f30-4346-a91e-4a970ad25dcb | state changed          | CREATE_IN_PROGRESS | 2015-06-23T23:09:50Z |
+--------------------------------------+--------------------------------------+------------------------+--------------------+----------------------+
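At this point I'm considering just resetting the stack's status in the heat database so the delete can be retried, along these lines (heat schema as I understand it in Juno; I'd back up the database first):

```sql
-- Hypothetical recovery: mark the stuck stack FAILED so stack-delete can retry.
UPDATE stack
   SET status = 'FAILED'
 WHERE name = 'OSHeatAsgMultiple'
   AND status = 'IN_PROGRESS';
```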
2015-07-07 15:37:24 -0500 commented answer [MOS Juno] Heat stack delete Fails with 'Forbidden: You are not authorized to perform the requested action: identity:delete_user (HTTP 403)'

The workaround in the bug report, being made an admin in the project, allowed me to delete the stacks. Thanks!

2015-07-06 02:17:45 -0500 received badge  Famous Question (source)
2015-07-05 18:28:19 -0500 received badge  Notable Question (source)
2015-06-18 12:51:53 -0500 commented question Should an instance based on a deleted glance image stop working?

Well, I was able to verify that this must just be an issue with this one image. I was not able to reproduce the issue with a new image, or the bug that you linked to. I created an instance from an image and then deleted that image. I was able to soft reboot, hard reboot, and shut down and start up

2015-06-17 14:40:23 -0500 commented question Heat/Yaml: get_param not working for lists in for_each construct

In the example in the documentation http://docs.openstack.org/developer/heat/template_guide/hot_spec.html?highlight=for_each#repeat there are %'s on both sides of the variable, i.e.

 %port%: { get_param: ports }
 template:
   port_range_min: %port%
   port_range_max: %port%

Could that be it?

2015-06-15 10:11:07 -0500 asked a question Should an instance based on a deleted glance image stop working?

I have an instance where the glance image it was created from has been deleted. Rebooting this instance caused it to not come back up. Is this the expected behavior? Will instances only work if the image they were created from is still in glance? If so, this is surprising to me; I thought instances would keep working even if the glance image they were based on was deleted. Please advise. Thanks.

Here is what I see at the end of the boot log:

...
end_request: I/O error, dev vda, sector 1955384
end_request: I/O error, dev vda, sector 1955384
end_request: I/O error, dev vda, sector 438216
...

and on the compute node I see:

<179>Jun 15 15:00:43 node-3 nova-compute Exception during message handling: Image 0d350344-2692-49c9-aeb1-6a2083879c32 could not be found.
2015-06-15 11:00:43.885 8474 TRACE oslo.messaging.rpc.dispatcher Traceback (most recent call last):
2015-06-15 11:00:43.885 8474 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 137, in _dispatch_and_reply
2015-06-15 11:00:43.885 8474 TRACE oslo.messaging.rpc.dispatcher     incoming.message))
2015-06-15 11:00:43.885 8474 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 180, in _dispatch
2015-06-15 11:00:43.885 8474 TRACE oslo.messaging.rpc.dispatcher     return self._do_dispatch(endpoint, method, ctxt, args)
2015-06-15 11:00:43.885 8474 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 126, in _do_dispatch
2015-06-15 11:00:43.885 8474 TRACE oslo.messaging.rpc.dispatcher     result = getattr(endpoint, method)(ctxt, **new_args)
2015-06-15 11:00:43.885 8474 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/nova/exception.py", line 88, in wrapped
2015-06-15 11:00:43.885 8474 TRACE oslo.messaging.rpc.dispatcher     payload)
2015-06-15 11:00:43.885 8474 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 82, in __exit__
2015-06-15 11:00:43.885 8474 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2015-06-15 11:00:43.885 8474 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/nova/exception.py", line 71, in wrapped
2015-06-15 11:00:43.885 8474 TRACE oslo.messaging.rpc.dispatcher     return f(self, context, *args, **kw)
2015-06-15 11:00:43.885 8474 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 298, in decorated_function
2015-06-15 11:00:43.885 8474 TRACE oslo.messaging.rpc.dispatcher     pass
2015-06-15 11:00:43.885 8474 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6/site-packages/nova/openstack/common/excutils.py", line 82, in __exit__
2015-06-15 11:00:43.885 8474 TRACE oslo.messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2015-06-15 11:00:43.885 8474 TRACE oslo.messaging.rpc.dispatcher   File "/usr/lib/python2.6 ...
2015-06-12 10:50:45 -0500 received badge  Popular Question (source)
2015-06-11 08:50:52 -0500 asked a question [MOS Juno] Heat stack delete Fails with 'Forbidden: You are not authorized to perform the requested action: identity:delete_user (HTTP 403)'

We are using MOS (Mirantis OpenStack) Juno and have followed http://hardysteven.blogspot.co.uk/201... to set up heat to use trusts, i.e.

deferred_auth_method=trusts
trusts_delegated_roles=heat_stack_owner

and followed http://hardysteven.blogspot.com/2014/... to set up a separate domain for the creation of heat users.

However, there are a few stacks that we are not able to delete. When we try to delete one, we see the below in the logs:

<134>Jun 11 09:35:58 node-2 heat-engine 2015-06-11 09:35:58.631 15121 INFO heat.engine.service [req-ecabf662-4b54-4caf-8bdd-f9bce7b68b7d None] Deleting stack testuser-v000
<134>Jun 11 09:35:58 node-2 heat-engine 2015-06-11 09:35:58.656 15121 INFO heat.engine.stack [-] Stack DELETE IN_PROGRESS (testuser-v000): Stack DELETE started
<134>Jun 11 09:35:58 node-2 heat-engine 2015-06-11 09:35:58.685 15121 INFO heat.engine.environment [-] Registering OS::Heat::ScaledResource -> AWS::EC2::Instance
<134>Jun 11 09:35:58 node-2 heat-engine 2015-06-11 09:35:58.809 15121 INFO heat.engine.stack [-] Stack DELETE IN_PROGRESS (testuser-v000*): Stack DELETE started
<134>Jun 11 09:35:59 node-2 heat-engine 2015-06-11 09:35:59.842 15121 INFO heat.engine.resource [-] deleting AWSScalingPolicy "WebServerScaleUpPolicy" [b1ea27e941ac4885b928681d68e6540a] Stack "testuser-v000*" [cc790a48-9374-4cdc-a79e-8f4fc52a9113]
<134>Jun 11 09:35:59 node-2 heat-engine 2015-06-11 09:35:59.878 15121 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): 192.168.200.2
<134>Jun 11 09:36:00 node-2 heat-engine 2015-06-11 09:36:00.042 15121 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): 192.168.200.2
<132>Jun 11 09:36:00 node-2 heat-engine 2015-06-11 09:36:00.070 15121 WARNING heat.engine.stack_user [-] Reverting to legacy user delete path
<134>Jun 11 09:36:00 node-2 heat-engine 2015-06-11 09:36:00.075 15121 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): 192.168.200.2
<134>Jun 11 09:36:00 node-2 heat-engine 2015-06-11 09:36:00.102 15121 INFO heat.engine.resource [-] DELETE: AWSScalingPolicy "WebServerScaleUpPolicy" [b1ea27e941ac4885b928681d68e6540a] Stack "testuser-v000*" [cc790a48-9374-4cdc-a79e-8f4fc52a9113]
2015-06-11 09:36:00.102 15121 TRACE heat.engine.resource Traceback (most recent call last):
2015-06-11 09:36:00.102 15121 TRACE heat.engine.resource   File "/usr/lib/python2.6/site-packages/heat/engine/resource.py", line 435, in _action_recorder
2015-06-11 09:36:00.102 15121 TRACE heat.engine.resource     yield
2015-06-11 09:36:00.102 15121 TRACE heat.engine.resource   File "/usr/lib/python2.6/site-packages/heat/engine/resource.py", line 839, in delete
2015-06-11 09:36:00.102 15121 TRACE heat.engine.resource     yield self.action_handler_task(action, *action_args)
2015-06-11 09:36:00.102 15121 TRACE heat.engine.resource   File "/usr/lib/python2.6/site-packages/heat/engine/scheduler.py", line 286, in wrapper
2015-06-11 09:36:00.102 15121 TRACE heat.engine.resource     step = next(subtask)
2015-06-11 09:36:00.102 15121 TRACE heat.engine.resource   File "/usr/lib/python2.6/site-packages/heat/engine/resource.py", line 476, in action_handler_task
2015-06-11 09:36:00.102 15121 TRACE heat.engine.resource     handler_data = handler(*args)
2015-06-11 09:36:00.102 15121 TRACE heat.engine.resource   File "/usr/lib/python2.6/site-packages ...
2015-06-10 01:14:38 -0500 received badge  Famous Question (source)
2015-05-30 10:57:26 -0500 received badge  Popular Question (source)
2015-05-30 10:57:26 -0500 received badge  Notable Question (source)
2015-05-28 15:01:04 -0500 asked a question [Murano] How to use Murano DynamicUI forms to generate Heat templates?

Is there a way to create a Murano application with a dynamic form that generates a heat template, so that the Murano application is a layer on top of heat and I don't have to hard-code values into my heat template?

For example, let's say that I want to create a load balancer with some servers behind it. It would be great if the application could define a DynamicUI form that would pull the information needed from OpenStack so the user could select it: drop-downs to select the subnet for the LB pool, the external network for the LB floating IP, the flavor, the image, and so on. Is that possible? Are there any examples or documentation for doing that?

2015-05-20 10:22:53 -0500 received badge  Notable Question (source)
2015-05-20 09:20:10 -0500 received badge  Popular Question (source)
2015-05-18 15:24:15 -0500 asked a question How to fix Ceilometer SSLError: [Errno 1] _ssl.c:492: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed

I have a test instance of Mirantis OpenStack set up where all the services listen only on the management network and HAProxy is set up with HTTPS for the public endpoints. So my endpoint list looks like:

+---------------------------------------------------+--------------------------------------------------+-----------------------------------------------+
|                    publicurl                      |                  internalurl                     |                  adminurl                     |
+---------------------------------------------------+--------------------------------------------------+-----------------------------------------------+
|    https://PBC.EXT.IP.ADR:8773/services/Cloud     |     http://MGM.INT.IP.ADR:8773/services/Cloud    |   http://MGM.INT.IP.ADR:8773/services/Admin   |
|            https://PBC.EXT.IP.ADR:8777            |            http://MGM.INT.IP.ADR:8777            |          http://MGM.INT.IP.ADR:8777           |
|           https://PBC.EXT.IP.ADR:9696/            |            http://MGM.INT.IP.ADR:9696/           |          http://MGM.INT.IP.ADR:9696/          |
|          https://PBC.EXT.IP.ADR:8000/v1/          |          http://MGM.INT.IP.ADR:8000/v1/          |        http://MGM.INT.IP.ADR:8000/v1/         |
|  https://PBC.EXT.IP.ADR:8386/v1.1/%(tenant_id)s   |   http://MGM.INT.IP.ADR:8386/v1.1/%(tenant_id)s  | http://MGM.INT.IP.ADR:8386/v1.1/%(tenant_id)s |
|   https://PBC.EXT.IP.ADR:8082/v1/%(tenant_id)s    |    http://MGM.INT.IP.ADR:8082/v1/%(tenant_id)s   |  http://MGM.INT.IP.ADR:8082/v1/%(tenant_id)s  |
|   https://PBC.EXT.IP.ADR:8774/v2/%(tenant_id)s    |    http://MGM.INT.IP.ADR:8774/v2/%(tenant_id)s   |  http://MGM.INT.IP.ADR:8774/v2/%(tenant_id)s  |
|            https://PBC.EXT.IP.ADR:9292            |            http://MGM.INT.IP.ADR:9292            |          http://MGM.INT.IP.ADR:9292           |
|   https://PBC.EXT.IP.ADR:8776/v2/%(tenant_id)s    |    http://MGM.INT.IP.ADR:8776/v2/%(tenant_id)s   |  http://MGM.INT.IP.ADR:8776/v2/%(tenant_id)s  |
|   https://PBC.EXT.IP.ADR:8776/v1/%(tenant_id)s    |    http://MGM.INT.IP.ADR:8776/v1/%(tenant_id)s   |  http://MGM.INT.IP.ADR:8776/v1/%(tenant_id)s  |
|         https://PBC.EXT.IP.ADR:5000/v2.0          |          http://MGM.INT.IP.ADR:5000/v2.0         |       http://MGM.INT.IP.ADR:35357/v2.0        |
|   https://PBC.EXT.IP.ADR:8004/v1/%(tenant_id)s    |    http://MGM.INT.IP.ADR:8004/v1/%(tenant_id)s   |  http://MGM.INT.IP.ADR:8004/v1/%(tenant_id)s  |
| https://PBC.EXT.IP.ADR:8080/v1/AUTH_%(tenant_id)s | http://MGM.INT.IP.ADR:8080/v1/AUTH_%(tenant_id)s |          http://MGM.INT.IP.ADR:8080/          |
|            https://PBC.EXT.IP.ADR:8080            |            http://MGM.INT.IP.ADR:8080            |          http://MGM.INT.IP.ADR:8080           |
+---------------------------------------------------+--------------------------------------------------+-----------------------------------------------+

Now, I'm using a self-signed certificate, and it looks like ceilometer doesn't have a way to tell it not to verify the certificate. Here is what I see in the logs (I'm guessing it is trying to use the public URL to talk to nova):

<131>May 18 20:16:04 node-51 ceilometer-agent-central Unable to discover resources: [Errno 1] _ssl.c:492: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
2015-05-18 20:16:04.889 31763 TRACE ceilometer.agent Traceback (most recent call last):
2015-05-18 20:16:04.889 31763 TRACE ceilometer.agent   File "/usr/lib/python2.6/site-packages/ceilometer/agent.py", line 233, in discover
2015-05-18 20:16:04.889 31763 TRACE ceilometer.agent     discovered = discoverer.discover(self, param)
2015-05-18 20:16:04 ...
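What I was hoping for is something like an insecure flag in the service_credentials section of ceilometer.conf, e.g. (assuming such an option exists in this release; I have not confirmed it):

```ini
[service_credentials]
# Hypothetical: skip certificate verification for self-signed certs.
insecure = True
```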
2015-05-18 06:56:00 -0500 marked best answer Heat stack delete fails but all the resources have been deleted

I have a heat template that I've used several times to create and delete stacks of resources. However, it looks like one time while it was deleting, my Horizon session expired and the delete failed. Now I'm not able to delete this stack, either in Horizon or via the command line. When I try, I see the below error in the logs:

2015-03-13 17:48:09.372 5920 INFO heat.engine.service [req-8f555b3f-31ad-460e-b964-ed03de3d714d None] Deleting stack jetspviptest1
2015-03-13 17:48:09.407 5920 INFO heat.engine.stack [-] Stack DELETE IN_PROGRESS (jetspviptest1): Stack DELETE started
2015-03-13 17:48:14.529 5920 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): 192.168.200.2
2015-03-13 17:48:14.705 5920 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): 192.168.200.2
2015-03-13 17:48:14.730 5920 ERROR heat.engine.stack [-] Could not find project: 27c1023fb7ad4b3bbe663c85e6594574 (HTTP 404)
2015-03-13 17:48:14.730 5920 TRACE heat.engine.stack Traceback (most recent call last):
2015-03-13 17:48:14.730 5920 TRACE heat.engine.stack   File "/usr/lib/python2.6/site-packages/heat/engine/stack.py", line 942, in delete
2015-03-13 17:48:14.730 5920 TRACE heat.engine.stack     project_id=self.stack_user_project_id)
2015-03-13 17:48:14.730 5920 TRACE heat.engine.stack   File "/usr/lib/python2.6/site-packages/heat/common/heat_keystoneclient.py", line 506, in delete_stack_domain_project
2015-03-13 17:48:14.730 5920 TRACE heat.engine.stack     project = self.domain_admin_client.projects.get(project=project_id)
2015-03-13 17:48:14.730 5920 TRACE heat.engine.stack   File "/usr/lib/python2.6/site-packages/keystoneclient/v3/projects.py", line 83, in get
2015-03-13 17:48:14.730 5920 TRACE heat.engine.stack     project_id=base.getid(project))
2015-03-13 17:48:14.730 5920 TRACE heat.engine.stack   File "/usr/lib/python2.6/site-packages/keystoneclient/base.py", line 71, in func
2015-03-13 17:48:14.730 5920 TRACE heat.engine.stack     return f(*args, **new_kwargs)
2015-03-13 17:48:14.730 5920 TRACE heat.engine.stack   File "/usr/lib/python2.6/site-packages/keystoneclient/base.py", line 333, in get
2015-03-13 17:48:14.730 5920 TRACE heat.engine.stack     self.key)
2015-03-13 17:48:14.730 5920 TRACE heat.engine.stack   File "/usr/lib/python2.6/site-packages/keystoneclient/base.py", line 135, in _get
2015-03-13 17:48:14.730 5920 TRACE heat.engine.stack     resp, body = self.client.get(url, **kwargs)
2015-03-13 17:48:14.730 5920 TRACE heat.engine.stack   File "/usr/lib/python2.6/site-packages/keystoneclient/httpclient.py", line 629, in get
2015-03-13 17:48:14.730 5920 TRACE heat.engine.stack     return self._cs_request(url, 'GET', **kwargs)
2015-03-13 17:48:14.730 5920 TRACE heat.engine.stack   File "/usr/lib/python2.6/site-packages/keystoneclient/httpclient.py", line 621, in _cs_request
2015-03-13 17:48:14.730 5920 TRACE heat.engine.stack     return self.request(url, method, **kwargs)
2015-03-13 17:48:14.730 5920 TRACE heat.engine.stack   File "/usr/lib/python2.6/site-packages/keystoneclient/httpclient.py", line 596, in request
2015-03-13 17:48:14.730 5920 TRACE heat ...
2015-05-13 14:30:30 -0500 received badge  Necromancer (source)
2015-05-13 14:25:54 -0500 answered a question How to change cpu_allocation_ratio

Thanks to Raul at Mirantis for this explanation of what is happening: The Horizon dashboard presents the total that is physically available, and now that you have modified the over-subscription ratio, the current or in-use count will increase as you deploy more instances and can surpass the available number. The reason is that the nova scheduler is the component that uses cpu_allocation_ratio to schedule instances and ultimately oversubscribe a host. The figure you see in Horizon is retrieved using the nova API, which lists the physically available cores.
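To illustrate the arithmetic with made-up numbers: Horizon shows the physical core count, while the scheduler plans against cores times the ratio.

```python
physical_cores = 16          # what the nova API (and Horizon) reports
cpu_allocation_ratio = 8.0   # what the nova scheduler multiplies by

# The scheduler keeps placing instances until this many vCPUs are in use,
# so Horizon's "used" count can exceed its "available" (physical) count.
schedulable_vcpus = int(physical_cores * cpu_allocation_ratio)
print(schedulable_vcpus)  # → 128
```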

2015-04-23 01:12:36 -0500 received badge  Famous Question (source)
2015-04-15 15:17:22 -0500 commented answer How to clean up old ceilometer data ?

Do these instructions change if you have a 3-node MongoDB setup?

2015-04-14 16:49:50 -0500 commented question heat scaling request returns 500

I had this same issue, and for me it turned out that the message it wasn't getting was from ceilometer, and the reason was that MongoDB had filled up the disk. Once MongoDB had space and ceilometer was restarted, it started working.

2015-04-01 17:42:31 -0500 received badge  Famous Question (source)