
DanielK's profile - activity

2018-06-26 11:38:45 -0600 received badge  Famous Question (source)
2017-04-08 04:22:16 -0600 received badge  Notable Question (source)
2017-03-02 13:25:37 -0600 received badge  Famous Question (source)
2017-02-02 09:31:59 -0600 received badge  Popular Question (source)
2017-01-06 04:11:13 -0600 received badge  Teacher (source)
2017-01-06 03:10:45 -0600 received badge  Editor (source)
2017-01-06 03:10:12 -0600 answered a question How to check if keystone is running

Assuming you have keystone configured to run as a wsgi app plugged into Apache (which is the standard these days), you can just do systemctl status httpd. You will see a list of httpd threads; among them will be the ones called wsgi:keystone- -DFOREGROUND. For further information you can always look into /var/log/httpd/keystone-error.log and /var/log/keystone/keystone.log
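If you want a quick one-liner, something like the following works; the sample 'ps' output below is made up so the pipeline can be shown end to end, on a real controller you would pipe ps aux instead:

```shell
# Count keystone wsgi worker threads. The sample text stands in for real
# 'ps aux' output (run 'ps aux | grep -c "[w]sgi:keystone"' on the host itself).
sample_ps='apache 2201 0.1 1.2 ... (wsgi:keystone- -DFOREGROUND)
apache 2202 0.1 1.2 ... (wsgi:keystone- -DFOREGROUND)'
printf '%s\n' "$sample_ps" | grep -c 'wsgi:keystone'
```

A count of zero means no keystone workers are up and you should check the logs mentioned above.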

2016-12-02 08:42:20 -0600 commented question Trying to get heat with keystone API v3 and trusts to work

Is there some package that I am missing which contains that driver?

BTW, this is openstack liberty and keystone 1:8.1.2-1.el7 on Centos 7

2016-12-02 08:40:55 -0600 commented question Trying to get heat with keystone API v3 and trusts to work

I don't see much else than during the debug output of the deployment itself, except this message which adds further information:

2016-12-02 14:31:11.333 8354 DEBUG keystone.common.manager [-] Failed to load 'keystone.trust.backends.sql.Trust' using stevedore: No 'keystone.trust' driver found

2016-12-02 05:32:24 -0600 commented answer nova service-list all down

Check whether your message queue is running properly on the host which you intended as your messaging agent. If you use rabbitmq you can do that by entering 'rabbitmqctl status', and also have a look at 'rabbitmqctl list_users' to see whether you have your rabbit user there.
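As a sketch of the user check (the sample output and the 'openstack' user name are assumptions; on the rabbit host you would pipe 'rabbitmqctl list_users' instead of the sample text):

```shell
# Verify the expected rabbit account exists in the list_users output.
users='guest [administrator]
openstack []'
printf '%s\n' "$users" | grep -q '^openstack' && echo "rabbit user present"
```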

2016-11-30 06:42:07 -0600 answered a question nova service-list all down

As far as the endpoint list is concerned, this is using the new openstack client

openstack endpoint list

As for the nova issues, is there anything meaningful you can see in the /var/log/nova* logs on the controller and the hypervisors? If it is a problem with the message queue you should see that there. Also check whether your hypervisors didn't for some reason get lost from your host aggregates (openstack aggregate list & openstack aggregate show, or Host Aggregates under Admin -> System in Horizon)

2016-11-30 06:23:18 -0600 asked a question Trying to get heat with keystone API v3 and trusts to work

Hello

As the title says I am trying to get this to work but I am unable to. I have migrated my test setup to keystone API V3 and all services are OK, but the heat service doesn't work as intended.

I have set my stack_user_domain_id and stack_domain_admin. I also have deferred_auth_method = trusts and trusts_delegated_roles = heat_stack_owner set. I have a separate domain called heat in which my stack_domain_admin has the admin and heat_stack_owner roles.
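For completeness, the relevant part of my heat.conf looks roughly like this (the domain ID and password are placeholders):

```ini
[DEFAULT]
deferred_auth_method = trusts
trusts_delegated_roles = heat_stack_owner
stack_user_domain_id = <ID of the heat domain>
stack_domain_admin = stack_admin
stack_domain_admin_password = <password>
```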

openstack user list --domain heat

+----------------------------------+-------------+
| ID                               | Name        |
+----------------------------------+-------------+
| 415db3f35e8445b085676c6eb73e94eb | stack_admin |
+----------------------------------+-------------+

openstack role list --domain heat

+----------------------------------+------------------+
| ID                               | Name             |
+----------------------------------+------------------+
| 1effcb0a91d0408a9b71098ac3bb98c7 | project_admin    |
| 3424d73431f84d6090a934854c596e96 | heat_stack_user  |
| 80db8ad3599d4603b16bd80983b90cda | heat_stack_owner |
| 9fe2ff9ee4384b1894a90878d3e92bab | _member_         |
| a7599f32de514ddab3d93c8380e4aec6 | admin            |
| e1143e6bb70344d68c115d4652462014 | image_admin      |
+----------------------------------+------------------+

openstack role assignment list -c Role -c User --domain heat

+----------------------------------+----------------------------------+
| Role                             | User                             |
+----------------------------------+----------------------------------+
| 80db8ad3599d4603b16bd80983b90cda | 415db3f35e8445b085676c6eb73e94eb |
| a7599f32de514ddab3d93c8380e4aec6 | 415db3f35e8445b085676c6eb73e94eb |
| a7599f32de514ddab3d93c8380e4aec6 | admin                            |
+----------------------------------+----------------------------------+

My own user also has the heat_stack_owner role in the project I am a member of in my default domain.

However, if I try to deploy a heat stack I am getting an error: ERROR: Remote error: BadRequest Expecting to find id or name in user (full debug output here: http://pastebin.com/N0R0c29i). This looks to me like the trusts are not working, especially since I can deploy this stack if I switch to password as deferred_auth_method.

Can anyone shed some light on what I might be missing there?

2016-08-08 08:40:27 -0600 commented question Persistent increase of rabbit queues

As a matter of fact, I have seen today that the queues begin to affect the rabbitmq management dashboard itself. At almost 20K active queues it began to complain about too many database entries to process and warned about increasing memory load when doing so.

2016-08-05 04:29:51 -0600 received badge  Notable Question (source)
2016-08-03 07:55:51 -0600 received badge  Popular Question (source)
2016-08-02 07:49:59 -0600 asked a question Persistent increase of rabbit queues

Hello

We are running openstack liberty on CentOS 7 with RabbitMQ 3.6.2 and Erlang 18.3.3. Over time we can observe a persistent increase of rabbit queues that never go away. We have durable queues set to false and auto_delete to true, and all the queues in question have a consumer attached, but they often remain idle for days after their initial use for data exchange once they get created.

Restarting individual openstack components like neutron, nova, cinder or heat will get rid of the queues used by those services, and restarting rabbitmq itself starts the queue count from zero. The queues then increase rapidly over a short time, after which their number climbs slowly but continuously.

I am worried that this can eventually cause resource problems with the rabbitmq server, and I was wondering whether this is working as intended or whether there is a possibility to influence this behaviour.

Is there a way to do garbage collection on idle queues which have a consumer attached? All I have found so far in the rabbitmq documentation only covers queues with no consumer.
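For anyone wanting to see which queues are sitting idle, this is the kind of filter I use; the sample lines (queue names are made up) stand in for real output, on the rabbit host you would pipe 'rabbitmqctl list_queues name messages consumers' into the awk instead:

```shell
# Print queues that have a consumer attached but no pending messages,
# i.e. the idle-but-consumed queues described above.
queues='reply_abc123 0 1
notifications.info 42 1
reply_def456 0 1'
printf '%s\n' "$queues" | awk '$2 == 0 && $3 >= 1 { print $1 }'
```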

Thank you in advance.

2016-07-17 08:12:06 -0600 received badge  Popular Question (source)
2016-07-10 11:19:44 -0600 received badge  Popular Question (source)
2016-07-05 07:30:47 -0600 asked a question Specifying rescue image only works once

Hello

I was trying to rescue a VM with a specific rescue image by using nova rescue. I could specify a rescue image once, but that was not working out for me, so I wanted to use a different one. However, after setting the rescue image once, it appears I cannot change it a second time.

Is there a way I could do that directly in the database? I looked but could not find where the image uuid is set in the db.

This is on openstack liberty BTW.

Thanks in advance

Edit: I have also looked through a dump of the complete openstack DB and did not find anything where the rescue image is set.

2016-05-04 10:34:14 -0600 asked a question puppet-keystone not adding user roles in liberty

Hello

I hope I am asking this question at the right place.

I am trying to deploy a liberty openstack from scratch, and it appears that puppet-keystone does not add the user roles. I am using the 7.0.0 version of the puppet modules. When I do openstack role list --project admin --user admin or openstack role list --project service --user neutron I get an empty response and of course puppet fails.

Once I add the roles manually with, for example, openstack role add --project admin --user admin admin, it works fine. I have set configure_user_role=true explicitly in my puppet manifests (even though that is the default) but it appears not to help.
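This is the manual workaround I ended up with (it assumes sourced admin credentials; the check is there so the snippet is a no-op on hosts without the CLI). Forcing the v3 API via the environment avoids the v2 fallback I describe below:

```shell
# Force the identity v3 API, then add the missing roles by hand.
export OS_IDENTITY_API_VERSION=3
if command -v openstack >/dev/null 2>&1; then
    openstack role add --project admin --user admin admin
    openstack role add --project service --user neutron admin
else
    echo "openstack CLI not available here"
fi
```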

The only indication I have found is that in the debug output of puppet I can see things like this: 'project list --quiet --format csv --user admin --long'

This fails because the --user parameter requires the v3 keystone api, but it appears that without any further information puppet falls back to the v2 api. I did set the version to v3 in the puppet manifest.

Is there something I am missing?

2015-11-10 05:08:34 -0600 received badge  Popular Question (source)
2015-11-10 05:08:34 -0600 received badge  Famous Question (source)
2015-11-10 05:08:34 -0600 received badge  Notable Question (source)
2015-10-29 07:40:41 -0600 received badge  Popular Question (source)
2015-09-23 03:26:06 -0600 asked a question identity_uri in nova.conf

Hello

I recently puppetized my OpenStack Kilo and in doing so I realised that the identity_uri configuration directive in nova.conf does not work as intended. The configuration reference for nova.conf mentions it as the correct directive for the internal authentication api endpoint, but if I enter it (and replace auth_url with it) I get an authentication error from the nova api: ERROR (Unauthorized): Unauthorized (HTTP 401)
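For clarity, the directives in question live in the [keystone_authtoken] section of nova.conf; a sketch of what I tried, with the host and credentials as placeholders:

```ini
[keystone_authtoken]
auth_uri = http://controller:5000
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = nova
admin_password = <password>
```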

Is that a known issue?

Kind regards

2015-08-31 03:43:54 -0600 commented answer neutron-rootwrap appears unable to kill dnsmasq processes

Thanks, good to hear. I reckon since you already committed a fix it will soon be available as an update, and the problem will not reappear after that.

2015-08-20 02:18:02 -0600 commented question neutron-rootwrap appears unable to kill dnsmasq processes

Yes, that did indeed happen.

Can that be the source of the problem? If so, would you know why that causes the issue?

Thanks

2015-08-20 02:16:07 -0600 received badge  Famous Question (source)
2015-08-14 15:57:38 -0600 received badge  Notable Question (source)
2015-08-14 15:57:38 -0600 received badge  Popular Question (source)
2015-08-13 03:52:15 -0600 received badge  Enthusiast
2015-08-11 06:39:22 -0600 received badge  Student (source)
2015-08-11 06:12:17 -0600 asked a question neutron-rootwrap appears unable to kill dnsmasq processes

Hello everyone

I have run into a problem with openstack kilo on CentOS 7 where a specific hypervisor's instances did not get addresses assigned from the network controller anymore. Investigation led me to the dhcp-agent logs, which stated that the agent was "Unable to enable dhcp" for that machine. This was due to an error thrown by the neutron-rootwrap

/usr/bin/neutron-rootwrap: Unauthorized command: kill -9 14137 (no filter matched)

14137 is the pid of the dnsmasq process in this case.

I checked my dhcp.filters in /etc/neutron/rootwrap.d and made sure it matches the newest one at https://github.com/openstack/neutron/blob/master/etc/neutron/rootwrap.d/dhcp.filters

I also checked the rootwrapper code and, as far as I could see, there is indeed no way the KillFilter which neutron-rootwrap is using can take a pid as an argument, so it would make sense for that command to fail.
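For reference, these are the KillFilter entries from the dhcp.filters I compared against (quoted from memory, so treat as a sketch; paths may differ per distribution):

```ini
[Filters]
kill_dnsmasq: KillFilter, root, /sbin/dnsmasq, -9, -HUP
kill_dnsmasq_usr: KillFilter, root, /usr/sbin/dnsmasq, -9, -HUP
```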

Is this a bug or did I miss something?

BTW, once I killed the dnsmasq process by hand everything worked fine again but I am afraid that problem could re-appear in the future.

Kind regards

2015-06-11 13:31:29 -0600 asked a question Instance failed to spawn - Kilo/CentOS 7

Hello

After some previous success with Juno, I have built a new openstack environment with Kilo and CentOS7, everything fine so far, however, when I try to start a test VM, it almost immediately ends up in the ERROR state. If I look through my nova-compute log I see the following:

TRACE nova.compute.manager [instance: dbef4afc-0f4a-4725-8106-db0b6cca69e3] libvirtError: Hook script execution failed: internal error: Child process (LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin /etc/libvirt/hooks/qemu instance-00000008 prepare begin -) unexpected exit status 1: Traceback (most recent call last):
TRACE nova.compute.manager [instance: dbef4afc-0f4a-4725-8106-db0b6cca69e3] File "/etc/libvirt/hooks/qemu", line 20, in <module>
TRACE nova.compute.manager [instance: dbef4afc-0f4a-4725-8106-db0b6cca69e3] from cloudutils.configFileOps import configFileOps
TRACE nova.compute.manager [instance: dbef4afc-0f4a-4725-8106-db0b6cca69e3] ImportError: No module named cloudutils.configFileOps

It looks to me like I am missing some python module, but I am pretty sure I installed all packages according to the Openstack Kilo Redhat/CentOS installation manual.
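One thing I noticed: the cloudutils module in the traceback is not part of OpenStack, and /etc/libvirt/hooks/qemu is a hook script that libvirt runs on every instance start, so it appears to be a leftover from some other agent. A quick check on the hypervisor:

```shell
# Report whether a qemu hook script is installed on this hypervisor; if it is,
# libvirt will run it (and fail the spawn if the script errors out).
if [ -x /etc/libvirt/hooks/qemu ]; then
    echo "qemu hook present"
else
    echo "no qemu hook"
fi
```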

Does anybody have an idea what I could have done wrong there or what I might have forgotten?

Thanks in advance for any help.