
Software Deployment in Heat: Problem with os-collect-config

asked 2017-02-02 14:13:13 -0500 by nrado

updated 2017-02-03 09:15:08 -0500 by mathias

Hi all,

For the purpose of testing SoftwareConfig and SoftwareDeployment with Heat, I started building a custom Xenial image with diskimage-builder. I followed a guide and ran disk-image-create like this:

diskimage-builder/bin/disk-image-create vm ubuntu os-collect-config os-refresh-config os-apply-config heat-config heat-config-script package-installs -t raw -o ubuntu-software-config.raw

When I try to deploy my Heat stack with software deployments applied, the deployment resources get stuck in the "Create In Progress" state. I suppose the stack is waiting for a signal via the Heat API notifying the service of success, and this signal never arrives.

This is the stack I test with:
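The template itself did not survive in this post; purely as an illustration, a minimal stack of the kind being tested might look like the sketch below (the image name, flavor, and script contents are my assumptions; the server resource is named "node" to match the resource_name in the os-collect-config output quoted later):

```yaml
heat_template_version: 2016-10-14

resources:
  node:
    type: OS::Nova::Server
    properties:
      image: ubuntu-software-config   # the custom image built above (assumed name)
      flavor: m1.small                # assumption
      user_data_format: SOFTWARE_CONFIG

  config:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      config: |
        #!/bin/bash
        echo "hello from software deployment"

  deployment:
    type: OS::Heat::SoftwareDeployment
    properties:
      config: { get_resource: config }
      server: { get_resource: node }
```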

I looked into the deployed machine and found that the auth_url in the [heat] section of os-collect-config.conf points to the internal URL of my Keystone service:

[DEFAULT]
command = os-refresh-config
collectors = ec2
collectors = heat
collectors = local

[heat]
auth_url =
user_id = a10a8bb5d98c49758ef672be40e37a60
password = e8624d4cd01748d599183eb7432c9c22
project_id = 24194fcc24264fcd8c2885093bdca553
stack_id = deploy-test/5423fe82-6457-4823-a958-259467efa3a5
resource_name = node
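Since the symptom is a deployment that never signals back, one quick sanity check is whether that auth_url is even reachable from inside the instance. A rough sketch (the parsing helper and the sample Keystone URL below are mine, not part of os-collect-config):

```shell
# Sketch: extract auth_url from the [heat] section of an os-collect-config
# style conf file, so it can be probed from inside the instance.
get_auth_url() {
    awk -F' *= *' '/^\[/{s=$0} s=="[heat]" && $1=="auth_url"{print $2}' "$1"
}

# Demo against a sample file (the Keystone URL here is made up):
cat > /tmp/occ-sample.conf <<'EOF'
[DEFAULT]
command = os-refresh-config

[heat]
auth_url = http://keystone.internal:5000/v3
EOF

get_auth_url /tmp/occ-sample.conf
```

On the instance itself you would then run something like `curl -sf "$(get_auth_url /etc/os-collect-config.conf)"`; if that fails, the deployment can never signal completion.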

That internal address cannot be reached from an instance by design.

I want to understand two things:

  1. How does the auth_url get into the instance?
  2. Assuming it gets there by being injected by Heat in some way, how can I configure Heat to set the public URL?


Added more information to initial post.

mathias ( 2017-02-03 08:03:29 -0500 )

I want to share our heat.conf: I believe the clients_* sections might have something to do with it, but I struggle to find any documentation on them.

mathias ( 2017-02-03 12:33:28 -0500 )

4 answers


answered 2017-02-11 09:21:35 -0500 by Volodymyr

updated 2017-02-11 14:50:22 -0500

Hi NRado,

"auth_url" comes from auth_uri variable in [clients_keystone] section of heat.conf, in my case it looks in the way:

auth_uri =
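For illustration, such a heat.conf fragment might look like this (the hostname below is a placeholder I made up; the point is that instances must be able to resolve and reach it):

```ini
[clients_keystone]
# os-collect-config inside the instance will be handed this URL,
# so it must be the publicly reachable Keystone endpoint.
auth_uri = http://keystone.public.example.com:5000
```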

Heat's "Software Deployment" configs are available from endpoints list (public one):

doka@lagavulin:~$ openstack endpoint list | grep heat
| heat-cfn | cloudformation | admin    | http://controller:8000/v1               |
| heat     | orchestration  | internal | http://controller:8004/v1/%(tenant_id)s |
| heat-cfn | cloudformation | internal | http://controller:8000/v1               |
| heat     | orchestration  | admin    | http://controller:8004/v1/%(tenant_id)s |
| heat-cfn | cloudformation | public   |                                         |
| heat     | orchestration  | public   |                                         |

So, check the auth_uri variable in your heat.conf and your endpoint list, and make sure the hostnames in those URLs are reachable from your VMs; otherwise Heat's software configuration cannot work.

Hope this'll help.



Hi Volodymyr,

I wanted to correct your first answer, but you beat me to it. I checked the code of os-collect-config and came to the same conclusion as you: it is indeed the auth_uri from the [clients_keystone] section. Thanks anyway!

nrado ( 2017-02-11 15:31:06 -0500 )

Hi NRado, thanks :)

Volodymyr ( 2017-02-12 15:06:00 -0500 )

answered 2017-02-03 10:55:02 -0500 by zaneb

It looks like it uses the auth_url of the token that you pass to Heat when you create the stack. So if you're using the InternalURL in your local credentials when you connect to Heat, try using the PublicURL instead.



Thanks for your answer. I checked my rc file; OS_AUTH_URL points to the public URL of Keystone. I also ran "openstack stack create" with "--debug" and checked all the curl commands. They all point to the public URL.

mathias ( 2017-02-03 12:10:57 -0500 )

Hmm, I'm not sure where it's coming from then :(

zaneb ( 2017-02-03 13:58:53 -0500 )

Hi all, do you have a clear solution to this problem? We are running Queens. I don't understand this post.

Ange ( 2019-03-22 08:02:50 -0500 )

answered 2020-04-27 22:54:30 -0500 by Saibal Dey

The root cause of the issue is that hostname/DNS resolution fails for the auth URLs and a few other service URLs (Magnum, Heat, etc.). Generally, while configuring the OpenStack services, we use "controller" instead of the IP, for example:

For Keystone:

keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne

For Magnum:

openstack endpoint create --region RegionOne \
  container-infra public http://controller:9511/v1

So when the k8s master node gets provisioned (as a VM), it starts a couple of containers, among them "heat-container-agent", which in turn uses the above-mentioned configs to get k8s registered and configured with OpenStack. Since the container cannot resolve the "controller" DNS entry, all communication with the OpenStack APIs fails. There are a couple of fixes for this:

  1. Configure an internal DNS server so that the "controller" URLs resolve, or
  2. Use the IP instead of "controller" in the above-mentioned configs.

Option 2 is fine for a POC; option 1 should be considered for production or HA OpenStack clusters.
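As a quick way to tell which option you need, you can check from inside the VM or container whether the name resolves at all. This helper is just an illustration, not part of any OpenStack tooling:

```shell
# Report whether a hostname resolves on this host/container.
check_resolves() {
    if getent hosts "$1" > /dev/null; then
        echo "resolves"
    else
        echo "does not resolve"
    fi
}

check_resolves localhost    # should always resolve via /etc/hosts
check_resolves controller   # fails unless internal DNS (option 1) is set up
```

If "controller" does not resolve inside the heat-container-agent container, you are in exactly the failure mode described above.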

We opted for option 2 for our POC and reconfigured Keystone, Heat, and Magnum with the IP instead of the "controller" hostname.

keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
  --bootstrap-admin-url \
  --bootstrap-internal-url \
  --bootstrap-public-url \
  --bootstrap-region-id RegionOne


openstack endpoint create --region RegionOne container-infra public
openstack endpoint create --region RegionOne container-infra internal
openstack endpoint create --region RegionOne container-infra admin

And finally the configs look like this:

root@svc-datarobot-srv-1:~# openstack endpoint list | grep 500
| 1eef88762bfb4fb6beb15a013bf8848b | RegionOne | keystone | identity | True | public   | |
| a138267c2b9049b28b76763779391b22 | RegionOne | keystone | identity | True | admin    | |
| a344d28bb4ad4815a6ab1aaaa3293da2 | RegionOne | keystone | identity | True | internal | |

root@svc-datarobot-srv-1:~# openstack endpoint list | grep magnum
| 55659785c9bf422d8ebaa73b8616fb32 | RegionOne | magnum | container-infra | True | admin    | |
| ad302747086b4279bdbf929b9c08d668 | RegionOne | magnum | container-infra | True | public   | |
| b342972c7ef04c3ea1e8d72a4a3e6e70 | RegionOne | magnum | container-infra | True | internal | |

If you want to debug further, you can SSH into the k8s master (a VM with a Fedora image) and check "cloud-init-output.log".

edit flag offensive delete link more

answered 2019-03-22 07:56:48 -0500 by Ange

Hi all, do you have a clear solution to this problem? We are running Queens.


