mathias's profile - activity

2019-11-02 07:14:28 -0500 received badge  Notable Question (source)
2019-11-02 07:14:28 -0500 received badge  Famous Question (source)
2019-04-18 14:20:04 -0500 received badge  Notable Question (source)
2019-04-18 14:20:04 -0500 received badge  Famous Question (source)
2018-08-20 08:57:04 -0500 received badge  Notable Question (source)
2018-08-20 08:57:04 -0500 received badge  Famous Question (source)
2018-07-10 04:50:12 -0500 received badge  Notable Question (source)
2018-07-10 04:50:12 -0500 received badge  Famous Question (source)
2018-05-29 14:47:14 -0500 received badge  Nice Answer (source)
2018-04-16 14:14:53 -0500 received badge  Notable Question (source)
2018-04-16 14:14:53 -0500 received badge  Famous Question (source)
2018-04-04 18:41:05 -0500 received badge  Popular Question (source)
2018-03-19 13:47:46 -0500 asked a question ERROR heat.engine.resource WaitConditionTimeout: 0 of 1 received

Hi everyone, I want to start a new thread about the error above. I know there is a thread with the same question, but people seem to mistake the problem for a signalling issue. Let's start over:

I received a bug report from one of our users about a Heat stack that used to work but now fails. Of course, my first thought was that the signal was not getting from the VM to Heat for some reason (wrong API (cfn), SSL verification failure, no connectivity at all, etc.), but I was quickly able to rule those out. Reason: the Heat dependency graph would be something like

OS::Nova::Server -> OS::Heat::SoftwareConfig -> OS::Heat::WaitConditionHandle -> OS::Heat::WaitCondition

and I could see that none of OS::Nova::Server, OS::Heat::SoftwareConfig and OS::Heat::WaitConditionHandle had been created at all. So it could not be an issue with the signalling itself, but with the creation of the OS::Heat::WaitCondition, without which none of the other resources would be created in the first place.
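For context, the wiring I am talking about looks roughly like this in a template (a minimal sketch, not our production template; resource names, count, timeout and the server properties are made up):

```yaml
resources:
  wait_handle:
    type: OS::Heat::WaitConditionHandle

  wait_condition:
    type: OS::Heat::WaitCondition
    properties:
      handle: { get_resource: wait_handle }
      count: 1
      timeout: 600

  server:
    type: OS::Nova::Server
    properties:
      image: my-image          # hypothetical
      flavor: m1.small         # hypothetical
      user_data_format: RAW
      user_data:
        str_replace:
          template: |
            #!/bin/sh
            # signal Heat once boot-time configuration is done
            wc_notify --data-binary '{"status": "SUCCESS"}'
          params:
            wc_notify: { get_attr: [wait_handle, curl_cli] }
```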

I dug into the logs and found this:

2018-03-19 19:27:34.085 22 DEBUG heat.engine.scheduler [req-381316ce-00fc-43f6-8c51-d5b7e1873dd6 - lab-xlarge-1 - - -] Task create from HeatWaitCondition "wait_condition" Stack "test" [639dd987-4c12-46a8-873e-10488cb57d83] running step /usr/lib/python2.7/dist-packages/heat/engine/
2018-03-19 19:27:34.107 22 DEBUG heat.engine.scheduler [req-381316ce-00fc-43f6-8c51-d5b7e1873dd6 - lab-xlarge-1 - - -] Task create from HeatWaitCondition "wait_condition" Stack "test" [639dd987-4c12-46a8-873e-10488cb57d83] sleeping _sleep /usr/lib/python2.7/dist-packages/heat/engine/
2018-03-19 19:27:35.108 22 DEBUG heat.engine.scheduler [req-381316ce-00fc-43f6-8c51-d5b7e1873dd6 - lab-xlarge-1 - - -] Task create from HeatWaitCondition "wait_condition" Stack "test" [639dd987-4c12-46a8-873e-10488cb57d83] running step /usr/lib/python2.7/dist-packages/heat/engine/
2018-03-19 19:27:35.132 22 INFO heat.engine.resources.openstack.heat.wait_condition [req-381316ce-00fc-43f6-8c51-d5b7e1873dd6 - lab-xlarge-1 - - -] HeatWaitCondition "wait_condition" Stack "test" [639dd987-4c12-46a8-873e-10488cb57d83] Timed out (0 of 1 received)
2018-03-19 19:27:35.133 22 INFO heat.engine.resource [req-381316ce-00fc-43f6-8c51-d5b7e1873dd6 - lab-xlarge-1 - - -] CREATE: HeatWaitCondition "wait_condition" Stack "test" [639dd987-4c12-46a8-873e-10488cb57d83]
2018-03-19 19:27:35.133 22 ERROR heat.engine.resource Traceback (most recent call last):
2018-03-19 19:27:35.133 22 ERROR heat.engine.resource   File "/usr/lib/python2.7/dist-packages/heat/engine/", line 763, in _action_recorder
2018-03-19 19:27:35.133 22 ERROR heat.engine.resource     yield
2018-03-19 19:27:35.133 22 ERROR heat.engine.resource   File "/usr/lib/python2.7/dist-packages/heat/engine/", line 865, in _do_action
2018-03-19 19:27:35.133 22 ERROR heat.engine.resource     yield self.action_handler_task(action, args=handler_args)
2018-03-19 19:27:35.133 22 ERROR heat.engine.resource   File "/usr/lib/python2.7/dist-packages/heat/engine/", line 352, in wrapper
2018-03-19 19:27:35.133 22 ERROR heat.engine.resource     step = next(subtask)
2018-03-19 19:27:35.133 22 ERROR heat.engine.resource   File "/usr/lib/python2.7/dist-packages/heat/engine/", line 816, in action_handler_task
2018-03-19 19:27:35.133 22 ERROR heat.engine.resource     done = check(handler_data)
2018-03-19 19:27:35.133 22 ERROR heat.engine.resource   File "/usr/lib ...
2018-02-24 05:50:50 -0500 marked best answer RabbitMQ: Number of connections

Hi, I was just looking at my RabbitMQ and noticed a number of connections that seemed unusual to me:

root@amqp01:~# rabbitmqctl list_connections
Listing connections ...
openstack     35057   running
openstack     35058   running
openstack     35059   running
openstack     35060   running
openstack     35061   running
openstack     35062   running
openstack     44562   running
openstack     44563   running
openstack     57400   running
openstack     57401   running
openstack     57402   running
openstack     57403   running
openstack     57404   running
openstack     57405   running
openstack     57406   running
openstack     57407   running
openstack     57408   running
openstack     57409   running
openstack     57410   running
openstack     57411   running
openstack     57412   running
openstack     57413   running
openstack     57414   running
openstack   57087   running
openstack   57088   running
openstack   57089   running
openstack   57090   running
openstack   57091   running
openstack   57092   running
openstack   57093   running
openstack   57094   running
openstack   57095   running
openstack   57096   running
openstack   56637   running
openstack   56638   running
openstack   56639   running
openstack   56640   running
openstack   56641   running
openstack   56642   running
openstack   56643   running
openstack   56644   running
openstack   56645   running
root@amqp01:~#

(The connections above come from the nova API node, the neutron API node, the neutron network node, kvm00 and kvm01.)

The number of existing connections, as well as the lack of any connection from (Keystone), made me suspicious. I have no working environment I can use for comparison, so please give me some advice. I am also having other problems with RabbitMQ at the moment, and I was thinking this might put me on the right track.
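For anyone debugging something similar: one way to get an overview is to list the peer host per connection and count them. A small sketch of that aggregation (the `user` and `peer_host` columns do exist in `rabbitmqctl list_connections`, but the sample data below is made up):

```python
from collections import Counter

# Hypothetical output of: rabbitmqctl -q list_connections user peer_host
sample = "openstack\tkvm00\nopenstack\tkvm00\nopenstack\tkvm01\nopenstack\tnova-api"

# Count connections per originating host
per_host = Counter(line.split('\t')[1] for line in sample.splitlines())
print(per_host.most_common())
```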

2018-02-24 05:50:43 -0500 received badge  Nice Question (source)
2018-02-18 07:40:17 -0500 asked a question dnsmasq: prefer hosts file, then forward

I have a situation with Neutron and Designate where the dnsmasq process running in the tenant network seems to be forwarding requests to before it actually consults its addn_hosts file:

$ ps aux | grep dnsmasq | grep 06188a34-399f-4bef-aabc-687bb934ab02
nobody    7340  0.0  0.0  49992  2868 ?        S    13:27   0:00 dnsmasq --no-hosts --no-resolv --strict-order --except-interface=lo --pid-file=/var/lib/neutron/dhcp/06188a34-399f-4bef-aabc-687bb934ab02/pid --dhcp-hostsfile=/var/lib/neutron/dhcp/06188a34-399f-4bef-aabc-687bb934ab02/host --addn-hosts=/var/lib/neutron/dhcp/06188a34-399f-4bef-aabc-687bb934ab02/addn_hosts --dhcp-optsfile=/var/lib/neutron/dhcp/06188a34-399f-4bef-aabc-687bb934ab02/opts --dhcp-leasefile=/var/lib/neutron/dhcp/06188a34-399f-4bef-aabc-687bb934ab02/leases --dhcp-match=set:ipxe,175 --bind-interfaces --interface=tap5ba1f53d-fc --dhcp-range=set:tag0,,static,86400s --dhcp-option-force=option:mtu,1500 --dhcp-lease-max=256 --conf-file=/etc/neutron/dnsmasq.conf --server= --server=

$ cat /etc/neutron/dnsmasq.conf

$ cat /var/lib/neutron/dhcp/06188a34-399f-4bef-aabc-687bb934ab02/addn_hosts
host-192-168-0-1
host-192-168-0-10 vm1
host-192-168-0-11 vm3

When I run nslookup from a VM, I see the following in dnsmasq.log immediately:

Feb 18 13:37:00 dnsmasq[119]: 5 query[PTR] from
Feb 18 13:37:00 dnsmasq[119]: 5 /var/lib/neutron/dhcp/06188a34-399f-4bef-aabc-687bb934ab02/addn_hosts is
Feb 18 13:37:00 dnsmasq[119]: 6 query[AAAA] from
Feb 18 13:37:00 dnsmasq[119]: 6 forwarded to

The nslookup returns after approx. 30s with the correct response.

How can I configure the order of precedence to be exactly the opposite?
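One direction that might be worth checking (an assumption on my part, not a verified fix): dnsmasq's `local=/<domain>/` option marks a domain as local, so queries for it are answered from local data only and never forwarded upstream. Something like this in /etc/neutron/dnsmasq.conf, with `example.com` standing in for the tenant's domain:

```
# Answer queries for this domain only from local data (hosts files, leases);
# never forward them to the upstream servers. "example.com" is a placeholder.
local=/example.com/
```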

2017-12-09 04:45:50 -0500 commented answer Heat: Using YAQL on SoftwareConfig Json Outputs

Your expression gives me an output_value of "null", and output_error is "u'interfaces'". It might be what you were suggesting last: a bug.

2017-12-07 09:09:24 -0500 asked a question Heat: Using YAQL on SoftwareConfig Json Outputs

Hi, my goal is to run scripts in my instances via software deployments that output the names of the network interfaces that connect to particular subnets. Here's the script:

#!/usr/bin/env python

import subprocess
import re
import os

p = subprocess.Popen(['ip', 'route', 'list'], stdout=subprocess.PIPE)
out, err = p.communicate()
lines = out.split('\n')

interfaces = []
for line in lines:
  if re.match("^[0-9./]* dev", line):
    tiles = line.split(" ")
    interfaces.append({'nic': tiles[2], 'cidr': tiles[0]})

result = {
  'interfaces': interfaces,
}

# write the result map to the Heat outputs path
path = os.environ['heat_outputs_path'] + ".map"
file = open(path, "w")
file.write(str(result))
file.close()
The output looks something like

  {
    'interfaces': [
      { 'nic': 'ens3', 'cidr': '' },
      { 'nic': 'ens5', 'cidr': '' }
    ]
  }

Now I am trying to create an output for my Heat template that returns the name of the interface in the network, so I expect the value of the output to be "ens3".

So far, the only thing I have managed is to have yaql return the entire data structure again :D

$ openstack stack output show test-nic-map --all
| Field | Value                                                                             |
| map   | {                                                                                 |
|       |   "output_value": "{'interfaces': [{'nic': 'ens3', 'cidr': ''}]}",  |
|       |   "output_key": "map",                                                            |
|       |   "description": "No description given"                                           |
|       | }                                                                                 |
| nic   | {                                                                                 |
|       |   "output_value": "{'interfaces': [{'nic': 'ens3', 'cidr': ''}]}",  |
|       |   "output_key": "nic",                                                            |
|       |   "description": "No description given"                                           |
|       | }                                                                                 |

All my other attempts failed miserably. Can someone give me a hint on how to do this?


        expression: "$.data"
        data: { get_attr: [ nic_map, map ] }


I just tried


        expression: "$.data.interfaces.where($.cidr = '')"
        data: { get_attr: [ nic_map, map ] }

which fails with

$ openstack stack output show test-nic-map --all
| Field | Value                                                                             |
| map   | {                                                                                 |
|       |   "output_value": "{'interfaces': [{'nic': 'ens3', 'cidr': ''}]}",  |
|       |   "output_key": "map",                                                            |
|       |   "description": "No description given"                                           |
|       | }                                                                                 |
| nic   | {                                                                                 |
|       |   "output_value": null,                                                           |
|       |   "output_error": "Unknown function \"#property#interfaces\"",                    |
|       |   "output_key": "nic",                                                            |
|       |   "description": "No description given"                                           |
|       | }                                                                                 |

I am getting the feeling I am using it wrong.
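For clarity, this is what I want the expression to compute, written in plain Python (the cidr values here are made up). Note that if output_value arrives as a string rather than a map, as the quoting in the output above suggests, it would have to be parsed first:

```python
import json

# Hypothetical deployment output; in the real case it may arrive as a string
raw = ('{"interfaces": [{"nic": "ens3", "cidr": "192.168.0.0/24"},'
       ' {"nic": "ens5", "cidr": "10.0.0.0/24"}]}')
data = json.loads(raw) if isinstance(raw, str) else raw

# Roughly: $.data.interfaces.where($.cidr = '192.168.0.0/24').select($.nic).first()
nic = next(i['nic'] for i in data['interfaces'] if i['cidr'] == '192.168.0.0/24')
print(nic)
```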

2017-07-24 21:10:34 -0500 received badge  Popular Question (source)
2017-06-03 06:25:42 -0500 asked a question Cinder Migration: Could not determine a suitable URL

Hi, I am trying to get Cinder migration of attached volumes working in Ocata between two Ceph clusters. Migrating detached volumes works fine, but it currently fails when they're attached.

My cinder.conf looks like this:

[DEFAULT]
enabled_backends = standard,standard2

[keystone_authtoken]
auth_uri =
auth_url =
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = **********************
memcache_security_strategy = ENCRYPT
memcache_secret_key = **********************
memcached_servers =,,

[standard]
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
backend_host = rbd:volumes
rbd_pool = volumes
volume_backend_name = standard
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_secret_uuid = 827ef4dc-1441-4633-aaf0-16cd1ebffa07

[standard2]
rbd_ceph_conf = /etc/ceph/ceph2.conf
rbd_user = cinder2
backend_host = rbd:volumes
rbd_pool = volumes
volume_backend_name = standard2
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_secret_uuid = 4d48e8c0-3557-424d-93e7-5b2f98a6f5ae

First, it failed with something like "Endpoint not found" in one of the Cinder logs. I could fix that by setting nova_endpoint_template and nova_endpoint_admin_template, as suggested in one of the questions asked here. I set it equal to what is set in Keystone's service catalog, which works fine:

nova_endpoint_template =
nova_endpoint_admin_template =

Then I got a "key error" for "tenant_id". Checking the default values in the configuration reference, I found out it had to be "project_id", so I changed that to

nova_endpoint_template =
nova_endpoint_admin_template =
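For anyone hitting the same thing, the templates follow this general shape (the host and port below are placeholders; the real values come from the Keystone service catalog):

```
nova_endpoint_template = http://controller:8774/v2.1/%(project_id)s
nova_endpoint_admin_template = http://controller:8774/v2.1/%(project_id)s
```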

It's weird, since Keystone shows "tenant_id" and that works, but okay. I now got past the "key error" but hit the next message, which I cannot figure out:

DiscoveryFailure: Could not determine a suitable URL for the plugin

Of course, this line alone doesn't help a lot, so I uploaded everything I got after running

openstack volume migrate --host rbd:volumes@standard2#standard2 vol1

in any of Cinder's logs on the controller:

Any ideas are very welcome.

cheers Mathias

2017-05-18 06:52:33 -0500 received badge  Famous Question (source)
2017-05-06 18:38:13 -0500 received badge  Famous Question (source)
2017-04-12 10:14:13 -0500 received badge  Famous Question (source)
2017-04-06 03:09:05 -0500 commented answer diskimage-builder: Software Deployments Broken In Xenial

Nobody has any ideas?

2017-03-30 02:37:57 -0500 commented answer diskimage-builder: Software Deployments Broken In Xenial

Sorry, I was busy for a while. This is what I ran: Here is the output:

2017-03-30 02:20:01 -0500 received badge  Notable Question (source)
2017-03-16 17:02:30 -0500 received badge  Popular Question (source)
2017-03-16 12:29:43 -0500 commented answer diskimage-builder: Software Deployments Broken In Xenial

Thanks for your response, I tried your suggestion. The output I get is: No image is created at all.

2017-03-16 11:16:39 -0500 commented question diskimage-builder: Software Deployments Broken In Xenial

To fix the absence of python-keystoneclient just add the package to

2017-03-16 09:53:58 -0500 commented question diskimage-builder: Software Deployments Broken In Xenial

Actually, I just noticed that trusty doesn't work either. It's still not python-requests, but it cannot find the keystone client:

  File "/usr/local/bin/heat-config-notify", line 143, in main
    ks = ksclient.Client(
AttributeError: 'NoneType' object has no attribute 'Client'
2017-03-16 08:08:21 -0500 commented question diskimage-builder: Software Deployments Broken In Xenial

I read your post before I wrote this and figured the problems are different. As described above, everything works fine on 14.04, so you should not see any problems with the python-requests package.

2017-03-16 06:17:53 -0500 asked a question diskimage-builder: Software Deployments Broken In Xenial

I am trying to build images for Ubuntu trusty and xenial that work with OS::Heat::SoftwareDeployment, using diskimage-builder. I installed diskimage-builder with

pip install diskimage-builder

and run the following script to build the images:

export DISTRIBUTION="ubuntu"
export DIB_RELEASE="trusty"

git clone
git clone

export ELEMENTS_PATH=tripleo-image-elements/elements:heat-agents/
export BASE_ELEMENTS="$DISTRIBUTION selinux-permissive"
export AGENT_ELEMENTS="os-collect-config os-refresh-config os-apply-config"
export DEPLOYMENT_BASE_ELEMENTS="heat-config heat-config-script heat-config-ansible"

TIMESTAMP=$(date +%Y%m%d%H%M%S)

disk-image-create vm $BASE_ELEMENTS $AGENT_ELEMENTS \

openstack image create --disk-format raw --container-format bare \

To build xenial, I just set DIB_RELEASE to "xenial". The build process runs fine and the images are uploaded. Then I deploy instances from both images and check the os-collect-config service. On trusty, the service is running, and executing os-collect-config manually results in a successful run, as expected.

On xenial though, this is what I see:

root@test:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 16.04.2 LTS
Release:    16.04
Codename:   xenial
root@test:~# os-collect-config --debug --force
Source [heat_local] Unavailable.
Traceback (most recent call last):
  File "/usr/local/bin/os-collect-config", line 11, in <module>
  File "/opt/stack/venvs/os-collect-config/lib/python3.5/site-packages/os_collect_config/", line 262, in __main__
  File "/opt/stack/venvs/os-collect-config/lib/python3.5/site-packages/os_collect_config/", line 166, in collect_all
    content = module.Collector(**collector_kwargs).collect()
  File "/opt/stack/venvs/os-collect-config/lib/python3.5/site-packages/os_collect_config/", line 75, in collect
    md = config_drive.get_metadata()
  File "/opt/stack/venvs/os-collect-config/lib/python3.5/site-packages/os_collect_config/", line 173, in get_metadata
    bd = config_drive()
  File "/opt/stack/venvs/os-collect-config/lib/python3.5/site-packages/os_collect_config/", line 166, in config_drive
    for bd in all_block_devices():
  File "/opt/stack/venvs/os-collect-config/lib/python3.5/site-packages/os_collect_config/", line 155, in all_block_devices
    for device in out.split('\n\n'):
TypeError: a bytes-like object is required, not 'str'

I have no idea what this means or where it might come from. Any help is very much appreciated.
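The error itself is a classic Python 2 vs. 3 issue: os-collect-config runs under Python 3.5 here, where subprocess pipes return bytes, and calling .split() with a str separator on bytes raises exactly this TypeError. A minimal reproduction with made-up device names, and the usual fix:

```python
out = b"vda\n\nvdb\n"          # subprocess pipes return bytes on Python 3

try:
    out.split('\n\n')           # str separator on bytes -> TypeError
except TypeError as exc:
    print(exc)                  # a bytes-like object is required, not 'str'

# The usual fix: decode first (or split with a bytes separator, b'\n\n')
devices = out.decode('utf-8').split('\n\n')
print(devices)
```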

2017-03-06 11:27:57 -0500 received badge  Notable Question (source)
2017-02-28 11:27:30 -0500 commented question "An error happened in the driver" -> when I try to create l7 policy

I have the same problem and wanted to ask the same question. Still hoping for answers :) push

2017-02-08 21:58:07 -0500 received badge  Self-Learner (source)
2017-02-08 21:58:05 -0500 marked best answer Neutron Internal DNS

I tried to get Neutron's internal DNS running but have some problems. I made two changes in the Neutron configuration as described in

neutron.conf:

dns_domain =

ml2_conf.ini:

[ml2]
extension_drivers = port_security,dns

I seemed to have connectivity problems to newly created VMs, therefore I removed the port_security extension and it worked again. Is port_security somehow related to dns?

To test the functionality, I created a network, subnet and router, and tracked the contents of the respective dnsmasq hosts file:


This looks alright to me. The three hosts represent two DNS servers and the default gateway. Then I added an instance via Horizon, found the corresponding neutron port and looked at the details:

| Field                 | Value                                                                                          |
| admin_state_up        | True                                                                                           |
| allowed_address_pairs |                                                                                                |
| binding:vnic_type     | normal                                                                                         |
| created_at            | 2017-01-28T14:13:23                                                                            |
| description           |                                                                                                |
| device_id             | fa852010-61ff-4b92-b404-2c09c7da9c06                                                           |
| device_owner          | compute:zone00                                                                                 |
| dns_assignment        | {"hostname": "dnstest-2", "ip_address": "", "fqdn": ""} |
| dns_name              | dnstest-2                                                                                      |
| extra_dhcp_opts       |                                                                                                |
| fixed_ips             | {"subnet_id": "12a7b5d1-3e6b-4b59-8c56-dd7825cd4f48", "ip_address": ""}           |
| id                    | 3e2352e9-27ce-4d34-a50f-5808e2477ce2                                                           |
| mac_address           | fa:16:3e:ae:00:1c                                                                              |
| name                  |                                                                                                |
| network_id            | 3b823d53-be85-418e-8b88-cad8217360ab                                                           |
| security_groups       | 35b2fb2e-b539-4338-998e-5289750580b2                                                           |
| status                | ACTIVE                                                                                         |
| tenant_id             | 70e60cbf343d486d944534025f824f76                                                               |
| updated_at            | 2017-01-28T14:13:39                                                                            |

The port shows everything as expected, but dnsmasq's hosts file was extended by this line:


The hostname is "host-192-168-102-5" instead of "dnstest-2" and the domain is "openstacklocal" instead of "". Why does it seem to be working for stuff created on the network nodes but not for instances?

cheers Mathias

2017-02-08 21:55:59 -0500 received badge  Popular Question (source)
2017-02-07 06:50:53 -0500 received badge  Popular Question (source)
2017-02-07 02:51:03 -0500 asked a question Heat: get_attr - Get All Ports

Hi folks, I've got an OS::Nova::Server and need a list of all the ports it has. I found that I can get at that info via the "addresses" attribute. This attribute is a map with network names and IDs as keys; "port" is a field in each entry. The challenge is to retrieve all ports of the instance irrespective of, and without knowing, the network name or ID. So I am looking for something like a wildcard get_attr:

get_attr: [ myserver, addresses, *, port ]

Does something similar exist?
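For illustration, the flattening I am after looks like this in plain Python (the shape of the addresses map is simplified and all values are made up):

```python
# Hypothetical value of the server's "addresses" attribute
addresses = {
    'net-a': [{'addr': '192.0.2.10', 'port': 'port-1'}],
    'net-b': [{'addr': '198.51.100.7', 'port': 'port-2'}],
}

# Collect the "port" field from every entry of every network,
# without naming any network explicitly
ports = [entry['port'] for entries in addresses.values() for entry in entries]
print(sorted(ports))
```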

2017-02-06 07:32:36 -0500 marked best answer Neutron: Linux Bridge and VLANs

Hi, I am trying to set up Neutron with Linux bridges and VLANs for segmentation. It is all running on the same box, so all files are valid for all Neutron services. Here is my configuration:


verbose = True
debug = True
lock_path = $state_path/lock
bind_host =
bind_port = 9696
core_plugin = ml2
service_plugins = router
auth_strategy = keystone
allow_overlapping_ips = False
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url =
nova_region_name = regionOne
nova_admin_username = nova
nova_admin_tenant_id = a59b52e659aa482b92eb74e95394a17aa
nova_admin_password = OpenStack123
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
auth_host =
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = OpenStack123
connection = mysql://neutron:OpenStack123@


interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
use_namespaces = True 
external_network_bridge =


interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True


type_drivers = flat,vlan
tenant_network_types = vlan
mechanism_drivers = linuxbridge
flat_networks = physnet1
network_vlan_ranges = physnet1,physnet2:100:200
physical_interface_mapping = physnet1:eth0,physnet2:br-transport
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = False
sql_connection = mysql://neutron:OpenStack123@

I create my provider network like this:

neutron net-create public01 --provider:network_type=flat --provider:physical_network=physnet1 --router:external True
neutron subnet-create --name public01_subnet --gateway public01 --disable-dhcp --allocation-pool start=,end=

Then I create a router:

neutron router-create router01

When I set the gateway for the router using

neutron router-gateway-set router01 public01

I get no explicit error message from the client, but

1) It's not working: I cannot ping the IP on the external network.

2) I see some messages in the logs that caught my attention:

2015-01-30 13:31:24.840 5263 WARNING neutron.agent.securitygroups_rpc [req-ccb8c730-c3cc-4e91-a3dc-05f0a4bc672e None] Driver configuration doesn't match with enable_security_group
2015-01-30 13:31:21.672 5092 WARNING neutron.plugins.ml2.managers [req-cc768eb4-0743-4da3-98ef-63bd6e26847f None] Failed to bind port fde9b12c-1b5d-4ae1-98bb-e687faf655dd on host cloud
2015-01-30 13:31:21.694 5092 WARNING neutron.plugins.ml2.plugin [req-cc768eb4-0743-4da3-98ef-63bd6e26847f None] In _notify_port_updated(), no bound segment for port fde9b12c-1b5d-4ae1-98bb-e687faf655dd on network 3ed8024b-1649-4ed0-b931-71566ee8b3cc
2015-01-30 13:31:24.877 5092 WARNING neutron.plugins.ml2.rpc [req-ccb8c730-c3cc-4e91-a3dc-05f0a4bc672e None] Device tapfde9b12c-1b requested by agent lb00012e4dd15f on network 3ed8024b-1649-4ed0-b931-71566ee8b3cc not bound, vif_type: binding_failed

server.log: l3_agent.log: linuxbridge_agent.log:

Anybody knows what's wrong?

2017-02-06 07:32:36 -0500 received badge  Nice Answer (source)
2017-02-06 04:14:45 -0500 commented question Glance not working with swift.

The only really half-useful log message here is "503 Service Unavailable", which looks like the load balancer does not have any active pool members, or something similar. I suggest you enable verbose and debug logging, then try again, and also post the CLI command you're running with "--debug".

2017-02-06 04:12:00 -0500 answered a question Openstack from Mirantis for stability

You could certainly argue that commercial distributions lead to a more stable cloud, since their reduction of functionality and architectural choice somewhat restricts you, but also restricts the potential for misconfiguration or simply bad architecture. The fact that those distributions are always behind the release schedule of OpenStack also lets you benefit from fixes backported into the code base you're running. At the same time, of course, there might be functionality you need that is not supported, or not there at all.

2017-02-06 04:08:17 -0500 answered a question What would happen if there was no Heat (Orchestration) in OpenStack?

I ran OpenStack without Heat for a long time. No other service needs it, it is simply useful for users of your cloud.

2017-02-06 04:06:53 -0500 answered a question What exactly is a physnet ? Why is it used to specify a provider network ? Why cant we use the ethernet interfaces like eth0,eth1 directly to identify the physical network ? And where is the mapping of physnet implemented in openstack ?

A physnet is a physical network, as opposed to a virtual one created through protocols like GRE or VXLAN. As VXLAN and GRE are IP-based protocols, there is no need to tell the system about any NIC or similar: it knows how to connect because the target IP address is known, and the rest can be looked up via the system's routing table. For other network types, such as flat or vlan, the driver needs to know, for example, "where" to create the VLAN. In such cases, you define a physnet that could be called "physnet1" and configure ML2 to create VLAN-type networks on that physnet. The physnet is mapped to an interface such as eth0, bond0 or similar, depending on your network configuration.

Look at this example configuration I am running:

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = openvswitch,l2population
path_mtu = 1550

[ml2_type_vlan]
network_vlan_ranges = physnet1:123:123,physnet1:234:234

[ml2_type_flat]
flat_networks = physnet1

[ml2_type_vxlan]
vni_ranges = 1:1000
vxlan_group =

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

[agent]
tunnel_types = vxlan
l2_population = true
arp_responder = true

[ovs]
bridge_mappings = physnet1:br-ex
local_ip =

I define one physnet, "physnet1", which maps to "br-ex", an Open vSwitch bridge automatically created by Neutron. You can see that the flat and vlan type sections reference physnet1; the VXLAN section does not.

2017-02-03 12:33:28 -0500 commented question Software Deployment in Heat: Problem with os-collect-config

I want to share our heat.conf: I believe the clients_* sections might have something to do with it, but I struggle to find any documentation on them.

2017-02-03 12:10:57 -0500 commented answer Software Deployment in Heat: Problem with os-collect-config

Thanks for your answer. I checked my rc file; OS_AUTH_URL points to the public URL of Keystone. I also ran "openstack stack create" with "--debug" and checked all the curl commands. They all point to the public URL.