
zaneb's profile - activity

2019-09-16 21:51:32 -0500 commented answer HEAT template with OS::Cinder::Quota failing

Please raise a bug. It should allow None as a value, because that occurs during validation.

2019-09-05 10:20:16 -0500 commented answer Create Multiple instance with fixed IP and port in openstack using heat template

Sorry, forgot that yaml requires you to quote the string when it begins with %. Fixed now.

2019-09-03 10:10:40 -0500 answered a question Heat stack falling with volume in use

The size of a volume in an OS::Nova::Server cannot be updated in place, and hence changing it will result in the server resource being replaced with a new one.

2019-08-26 16:18:00 -0500 answered a question Create Multiple instance with fixed IP and port in openstack using heat template

Because intrinsic functions are evaluated before index substitution, you can only select the IP by index inside the sm_port.yaml template. So you need to add another parameter index to sm_port.yaml and pass it along with the full list of IPs like this:

          index: "%index%"
          sm_oam_ip: {get_param: sm_oam_ip}

then inside sm_port.yaml you can select the right IP by doing:

              fixed_ips:
                - {ip_address: {get_param: [sm_oam_ip, {get_param: index}]}}
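(Placeholder anchor — not used.)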
2019-08-14 08:25:21 -0500 received badge  Famous Question (source)
2019-07-29 10:54:04 -0500 commented answer How to avoid accidental removal of the stack with the heat stack-delete

One thing you can do is set deletion_policy: retain on servers so that if the stack does get deleted the servers are not removed. (If you really want to delete you can always update the stack again to remove the deletion policy first.)
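A minimal sketch of that (resource name and property values here are placeholders):

```yaml
resources:
  my_server:
    type: OS::Nova::Server
    deletion_policy: retain   # keep the server even if the stack is deleted
    properties:
      image: my-image         # placeholder
      flavor: m1.small        # placeholder
```

(The lowercase spelling retain is valid from heat_template_version 2016-10-14 onward; older template versions use Retain.)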

2019-07-26 14:56:48 -0500 answered a question HEAT template with OS::Cinder::Quota failing

Try using:

project: {get_resource: Dev}
2019-07-22 18:46:45 -0500 answered a question How to avoid accidental removal of the stack with the heat stack-delete

There's no way to lock a stack. The stack delete command does prompt for confirmation if run from an interactive shell, to try to reduce accidents.

2019-07-17 10:25:20 -0500 answered a question create a bootable volume from an image and launch an instance from this volume

You don't need both a block_device_mapping property and an OS::Cinder::VolumeAttachment resource. Pick one.
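For instance, if you pick the block device mapping route, a minimal sketch (image and flavor names are placeholders) might look like:

```yaml
resources:
  boot_volume:
    type: OS::Cinder::Volume
    properties:
      image: my-image   # placeholder: image that makes the volume bootable
      size: 10
  server:
    type: OS::Nova::Server
    properties:
      flavor: m1.small  # placeholder
      block_device_mapping_v2:
        - volume_id: {get_resource: boot_volume}
          boot_index: 0
```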

2019-06-28 16:07:48 -0500 commented answer HEAT template to attach an unknown number of ports (opened on different networks) to a server

It doesn't work because the result of get_attr is None during validation. The only workaround I can think of is to push the server into a nested template, so that you can use get_param in the for_each list, and pass the ResourceGroup attribute in as a parameter.

2019-06-20 09:05:25 -0500 answered a question Heat Template - Can we give hostname with comma separated from which instance should start with hostname which is specified

Yes, this is possible (TripleO uses this trick), but I wouldn't recommend it.

First, you must define the scaled unit of your ResourceGroup to be a Heat template (rather than an OS::Nova::Server - the template will contain the server). The properties you pass to the scaled resource become the parameters to the template, so in addition to any parameters you want to pass through to the Server you also need to pass the list of hostnames and the index, which is represented by the string "%index%" (unless you change the index_var property). Then, inside the template you can get the right hostname from the list by doing {get_param: [hostnames, {get_param: index}]}.
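A minimal sketch of the pattern (file names and parameter names are hypothetical):

```yaml
# main template
resources:
  group:
    type: OS::Heat::ResourceGroup
    properties:
      count: 3
      resource_def:
        type: server.yaml          # nested template as the scaled unit
        properties:
          index: "%index%"         # replaced with the member's index
          hostnames: {get_param: hostnames}
```

```yaml
# server.yaml
parameters:
  index:
    type: number
  hostnames:
    type: comma_delimited_list
resources:
  server:
    type: OS::Nova::Server
    properties:
      name: {get_param: [hostnames, {get_param: index}]}
```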

However, IMHO the need to assign predetermined hostnames is overrated. Doing so means you have to manually manage the hostnames to always keep the list the same length as the size of the ResourceGroup, and skip any indices that have been blacklisted. It's easier to just define a hostname that includes %index% and have Heat automatically assign them. It's even better to not specify a name at all and let Heat choose one based on the name of the stack and the index. Finally, ResourceGroup itself is overrated, because it uses various suboptimal techniques when doing e.g. rolling updates purely for the purposes of fussily maintaining the contiguous list of indices. Personally, I would just use an AutoScalingGroup and accept the default randomly-generated hostnames.

2019-06-20 08:45:18 -0500 answered a question OpenStack heat get ResourceGroup ip address and set to params

I don't see any reason your template shouldn't work, but a simpler way than using str_replace would be to define an input to your software config and pass the value to the deployment:

  get_ip:
    type: OS::Heat::SoftwareConfig
    properties:
      group: script
      inputs:
        - name: minion_group_ip
      config: |
        #!/bin/bash
        echo ${minion_group_ip} > /opt/ip.txt

  deployment:
    type: OS::Heat::SoftwareDeployment
    properties:
      signal_transport: HEAT_SIGNAL
      config: {get_resource: get_ip}
      server: {get_resource: master}
      input_values:
        minion_group_ip: {get_attr: [minion_group, first_address]}
2019-05-29 15:42:47 -0500 answered a question publicURL endpoint for orchestration service in null region not found

This was due to a regression in Heat when multi-region support was added for Software Deployments. Setting the region_name_for_services explicitly in heat.conf (and of course reloading heat-engine) is a good workaround. This patch should eliminate the issue.

2019-05-15 15:08:03 -0500 commented answer increasing size of user data in nova api

I'm not a member of the Nova team, but: you have to have some limit, and somebody will always want to go over it; 64k is already quite generous (AWS allows only 16k); it's been like this ~forever and nothing bad has happened.

2019-05-15 15:00:54 -0500 answered a question heat orchestration

Historically there have been two uses of the term 'orchestration' in the OpenStack community. I suspect that this may be at the root of your question.

The first (and wrong ;) use of the term was to describe a way to organise the internal state of the Nova API to keep track of the various external things it has to co-ordinate in order to start an instance - nova-compute, cinder, neutron, &c. This is not what Heat is doing.

Heat's purpose is to orchestrate the end-user's application: it allows you to provide a declarative model of the desired topology, and then infers the dependencies and makes all of the API calls necessary to make it a reality (as well as keeping track of errors, allowing updates to the model, and so on).

2019-05-07 19:33:27 -0500 answered a question increasing size of user data in nova api

The SQL schema is here, and to update it you'll also need to add a new DB migration.

The limit is almost certainly entirely unrelated to the size of a flavor description; I would start by looking at the user_data field in the Instance table.

You should probably be aware that the chances of any change to the limit being accepted upstream are... indistinguishable from zero, though of course you're welcome to try.

Devstack is a collection of shell scripts for building an OpenStack installation for testing purposes, and is not related to your issue.

2019-05-02 04:57:54 -0500 received badge  Nice Answer (source)
2019-04-18 07:35:27 -0500 answered a question Use a Heat condition to exclude item from list

Yes, you can, using the if macro (you're already correctly creating the resources conditionally). I'm not 100% sure it's possible without some duplication, but this should certainly work:

server_init:
  type: OS::Heat::MultipartMime
  properties:
    parts:
      if:
        - playbook_included
        - - config: { get_attr: [ssh_keys_admins, resource.cloud_config_ssh] }
          - config: { get_attr: [playbook_runner, resource.playbook_runner] }
        - - config: { get_attr: [ssh_keys_admins, resource.cloud_config_ssh] }
2019-04-09 15:33:10 -0500 commented question Zaqar can't trigger Mistral workflow.

Looks like the issue is somewhere between the Zaqar queue and the Mistral workflow execution. It's not clear whether the Zaqar notification is failing to fire or if Mistral is failing to receive it, but those would be the places to start debugging.

2019-04-04 10:59:40 -0500 answered a question RDO(Stein) supported Python version

First, a correction: upstream OpenStack is tested on Python 2.7 and 3.5 in Rocky, and Python 2.7 and 3.6 in Stein. The first Python3-only release of OpenStack will be the 'U' release.

RDO is mainly tested on Python 2.7. I believe there has been some work on Python3 support in Fedora, but I wouldn't expect it to be nearly as well-tested as 2.7 until CentOS 8 is released.

2019-03-22 08:51:12 -0500 edited question Software Deployment in Heat: Problem with os-collect-config

Hi all, I have a problem with COE cluster creation (deploying k8s). On the k8s master node I get this error:

 Authorization failed: Unable to establish connection to http://172.24.xx.xxx:5000/v3/auth/tokens
Mar 21 13:47:11 k8s-gdbdfoalflcp-master-0 runc[2256]: Source [heat] Unavailable.
Mar 21 13:47:11 k8s-gdbdfoalflcp-master-0 runc[2256]: /var/lib/os-collect-config/local-data not found. Skipping

It seems that the master node wants to connect to the internal URL reported, http://172.24.xx.xxx:5000/v3/auth/tokens.

I don't understand where this parameter is set. I installed the Magnum project manually by following this documentation for CentOS Queens: https://docs.openstack.org/magnum/queens/install/install-rdo.html, on my overcloud deployed with TripleO.

In magnum.conf I have specified only the public endpoint for auth_uri and auth_url in the [keystone_authtoken] section:

[DEFAULT]
transport_url = rabbit://guest:nYBy8MUw2CfAhTRwbwBY2p8sT@internal:5672//

[api]
host = publicip

[barbican_client]

[certificates]
cert_manager_type = local

[cinder]

[cinder_client]
region_name = regionOne

[cluster]

[cluster_heat]

[cluster_template]

[conductor]

[cors]

[database]
connection = mysql+pymysql://magnum:iniziale@internalip/magnum

[docker]

[docker_registry]

[drivers]

[glance_client]

[heat_client]

[keystone_auth]

[keystone_authtoken]
auth_uri = http://public:5000/v3
auth_version = v3
memcached_servers = internal:11211
project_domain_id = default
project_name = service
user_domain_id = default
password = xxxxx
username = magnum
auth_url = http://publicip:5000
auth_type = password
admin_user = magnum
admin_password = xxxxx
admin_tenant_name = service

[magnum_client]

[matchmaker_redis]

[neutron_client]

[nova_client]

[oslo_concurrency]
lock_path = /var/lib/magnum/tmp

[oslo_messaging_amqp]

[oslo_messaging_kafka]

[oslo_messaging_notifications]
driver = messagingv2

[oslo_messaging_rabbit]

[oslo_messaging_zmq]

[oslo_policy]

[profiler]

[quotas]

[trust]
trustee_domain_name = magnum
trustee_domain_admin_name = magnum_domain_admin
trustee_domain_admin_password = xxxxx
trustee_keystone_interface = public

[x509]

Can anyone help me, please?

2019-03-22 08:49:22 -0500 answered a question Software Deployment in Heat: Problem with os-collect-config

The problem will be in heat.conf rather than magnum.conf. You likely either have the auth_uri set to the internal IP or the endpoint_type set to Internal in the clients_keystone section.

We're looking at finding a way to ensure that in the future software deployments always get an external auth URL regardless of how Heat is configured.

2019-03-04 12:57:40 -0500 answered a question Autoscaling not working

I suspect that the server_group query in Aodh refers to an OS::Nova::ServerGroup, not to the ID of the stack. (Nova doesn't know that the servers are part of a Heat stack, and Nova events are where Gnocchi gets its measurements from.) Since there is no Server Group with that ID in Nova, the alarm is not seeing any measurements.

2019-02-27 16:21:57 -0500 commented answer Use condition to enable/disable one of the properties

That's weird, that is Nova complaining that the {get_resource: stack-server-group} part is resolving to None. Was it working before?

2019-02-26 21:08:49 -0500 received badge  Taxonomist
2019-02-19 17:00:20 -0500 answered a question Use condition to enable/disable one of the properties

Use the if macro:

          scheduler_hints:
            if:
              - cnd_enable_anti_affinity
              - group: {get_resource: stack-server-group}
              -
2019-02-17 15:31:54 -0500 answered a question Is there a way in a Heat Template to constrain a parameter to provider nets?

There doesn't appear to be one, no. The complete list of available custom constraints can be found in setup.cfg. You'd be welcome to propose a new one here.

2019-02-17 15:25:38 -0500 edited question Is there a way in a Heat Template to constrain a parameter to provider nets?

I'm trying to create a router in a heat template and want to be able to present a list of provider networks to choose from when creating the external gateway. This would be similar to the way the neutron.network custom constraint shows the networks available in the project. I tried using neutron.providernet but that doesn't seem to work.

  public_net:
    type: string
    description: ID or name of public network for which floating IP address will be allocated
    constraints:
      - custom_constraint: neutron.providernet
2019-02-12 15:58:07 -0500 answered a question HEAT Trigger with project creation

The Adjutant project is building workflows for project (i.e. tenant) management (amongst other things). That is probably the best place to implement what you're trying to do. (It is an official OpenStack project.)

The other option is to trigger something off of events. To do this as a regular user, you could trigger off an event alarm and use a Zaqar subscription to execute a Mistral workflow that creates the network environment (example in another context). However, IIUC you are speaking as a cloud operator and therefore have no need to restrict yourself to APIs a regular user can access. So you could write something to listen to oslo.notification events directly and make the API calls you want.

2019-02-12 00:33:49 -0500 answered a question HEAT Formatting list into string for load balancer pool member

You want to create a template that contains a single Server and a PoolMember. Then use the name of that template (instead of OS::Nova::Server) as the 'type' of the asg's 'resource' property. That way you'll have a scaling group where the scaled unit is a server that is a member of the load balancer pool.
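A sketch of what that nested template might look like (the file name, parameters, and property values are hypothetical):

```yaml
# lb_member.yaml -- the scaled unit: one server plus its pool membership
parameters:
  pool:
    type: string
  subnet:
    type: string
resources:
  server:
    type: OS::Nova::Server
    properties:
      flavor: m1.small        # placeholder
      image: my-image         # placeholder
      networks:
        - network: my-net     # placeholder
  member:
    type: OS::Neutron::LBaaS::PoolMember
    properties:
      pool: {get_param: pool}
      address: {get_attr: [server, first_address]}
      protocol_port: 8443
      subnet: {get_param: subnet}
```

In the main template, the asg's resource property then becomes type: lb_member.yaml, with the pool and subnet passed in as properties (e.g. pool: {get_resource: lbpool}).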

2019-02-12 00:30:31 -0500 edited question HEAT Formatting list into string for load balancer pool member

FYI, I am a user of a stack on the Ocata release. Hello, I have been experimenting and have not been able to find a working solution. I am defining n node members to start my auto-scaling group (I actually don't care about the ASG; it's just a convenient way to spin up many nodes in a single block using a parameter) and I would like to place those n members into LBaaS pool membership. The closest I can get is retrieving the IP addresses of all nodes as a list; however, the address field in a pool member wants a single IP string, not a list. Worse, that field won't allow any manipulation using a split. I've tried using repeat here too and it didn't work; it would just add only the last address. Code:

heat_template_version: 2017-02-24
parameters:
 ....
resources:
  asg:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      desired_capacity: {get_param: servercount}
      max_size: {get_param: servercount}
      resource:
        type: OS::Nova::Server
        properties:
          name: perf
          flavor: { get_param: instance_type }
          image: { get_param: image_id }
          networks:
            - network: <defined>
          tags: [{get_param: environment}]
          user_data:
            str_replace:
              template: {get_file: my_instance_user_data.sh}
              params:
   ...

  lb:
    type: OS::Neutron::LBaaS::LoadBalancer
    properties:
      name: {get_param: environment}
      provider: vmwareedge
      vip_subnet: DMZLow-LS-Net
  lblistener:
    type: OS::Neutron::LBaaS::Listener
    properties:
      loadbalancer: {get_resource: lb}
      name: {get_param: environment}
      protocol: HTTPS
      protocol_port: 443
  lbpool:
    type: OS::Neutron::LBaaS::Pool
    properties:
      lb_algorithm: ROUND_ROBIN
      listener: {get_resource: lblistener}
      name: {get_param: environment}
      protocol: HTTPS

  lbpoolmember:
    type: OS::Neutron::LBaaS::PoolMember
    properties:
      address: <how do I populate n number of servers without statically copying this block? even then how do I get just one server ip here if it won't let me split from a list?>
      pool: {get_resource: lbpool}
      protocol_port: 8443
      subnet: DMZLow-LS-Net

  lbhealthcheck:
    type: OS::Neutron::LBaaS::HealthMonitor
    properties:
      delay: 5
      max_retries: 4
      pool: {get_resource: lbpool}
      timeout: 10
      type: TCP
      url_path: /status
outputs:
  server_ips:
    description: >
      This is a list of first ip addresses of the servers in the group
      for a specified network.
    value: {get_attr: [asg, outputs_list, first_address]}
2019-01-30 20:17:40 -0500 answered a question Heat snaphost

There is an unofficial tool called Flame that is designed to do this for a subset of resource types. That might get you part of the way there.

2019-01-30 01:20:15 -0500 answered a question CircularDependencyException - Accessing a property within the resource

No, there's no way to access the attributes from the same resource. Attributes aren't available until the resource is complete, while properties must be available before the resource is started. That's why you'll always get a CircularDependencyError if you try it.

One alternative is to use a SoftwareDeployment to pass the data you want to the server, since this runs after the server is created.

2019-01-30 01:04:32 -0500 answered a question overcloud undercloud

Yes.

The undercloud is a single machine (often bare-metal, but can be run in a manually managed VM) running a set of OpenStack services, including Ironic. The Red Hat product that provides this functionality is called Director, but the upstream project is TripleO. Only the operators of the main cloud interact with the undercloud.

Ironic on the undercloud is used to provision a bunch of bare-metal servers which are configured as OpenStack controller, compute, and storage nodes. This happens over the provisioning network. In recent versions of TripleO these services are deployed in containers; they are never deployed in VMs. This OpenStack installation is known as the overcloud, and it's the cloud that actual users are expected to interact with.

End-user Nova workloads run in VMs on the overcloud compute nodes.

2019-01-28 16:37:22 -0500 answered a question what is the best infrastructure for openstack ?

Generally speaking, there's no reason to run on top of Xen (and you certainly shouldn't run on top of other hypervisors, like KVM, that don't support nested virt well).

The Rackspace cloud does, I believe, run on top of Xen. This enables them to dynamically manage their server inventory, and to do things like deploying test clouds. If you too have spent millions of dollars on custom deployment tooling that relies on Xen as an L0 hypervisor, then you should consider it. However, if you're asking this question then you haven't, and you shouldn't.

2019-01-27 17:49:16 -0500 commented answer Adding dynamic network interfaces using HEAT template

Please open a new question, but the short answer is you can use the addresses attribute of the server.

2019-01-27 17:39:37 -0500 answered a question openstack heat

In the controller. Director is just the installer for the main OpenStack cloud (though it's also based on OpenStack).

2019-01-24 14:08:28 -0500 received badge  Nice Answer (source)
2019-01-21 22:08:50 -0500 answered a question Adding dynamic network interfaces using HEAT template

Intrinsic functions (like get_resource and repeat) are evaluated from the inside out, so in this case Heat will try to resolve {get_resource: <%port%>}, which obviously fails.

Some options:

  • Don't explicitly create an OS::Neutron::Port, but instead allow OS::Nova::Server to create the ports you need.
  • Generate the template externally to Heat with only the port resources you need and reference only those in the Server network config.
  • Include all potential ports in the template and enable or disable them from the environment with conditionals, then use the if macro to choose to reference them or not in the Server network config.
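For the third option, the server's network config might look something like this (the port and condition names are hypothetical):

```yaml
resources:
  server:
    type: OS::Nova::Server
    properties:
      networks:
        if:
          - second_port_enabled          # condition set from the environment
          - - port: {get_resource: port1}
            - port: {get_resource: port2}
          - - port: {get_resource: port1}
```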
2019-01-21 17:20:51 -0500 answered a question What is the best way to make condition to "openstack update stack" ?

Definitely the second one IMHO.

2019-01-01 20:28:14 -0500 answered a question Enable SELinux on KVM Guest

Fedora and CentOS both include SELinux by default and would certainly be an easier choice.

Ubuntu's default choice of MAC is AppArmor, not SELinux. The Debian wiki has instructions for setting it up that should work on Ubuntu, but it's far more likely that something has gone wrong with this process than anything to do with OpenStack. SELinux is a kernel feature that has nothing to do with the hypervisor, although as you've noted it may rely on the bootloader setting it up correctly.

I assume you're using a Cinder volume and not ephemeral storage (although IIUC even with ephemeral storage the changes to the bootloader should survive a reboot).

2019-01-01 14:21:01 -0500 edited question Aodh unable to notify scalingpolicy

We have set up OpenStack Ocata metering with Ceilometer, Gnocchi, and Aodh with the SQL backend.

The alarm is getting data and changing state from insufficient data to ok and alarm, but Aodh throws the following error and is unable to call the scaling policy:

2018-12-28 10:45:57.612 24166 DEBUG aodh.notifier [-] Notifying alarm 51d17032-4f1b-4757-bdad-783924c73cd9 with action SplitResult(scheme=u'trust+http', netloc=u'c18177041c6b4c5a8c1fce69f5c8f89a:delete@openstack-server:8004', path=u'/v1/1017debf6f4147dcb7ea20115c132311/stacks/a1/63338a93-8133-460a-94a6-87ce267656a5/resources/scaleup_policy/signal', query='', fragment='') _handle_action /usr/lib/python2.7/site-packages/aodh/notifier/__init__.py:138
2018-12-28 10:45:57.827 24166 ERROR aodh.notifier [-] Unable to notify alarm 51d17032-4f1b-4757-bdad-783924c73cd9
2018-12-28 10:45:57.827 24166 ERROR aodh.notifier Traceback (most recent call last):
2018-12-28 10:45:57.827 24166 ERROR aodh.notifier   File "/usr/lib/python2.7/site-packages/aodh/notifier/__init__.py", line 140, in _handle_action
2018-12-28 10:45:57.827 24166 ERROR aodh.notifier     notifier.notify(action, alarm_id, alarm_name, severity,
2018-12-28 10:45:57.827 24166 ERROR aodh.notifier   File "/usr/lib/python2.7/site-packages/aodh/notifier/trust.py", line 45, in notify
2018-12-28 10:45:57.827 24166 ERROR aodh.notifier     headers = {'X-Auth-Token': keystone_client.get_auth_token(client)}
2018-12-28 10:45:57.827 24166 ERROR aodh.notifier   File "/usr/lib/python2.7/site-packages/aodh/keystone_client.py", line 58, in get_auth_token
2018-12-28 10:45:57.827 24166 ERROR aodh.notifier     return client.session.auth.get_access(client.session).auth_token
2018-12-28 10:45:57.827 24166 ERROR aodh.notifier   File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/base.py", line 136, in get_access
2018-12-28 10:45:57.827 24166 ERROR aodh.notifier     self.auth_ref = self.get_auth_ref(session)
2018-12-28 10:45:57.827 24166 ERROR aodh.notifier   File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/generic/base.py", line 198, in get_auth_ref
2018-12-28 10:45:57.827 24166 ERROR aodh.notifier     return self._plugin.get_auth_ref(session, **kwargs)
2018-12-28 10:45:57.827 24166 ERROR aodh.notifier   File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/v3/base.py", line 167, in get_auth_ref
2018-12-28 10:45:57.827 24166 ERROR aodh.notifier     authenticated=False, log=False, **rkwargs)
2018-12-28 10:45:57.827 24166 ERROR aodh.notifier   File "/usr/lib/python2.7/site-packages/keystoneclient/session.py", line 545, in post
2018-12-28 10:45:57.827 24166 ERROR aodh.notifier     return self.request(url, 'POST', **kwargs)
2018-12-28 10:45:57.827 24166 ERROR aodh.notifier   File "/usr/lib/python2.7/site-packages/positional/__init__.py", line 101, in inner
2018-12-28 10:45:57.827 24166 ERROR aodh.notifier     return wrapped(*args, **kwargs)
2018-12-28 10:45:57.827 24166 ERROR aodh.notifier   File "/usr/lib/python2.7/site-packages/keystoneclient/session.py", line 445, in request
2018-12-28 10:45:57.827 24166 ERROR aodh.notifier     raise exceptions.from_response(resp, method, url)
2018-12-28 10:45:57.827 24166 ERROR aodh.notifier BadRequest: Expecting to find domain in user. The server could not comply with the request since it is either malformed or otherwise incorrect. The client is assumed to be in error. (HTTP 400) (Request-ID: req-1f4c3cd8-3672-4286-990a-15c8f620f01a)
2018-12-28 10:45:57 ...
2018-12-12 19:51:25 -0500 commented question [solved in Py3.7] Rocky openstackclient: issubclass() arg 1 must be a class

I think if you run the command with the --debug flag (i.e. openstack --debug network create ...) then it should show you the stack trace that will reveal where the problem is.

2018-12-06 21:38:42 -0500 answered a question I want to get involved in OpenStack but don't know what I want to work on.

The Technical Committee maintains a 'Help most needed' list.

2018-11-29 01:57:59 -0500 received badge  Nice Answer (source)
2018-11-28 10:50:03 -0500 answered a question Type in ResourceGroup and get_param

The problem here is actually on the client side. You can get the file name from a parameter, but the client isn't smart enough to automatically upload it for you like it usually does.

Heat won't allow you to reference local files on the Heat server, and the Heat server can't access local files on the client, for hopefully obvious reasons. It would work if you used an http/https URL that is accessible to the server (e.g. you could upload the template file to Swift).

A better way is probably to map the file in the environment using a custom type name, instead of in the parameters. e.g. your template would be something like:

 resources:
    cluster_nodes:
      type: OS::Heat::ResourceGroup
      properties:
        count: 5
        resource_def:      
           type: My::Custom::Type

and you'd pass an environment file like:

resource_registry:
  My::Custom::Type: file.yaml

Then the client will know that it needs to upload the file for you, and you can switch to a different file by passing a different environment.

Finally, if you're using the Rocky release (or later) you can upload all of the files to a Swift container, and instantiate the main template from there. Heat will download all of the files in the container, so you can select which one you want with a parameter and it will always be available.

2018-11-28 10:35:06 -0500 commented question HEAT creating multiple servers

So what you're asking for is a way to create 100 stacks from the same template, each with a different environment file? The short answer is no, but depending on what you are setting in the environment files, there may be a way to do it within a template using one of the scaling group resource types.

2018-11-27 23:46:45 -0500 received badge  Necromancer (source)