pas-ha's profile - activity

2020-07-07 02:39:48 -0600 commented question Simple Step-by-Step Guide to Install All OpenStack Services Manually

I'm not sure it will be 'step-by-step' enough for you, but Matthew Treinish (a veteran of the OpenStack community) wrote a blog post a couple of years ago where he installs OpenStack manually, from the official tarballs (not even PyPI), trying to follow only the official docs: https://blog.kortar.org/?p=380

2020-05-22 03:30:52 -0600 answered a question 404 on authentication error

Looks like the fernet key that was used to encrypt the token when Keystone issued it has already been rotated out, or is simply not present on the keystone node that attempts to validate the token. In effect, Keystone cannot decrypt the token: decryption produces garbage, which is "not a recognized Fernet token", as the error says.

So I suggest you:

  • check that your keys are properly distributed after they have been rotated, see https://docs.openstack.org/keystone/l...
  • check the number of fernet keys and how frequently they are rotated versus how long your token expiration is, to ensure fernet keys are not rotated out too soon. Also factor in service auth if it is enabled (it allows validating expired tokens for some additional time). See https://docs.openstack.org/keystone/l... for more details
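
To make the failure mode concrete, here is a toy, stdlib-only sketch of key rotation (real Keystone tokens are Fernet-encrypted payloads, not plain HMAC-signed ones; this only models "a token validates only while the issuing key is still in the repository"):

```python
import hashlib
import hmac
import os

SIG_LEN = hashlib.sha256().digest_size  # 32 bytes

def issue(key, payload):
    """Sign the payload with the current primary key."""
    return payload + hmac.new(key, payload, hashlib.sha256).digest()

def validate(key_repository, token):
    """The token is recognized only if some key in the repository signed it."""
    payload, sig = token[:-SIG_LEN], token[-SIG_LEN:]
    return any(
        hmac.compare_digest(hmac.new(k, payload, hashlib.sha256).digest(), sig)
        for k in key_repository
    )

key_old, key_new = os.urandom(32), os.urandom(32)
token = issue(key_old, b"user: demo")

print(validate([key_new, key_old], token))  # issuing key still present -> True
print(validate([key_new], token))           # issuing key rotated out   -> False
```

If the second check is what happens on your node, the fix is distributing the full key set, not just the newest key.
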
2020-05-14 02:08:30 -0600 commented question token is not getting generated from keystone

The port 35357 was historically dedicated to Keystone's admin endpoint. AFAIU, with the drop of Keystone v2 support there is no longer any need for it (everything formerly 'admin'-specific can be performed on the standard endpoints), so installations are free to not deploy a separate admin endpoint at all.

2020-05-14 01:59:53 -0600 answered a question Can I make API calls using Python SDK?

Yes, with openstacksdk you can make "raw" requests, like:

import openstack

# Connect using a named cloud from your clouds.yaml
cloud = openstack.connect(cloud="your cloud name from clouds.yaml")

# Issue a raw GET against the Compute API; the session handles
# authentication and resolves the endpoint from the service catalog
response = cloud.compute.get("/servers")
servers = response.json()["servers"]

The "Connection" object exposes various 'services' (like "compute" or "identity") that behave much like requests Sessions already mounted at the appropriate (versioned) API endpoint from the keystone catalog, plus higher-level convenience methods that wrap the raw-request ones.

2020-04-13 02:29:52 -0600 answered a question Release of Golang SDK for OpenStack

You may have missed that rackspace/gophercloud (https://github.com/rackspace/gophercl...) was migrated to a separate org:

https://github.com/gophercloud/gopher...

That one is actively maintained (at the time of this writing, a fresh version 0.10.0 was just released) and AFAIK is the one used e.g. in the OpenStack cloud provider for Kubernetes.

2020-01-27 01:59:44 -0600 commented answer Freezer API can't connect to Keystone v3 API

I'm not sure what the identity_uri option is in Freezer, but the problem seems to occur at the step where keystonemiddleware tries to validate the incoming token, and goes to the v2 Keystone API to do so. Check the values of auth_url and auth_version in the [keystone_authtoken] config section.
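
For reference, a minimal sketch of that section pinning auth to Keystone v3 (the hostname is a placeholder, and your deployment will need the usual credential options as well):

```ini
[keystone_authtoken]
# versioned v3 endpoint so keystonemiddleware does not fall back to v2
auth_url = http://controller:5000/v3
auth_version = v3
auth_type = password
```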

2019-12-10 07:17:13 -0600 commented answer How to avoid accidental removal of the stack with the heat stack-delete

I just found this feature in AWS CFN today: https://docs.aws.amazon.com/AWSCloudF... It may be worth implementing something like that in Heat as well.

2016-12-21 02:19:58 -0600 answered a question ironic stand alone

Hi Rick,

it all depends on what you need from your Ironic installation. Using Ironic as an integrated part of OpenStack brings its own benefits, like deploying Nova instances to baremetal transparently for the user and using Neutron for network isolation. The currently "required" services to use Ironic as part of OpenStack are [0]:

  • Keystone
  • Neutron
  • Glance
    • Swift (depending on ironic drivers in use)
  • Nova, for most non-admin user interactions; in the integrated case Ironic is basically an admin-only service, and all normal users deploy to baremetal via Nova

On the other hand, Ironic can work without any of the above - the so-called standalone case [1]. It basically requires only reachable storage for images, a DHCP service for the baremetal nodes, and a TFTP/HTTP server for (i)PXE-booting the nodes - no other OpenStack components. In the OpenStack ecosystem, this is useful for TripleO-style seed cloud installations.

For standalone Ironic, please also check out the community "bifrost" project [2], which is a set of Ansible playbooks to install standalone Ironic, enroll nodes into it, and deploy them with an image.

[0] http://docs.openstack.org/project-ins...

[1] http://docs.openstack.org/project-ins...

[2] http://docs.openstack.org/developer/b...

2015-04-11 16:37:54 -0600 answered a question can devstack be built for automated instances as a testbed?

Wearing my Heat hat, I would say all of this looks doable with a nicely crafted Heat template, except for that particular scaling scenario (counting the number of distinct connecting IPs). With Heat you can create autoscaling groups that are placed behind a load balancer and scaled based on a Ceilometer alarm, which can be triggered on any available Ceilometer metric; people usually go with a CPU load metric for that :)
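
A rough sketch of the scaling part of such a template (image, flavor and thresholds are illustrative placeholders, and the load balancer and scale-down path are elided):

```yaml
heat_template_version: 2014-10-16

resources:
  asg:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 5
      resource:
        type: OS::Nova::Server
        properties:
          image: my-app-image   # placeholder
          flavor: m1.small      # placeholder

  scale_up:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: asg }
      scaling_adjustment: 1

  cpu_alarm_high:
    type: OS::Ceilometer::Alarm
    properties:
      meter_name: cpu_util
      statistic: avg
      period: 60
      evaluation_periods: 1
      threshold: 80
      comparison_operator: gt
      alarm_actions:
        - { get_attr: [scale_up, alarm_url] }
```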

As for having a custom image - you can do that (preinstall all the basic stuff you need on the image, upload it to Glance and use it in your Heat template), or you can make your template more generic by using Heat's software configuration capabilities (OS::Heat::CloudConfig and/or SoftwareConfig/SoftwareDeployment resources). The latter makes your template (and your application) much more portable between different OpenStack deployments, but you trade off the time needed for VMs to spin up (as everything needed must be downloaded and installed at boot).

So I advise you to take a closer look at Heat for your app orchestration/installation, and at Ceilometer, to get an idea of what metrics you could use to scale your app.

2015-04-11 16:20:47 -0600 received badge  Editor (source)
2015-04-11 16:19:34 -0600 answered a question Heat Template installation are not happening

1) Boot the simplest cirros image (it has a local login and password configured, so you can access it via the VNC console from Horizon), log into it and try pinging, say, Google both by IP and by name; if both work, you are OK (most package managers work with names, not IPs, so this check is important). If some part of the check fails, you have to figure out what the routing or DNS lookup problem in your DC is (or are you on DevStack? there is a specific trick for that). To access the VM via SSH you generally need a floating IP on it, and the security group assigned to the instance must allow incoming traffic on the SSH port.

2) About user_data - you do not specify the "user_data_format" property in the server template. The default in Heat is HEAT_CFNTOOLS, which accepts data only in a very specific, CFN CloudInit-compatible structured format (Heat can provide it via the "AWS::CloudFormation::Init" section in the metadata). If you want to pass raw shell scripts to the instance, you should use "user_data_format: RAW".

3) Generally you do not need a cfntools-enabled image, only an image with cloud-init (most stock cloud images, like Ubuntu, Fedora etc, are such), and, per my previous answer, use "user_data_format: RAW" in your templates. Cloud images are generally accessed via SSH, with the key that you created in/uploaded to Nova and specified to be put on the instance with nova boot or in the Heat template; the user name depends on the actual image used.
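
To illustrate point 2, a minimal server resource passing a raw shell script (image, flavor and key names are placeholders for whatever exists in your cloud):

```yaml
heat_template_version: 2013-05-23

resources:
  server:
    type: OS::Nova::Server
    properties:
      image: ubuntu-cloud-image   # placeholder, any cloud-init-enabled image
      flavor: m1.small            # placeholder
      key_name: my-keypair        # placeholder
      user_data_format: RAW       # hand user_data to cloud-init as-is
      user_data: |
        #!/bin/bash
        apt-get update && apt-get install -y nginx
```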

2015-04-11 15:45:04 -0600 answered a question heat stack-create - ERROR: The template section is invalid: NetID

BTW, you can use names instead of UUIDs for the "image" and "networks: - network" properties, which would save you a couple of nova output parses.

2015-03-17 00:41:50 -0600 answered a question Can haproxy be deployed on one vm of tenant?

Not sure why you would want haproxy on a VM for OpenStack LBaaS - there is a dedicated Neutron LBaaS service for that. If you cannot use Neutron and only have Nova-network, you can use Heat to deploy an AWS::ElasticLoadBalancing::LoadBalancer resource (which under the hood is a VM with haproxy) to balance traffic for some VMs of the tenant, but that would not be an OpenStack LBaaS - no API, only inside a single tenant, etc.

2015-02-25 10:39:21 -0600 answered a question Openstack orchestration of OpenShift without heat

I am also not familiar with OpenShift, but whatever is done with Heat can be achieved by manually issuing the various OpenStack API calls (via the CLI clients or direct HTTP requests) in the correct order, waiting for the actual resources to become fully created - Heat just does this heavy lifting for you. If you really cannot use Heat, you can examine the templates in https://github.com/openstack/heat-tem... to get an idea of what must be created, and create all that infrastructure yourself.
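
The "create, then poll until ready" loop that Heat performs for you can be sketched like this (the fake status function is a stand-in for a real API call such as fetching a server's status):

```python
import time

def wait_for(check, timeout=10.0, interval=0.05):
    """Poll `check` until it returns a truthy value or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = check()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError("resource did not become ready in time")

# Toy stand-in for a resource that becomes ACTIVE after a few status polls.
state = {"polls": 0}

def fake_server_status():
    state["polls"] += 1
    return "ACTIVE" if state["polls"] >= 3 else None

print(wait_for(fake_server_status))  # -> ACTIVE after three polls
```

Doing this by hand for every resource, in dependency order, is exactly the tedium a Heat template spares you.
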

Btw, what do you mean that Keystone is not working?

2015-01-26 14:25:43 -0600 received badge  Nice Answer (source)
2015-01-19 09:37:03 -0600 received badge  Necromancer (source)
2015-01-19 06:16:35 -0600 answered a question Openshift-Origin via Heat Icehouse | BrokerWaitCondition | AWS::CloudFormation::WaitCondition | CREATE_FAILED

I presume the template you were using is this one (they were moved around a little since): https://github.com/openstack/heat-tem...

That kind of error, with the Instance successfully created, most probably means that the init-script failed and did not run far enough to signal success back (this particular line in the template was not reached: https://github.com/openstack/heat-tem... )

Can you try to boot an instance with the same params manually, log in there and repeat the steps from the init-script by hand? As I see there is some juggling of SELinux in the script, I suspect something around it might be the culprit (e.g. updated selinux-policy packages with newer policies that block some additional operations).

2015-01-19 06:05:45 -0600 answered a question Different Deployment model in heat

I'm afraid that is not going to work with Heat currently - the template language is strictly declarative, so there is no way to have if..else logic in a Heat template. You can take a look at Mistral though - it is Workflow-as-a-Service for OpenStack, so its workflows may fit the bill; plus, the Heat community is in the process of adding Mistral-specific resources to Heat.

2015-01-19 04:05:25 -0600 answered a question GroupAntiAffinityFilter Heat

It seems that Nova Server Group resource is what you need: http://docs.openstack.org/developer/h...

It has a policy property, defaulting to anti-affinity.

This resource type is pretty new (available since Juno) and might be underdeveloped yet, so if you have any comments on missing features or find any bugs with it, please raise your concerns via Launchpad bugs for Heat.

2014-12-05 02:54:37 -0600 answered a question Devstack not starting because of "urllib3.connectionpool HttpConnectionPool is full, discarding connection"

Yesterday I stumbled on what looks like the same issue on DevStack on KVM, and here is what I found:

In my case it boiled down to the order in which services are started by DevStack and the state of your VM.

I have enabled Swift in my local.conf. In that case it seems Glance by default uses Swift as the backend for images, so Swift starts before Glance. But in my case the s-proxy service failed to start, because Ceilometer had been installed earlier and its files were discoverable by Paste's WSGI loader, while Ceilometer itself was not set up (with all its dependencies installed) yet:

/opt/stack/swift/bin/swift-proxy-server /etc/swift/proxy-server.conf -v
Traceback (most recent call last):
  File "/opt/stack/swift/bin/swift-proxy-server", line 23, in <module>
    sys.exit(run_wsgi(conf_file, 'proxy-server', **options))
  File "/opt/stack/swift/swift/common/wsgi.py", line 445, in run_wsgi
    loadapp(conf_path, global_conf=global_conf)
  File "/opt/stack/swift/swift/common/wsgi.py", line 354, in loadapp
    ctx = loadcontext(loadwsgi.APP, conf_file, global_conf=global_conf)
  File "/opt/stack/swift/swift/common/wsgi.py", line 338, in loadcontext
    global_conf=global_conf)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 296, in loadcontext
    global_conf=global_conf)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 320, in _loadconfig
    return loader.get_context(object_type, name, global_conf)
  File "/opt/stack/swift/swift/common/wsgi.py", line 61, in get_context
    object_type, name=name, global_conf=global_conf)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 450, in get_context
    global_additions=global_additions)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 562, in _pipeline_app_context
    for name in pipeline[:-1]]
  File "/opt/stack/swift/swift/common/wsgi.py", line 61, in get_context
    object_type, name=name, global_conf=global_conf)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 454, in get_context
    section)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 476, in _context_from_use
    object_type, name=use, global_conf=global_conf)
  File "/opt/stack/swift/swift/common/wsgi.py", line 61, in get_context
    object_type, name=name, global_conf=global_conf)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 406, in get_context
    global_conf=global_conf)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 296, in loadcontext
    global_conf=global_conf)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 328, in _loadegg
    return loader.get_context(object_type, name, global_conf)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 620, in get_context
    object_type, name=name)
  File "/usr/lib/python2.7/dist-packages/paste/deploy/loadwsgi.py", line 646, in find_egg_entry_point
    possible.append((entry.load(), protocol, entry.name))
  File "/usr/local/lib/python2.7/dist-packages/pkg_resources.py", line 2184, in load
    ['__name__'])
  File "/opt/stack/ceilometer/ceilometer/objectstore/swift_middleware.py", line 52, in <module>
    from ceilometer import service
  File "/opt/stack/ceilometer/ceilometer/service.py", line 29, in <module>
    from ceilometer import utils
  File "/opt/stack/ceilometer/ceilometer/utils.py", line 33, in <module>
    from oslo_concurrency import processutils
ImportError: No module named oslo_concurrency
s-proxy failed to start

Thus Glance ... (more)

2014-11-18 04:47:53 -0600 answered a question Cannot create DB table when installing heat

This is (just) a warning, and the DB should be operational nevertheless. The LP bug you mention fixed this for MySQL 5.6, where such warnings became errors. Please check that Heat is working - it should be, regardless of that warning.

2014-09-15 11:38:46 -0600 received badge  Nice Answer (source)
2014-09-12 15:28:53 -0600 answered a question Can I define an auto-scaling Heat template that does the scaling at specific times of day?

I think it is quite possible - see https://github.com/openstack/heat-templates/blob/master/cfn/aws_asg.template as a starting point.

What you need to do is set up a cron job inside your VM that POSTs to the webhook URLs of the appropriate ScalingPolicy. But beware that every new VM will post to the same hook, so the scaling might be amplified, either up or down - set your limits accordingly. So it may be better to set up the autoscaling group plus one separate "controller" instance that posts to the scaling webhooks on the cron schedule.
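
The crontab on that controller instance could look roughly like this (the webhook URL below is a placeholder; Heat hands you the real signed URL as an attribute/output of the ScalingPolicy):

```shell
# m h dom mon dow  command
# scale up at 08:00 and back down at 20:00 on weekdays
0 8  * * 1-5  curl -s -X POST "https://your-heat-endpoint/scale-up-webhook-url"
0 20 * * 1-5  curl -s -X POST "https://your-heat-endpoint/scale-down-webhook-url"
```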

2014-08-13 17:11:09 -0600 received badge  Teacher (source)