sgordon's profile - activity

2018-03-05 04:20:29 -0500 received badge  Good Answer (source)
2017-07-05 10:31:56 -0500 commented answer Not able to SSH to newly created instance with .pem key

Yes, unfortunately the user varies based on the guest image used.

2017-05-07 14:38:55 -0500 received badge  Nice Answer (source)
2017-05-07 14:10:47 -0500 answered a question Mirantis OpenStack vs. Cloud Platform

Disclaimer: I work in product management at Red Hat; I would suggest reaching out to Mirantis directly with questions on this topic.

Now that we have that out of the way: my understanding is that Mirantis OpenStack continues to be a component of Mirantis Cloud Platform and will not be available separately for new customers; instead you consume it as part of MCP. The name change is the result of the other things being added (primarily StackLight and Kubernetes), as well as significant changes to the way they are delivering, deploying, and managing it (moving away from Fuel towards a CI/CD approach driven by a tool called Drivetrain).

2017-03-06 08:39:26 -0500 commented answer After creating an VM No valid host was found.

As of ~ Liberty I think it gives you enough info in the scheduler logging even if you don't have DEBUG logging enabled thanks to

2017-02-17 14:25:59 -0500 answered a question Openshift can run on Openstack ?

Yes, both OpenShift Origin and OpenShift Container Platform can be deployed on OpenStack. There is a reference architecture as well as a set of Heat templates to automate the process.

Behind the scenes this uses the Kubernetes cloud provider implementation for OpenStack; the reference architecture and Heat templates focus mostly on automating the provisioning of the required resources and pointing the openshift-ansible installer at them.

2017-02-17 13:23:49 -0500 commented answer why shared cpu policy schedule all vcpu on same host cpu?

Yeah, I also went back and added a bit more detail relating back to this specific example.

2017-02-17 08:03:27 -0500 answered a question why shared cpu policy schedule all vcpu on same host cpu?

Assuming you aren't running real-time guests, use systemd's CPUAffinity setting instead of isolcpus. It works the opposite way to isolcpus, though: instead of specifying the cores where you don't want host processes to run, you specify the cores where you do want them to run. You still need to use vcpu_pin_set in the same way as before.

There's not a heap of documentation out there on using the CPUAffinity setting, but you can find it in the systemd configuration man page:

Relating to your example, if I want to force host processes to cores 0 and 1:

CPUAffinity=0 1
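For reference, CPUAffinity lives in the [Manager] section of /etc/systemd/system.conf (see man systemd-system.conf); a reboot is the simplest way to make it take effect for all services:

```ini
# /etc/systemd/system.conf -- [Manager] section, then reboot
[Manager]
CPUAffinity=0 1
```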

Then in my /etc/nova/nova.conf I tell it to put the guests on the other cores:


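The nova.conf side would then look something like the following sketch, assuming an 8-core host where cores 0 and 1 are reserved for the host (adjust the range to your own topology):

```ini
# /etc/nova/nova.conf -- assumes an 8-core host with cores 0-1
# reserved for host processes via CPUAffinity above
[DEFAULT]
vcpu_pin_set=2-7
```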
Take note that unlike the isolcpus case I have to use different ranges for the two settings as they in effect mean the inverse of each other.

Unfortunately CPUAffinity does not restrict kernel threads themselves, but it does avoid some of the real-time focused side-effects of isolcpus and is the best way of handling this for now as far as I am aware.

2017-02-16 09:04:01 -0500 commented answer How can I connect to the spice console with client e.g. spicy?

The point of the proxy is that ports 5901-5999 should not be opened to the world in iptables.

2017-02-09 09:49:06 -0500 answered a question Upgrading directly from Liberty to Newton ??

Upstream development and testing focus on support for N->N+1 upgrades, e.g. Liberty->Mitaka or Mitaka->Newton, particularly when it comes to "rolling" upgrades. To orchestrate a Liberty->Newton upgrade you must first apply all schema migrations, data migrations, and configuration changes from the Liberty->Mitaka upgrade; they cannot be skipped.
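As a rough sketch of the stepwise path (package names and exact commands vary by distro and service; nova is shown as an example, and online_data_migrations only exists in releases that support it):

```shell
# Step 1: Liberty -> Mitaka (repeat the pattern for each service)
yum update 'openstack-nova-*'           # move to Mitaka packages
nova-manage db sync                     # apply Mitaka schema migrations
nova-manage db online_data_migrations   # apply Mitaka data migrations

# Step 2: only then Mitaka -> Newton, repeating the same sequence
```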

The exception is if you are running the services themselves in virtual machines, as is the case in VIO, but this is a quite different pattern and introduces different challenges.

2017-02-09 07:29:50 -0500 received badge  Nice Answer (source)
2017-02-08 17:38:05 -0500 answered a question OpenStack offline installation

You will need to create an offline copy of the repositories from an Internet-connected machine. Install the centos-release-openstack-newton package on that machine to handle repository configuration as indicated in the QuickStart, or configure it to point at the repositories manually.

Install reposync, then use it to download all packages to a local directory; for more information on reposync refer to its man page. Once you have done this you can copy the files to your offline machine via sneakernet (e.g. a USB drive) and configure it to use the repositories by creating an /etc/yum.repos.d/local.repo file with an entry for each repository. The format is along the lines of:
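A sketch of the download side, assuming the Newton repo ID provided by centos-release-openstack-newton (check yum repolist for the exact ID on your system; the target directory is a placeholder):

```shell
# On the Internet-connected machine:
yum install yum-utils createrepo
reposync --repoid=centos-openstack-newton --download_path=/srv/repos/
createrepo /srv/repos/centos-openstack-newton/
```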


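For example (a sketch; the repo name and baseurl path are assumptions and should match wherever you copied the files):

```ini
# /etc/yum.repos.d/local.repo
[local-openstack-newton]
name=Local OpenStack Newton mirror
baseurl=file:///srv/repos/centos-openstack-newton/
enabled=1
gpgcheck=0
```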
This is the easiest way to do it. If you can't get a CentOS or RHEL box that is Internet-connected, you can reproduce the same effect using recursive wget and the like, but it gets pretty messy.

2017-02-06 13:14:55 -0500 answered a question where to find devstack code

The primary way of getting and using DevStack is from source, not from binaries or packages. The source for DevStack is available here:

Information on getting started with DevStack is available here:

2017-02-05 17:57:41 -0500 answered a question Change login web page

The developer reference for Horizon includes a section on customization:

2017-02-04 15:43:10 -0500 commented question I am trying to gather some cpu metrics from nova instances on devstack newton.

Is it possible that there is a firewall running in the guest itself? Is other connectivity (e.g. SSH) working end to end?

2017-02-04 10:52:15 -0500 edited answer How to increase volume to running VM (without powering off)?

According to the user guide, you need to detach a volume before resizing it. You don't have to shut the instance down, though.

If you wish to resize other aspects of the guest (e.g. change the flavor to increase the CPU/RAM available) then you need to perform a nova resize on the guest. This operation requires the guest to be shut down, though there is a proposal to provide a "live" version of this in a future release.

2017-02-04 10:44:20 -0500 answered a question how to change the location of qemu-kvm emulator

The CentOS Virtualization Special Interest Group (SIG) provides qemu-kvm-ev-2.6.0 builds, including dependencies, in this repository:

Alternatively subscribers to Red Hat OpenStack Platform and Red Hat Enterprise Virtualization will find qemu-kvm-rhev-2.6.0 in the base repositories for those offerings.

I would recommend installing whichever of the above you are able to access; everything should "just work". It will replace the base qemu-kvm package and be called in its place; no symlinks or similar are required.
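On CentOS the Virtualization SIG repository is typically enabled via its release package; a sketch (package names as provided by the SIG at the time):

```shell
yum install centos-release-qemu-ev   # enables the Virt SIG repository
yum install qemu-kvm-ev              # replaces/obsoletes base qemu-kvm
```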

2017-01-03 09:58:09 -0500 edited answer What are the minimum hardware requirements of OpenStack?

There are many ways to split out an OpenStack deployment, but two-box deployments typically consist of:

  1. A controller node.
  2. A compute node.

For learning purposes it's actually possible to deploy an entire OpenStack installation on a single system if necessary; utilities like Red Hat's PackStack make this extremely easy.

This diagram from the OpenStack documentation illustrates a simple deployment, but the networking components running on the controller node could easily be moved to a dedicated networking node, and so on:

In summary the controller node will typically host:

  • Keystone (for authentication).
  • Glance (for image storage).
  • Cinder (for block storage).
  • Neutron (for networking; the nova-network service may be used instead).
  • Nova API, conductor, and scheduling services.
  • Supporting services like the message broker (RabbitMQ or Qpid) and database (MySQL or PostgreSQL).

The compute node will typically host:

  • Nova compute.
  • A networking agent, if using Neutron.

To provide additional block storage in a small environment you may also choose to deploy cinder-volume on the compute node, or on completely separate block storage nodes. As I said in the introduction, there are many ways to split this up. You may also choose to run nova-compute on the controller itself, allowing you to run virtual machines on both hosts in a small environment (otherwise virtual machines will only run on the compute node).

To expand such an environment you would add additional compute nodes, a separate networking node, and eventually a second and third controller for high availability. You might also split out storage to dedicated nodes.

The OpenStack Basic Installation guides provide some guidance on getting a basic 2-3 node deployment installed and running:

For RHEL/Fedora installation you could also use PackStack and the --install-hosts argument to streamline this type of deployment.
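For example, --install-hosts takes a comma-separated list where the first host becomes the controller and the rest become compute nodes (the IPs below are placeholders from the documentation range):

```shell
packstack --install-hosts=192.0.2.10,192.0.2.11,192.0.2.12

# or, for a single-node proof of concept:
packstack --allinone
```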

2017-01-03 09:56:12 -0500 answered a question Does Openstack community consider to take ownership of the Flocker project? (since clusterHQ suppressing shutdown)

The process for adding a project to OpenStack is discussed in the Project Creator's Guide. A person or group of people who wanted to continue to work on Flocker would need to go through this process for it to become part of OpenStack, as well as clarifying whether this was a continuation of the current project or a fork thereof.

2016-12-30 10:25:06 -0500 commented question cirros instance hang at boot time, console show"Starting up"

Unfortunately those versions look correct, so there goes my theory... :)

2016-12-29 10:41:53 -0500 commented question cirros instance hang at boot time, console show"Starting up"

What versions of Libvirt and QEMU are installed? What is the output of rpm -qi qemu-kvm and rpm -qi libvirt?

2016-12-28 12:17:58 -0500 commented answer Question about overcommit of cpu and ram

These are both considered out of scope for Nova at this time, though there are other OpenStack projects which attempt to provide some such facilities by exercising the Nova APIs.

2016-12-19 10:26:25 -0500 commented answer Instance Error, caused by libvirtError: internal error no supported architecture for os type 'hvm ??

@kashyapk as @RainFlying mentions this usually occurs because the virtualization extensions are disabled in the BIOS, this is often the default particularly on workstation/desktop hardware. The other thing to check is whether the kvm kernel module is available and loaded.

2016-12-19 10:20:35 -0500 answered a question Question about overcommit of cpu and ram

At first the system will attempt to use swap, if available, before eventually killing processes due to Out of Memory (OOM). The compute node itself will not crash if it reaches a full memory load, but individual processes, including the qemu-kvm processes that represent the guests, can and will.

Techniques like memory ballooning can help alleviate such issues, but they are not currently utilized by OpenStack Compute (Nova). Kernel Samepage Merging (KSM) is another technique that can help, albeit at the expense of some CPU usage, and it can be used with Nova. Refer to the KSM documentation for more information.

Neither of these techniques can completely eliminate the OOM killer if you overcommit too heavily; it is important to set the overcommit ratios only after careful consideration of the real characteristics of your workload and environment.
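The ratios in question are set in nova.conf; the values below are the upstream defaults of this era, shown only as a starting point:

```ini
# /etc/nova/nova.conf -- upstream defaults, tune for your workload
[DEFAULT]
cpu_allocation_ratio=16.0
ram_allocation_ratio=1.5
```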

You can find some more information about hypervisor tuning regarding memory at

2016-12-17 11:21:55 -0500 edited answer Does anyone have ansible scripts for installing Openstack Mitaka

You could also consider that openstack-ansible has an is_metal configuration parameter, which allows a non-container (bare metal) deployment.

2016-12-17 11:17:50 -0500 commented question why not github and stackoverflow

My recollection is that when OpenStack was on trial in Stack Overflow's Area 51 it did not capture enough interest and was rejected; that led fairly directly to the use of Askbot instead.

2016-12-17 10:52:17 -0500 commented question Getting error "No valid host was found. There are not enough hosts available" while provisioning in Newton installed through RDO

We will need some more information from the nova-scheduler log file; there should be a line associated with the NoValidHost error which indicates exactly which filter failed out the host(s). Also, are you expecting hardware virtualization to be present? Check /proc/cpuinfo for svm or vmx.
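A quick way to check for the flags mentioned (standard Linux, nothing OpenStack-specific):

```shell
# Count hardware virtualization flags; 0 means VT-x/AMD-V is
# absent or disabled in the BIOS/firmware.
grep -E -c 'vmx|svm' /proc/cpuinfo || true
```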

2016-12-06 09:56:16 -0500 commented question Hypervisor Hostname can be same for many VM's

Primarily because that's not a cloud workflow, it's a virtualization workflow, but as noted in the other response there are workarounds, including affinity/anti-affinity and availability zones.

2016-11-11 19:40:41 -0500 commented question packstack-openstack installation on Centos 7 Error - Newton

What does systemctl status openstack-nova-cert.service show? The journal output doesn't appear to relate to the actual error.

2016-11-09 17:20:01 -0500 commented answer How to change the polling period of ceilometer?

Here's an example from the installation guide; you may need to modify it on 14.04, as I cannot recall whether it was still using upstart or not:

2016-11-09 17:12:31 -0500 edited answer RDO kilo installation fails in mullti-node setup

I would suggest using a recent version; the versions currently supported by the community are Mitaka and Newton. Kilo is deprecated and no longer maintained. Documentation on this is lacking, but Puppet in RDO has Contrail integration in the Newton branch:

2016-11-09 17:06:12 -0500 commented question cinder-scheduler and cinder-backup are in down state

You will need to provide the Cinder logs; the information given is not enough to establish a diagnosis beyond "try enabling them".

2016-11-09 17:03:50 -0500 answered a question Swift Compatibility Certification

The OpenStack Interoperability program governs the use of the OpenStack logos and marks. It includes the OpenStack Powered Object Storage program. The actual tests are executed using Tempest and RefStack.

You can find out which Tempest tests are actually required for a given compatibility program by going to the RefStack guidelines and selecting e.g. "OpenStack Powered Object Storage".

2016-08-15 11:40:42 -0500 commented question How to setup partitions for ironic baremetal images in tripleo?

Have you already attempted using disk-image-create per the Ironic docs (see the Image requirements section)?

2016-08-03 07:25:37 -0500 answered a question Opensuse openstack-nova-compute package

I wouldn't expect this to work if you are using the OpenStack packages provided by two different distros, potentially even if you manage to align the package version numbers. The reason is that different distributions carry different levels of backports, which more often than not result in differences in the RPC communication between nodes, leaving them unable to communicate.

Solutions that combine compute nodes running multiple operating systems do this by packaging their own OpenStack pieces and injecting them into the host, rather than using the operating system's default OpenStack distribution.

2016-06-10 05:03:10 -0500 received badge  Guru (source)
2016-06-10 05:03:10 -0500 received badge  Great Answer (source)
2016-04-25 12:25:53 -0500 edited question neutron-linuxbridge-agent Stuck in Loop - RTNETLINK answers: Operation not supported

I've set up OpenStack following the Ubuntu installation guide. It looks like I have a Neutron issue on my compute node: it's stuck at 100% CPU, looping continuously over the following. Judging by "RTNETLINK answers: Operation not supported", it's trying to do something to the network interfaces but can't, something to do with VXLAN. I've tried rebooting the node and tracing through the code, but I'm not having much luck.

 2016-04-25 00:04:26.613 17806 INFO neutron.common.config [-] Logging enabled!
2016-04-25 00:04:26.614 17806 INFO neutron.common.config [-] /usr/bin/neutron-linuxbridge-agent version 8.0.0
2016-04-25 00:04:26.614 17806 INFO neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent [-] Interface mappings: {'provider': 'eth0'}
2016-04-25 00:04:26.615 17806 INFO neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent [-] Bridge mappings: {}
2016-04-25 00:04:26.810 17806 ERROR neutron.agent.linux.utils [-] Exit code: 2; Stdin: ; Stdout: ; Stderr: RTNETLINK answers: Operation not supported

2016-04-25 00:04:26.828 17806 CRITICAL neutron [-] RuntimeError: Exit code: 2; Stdin: ; Stdout: ; Stderr: RTNETLINK answers: Operation not supported

2016-04-25 00:04:26.828 17806 ERROR neutron Traceback (most recent call last):
2016-04-25 00:04:26.828 17806 ERROR neutron   File "/usr/bin/neutron-linuxbridge-agent", line 10, in <module>
2016-04-25 00:04:26.828 17806 ERROR neutron     sys.exit(main())
2016-04-25 00:04:26.828 17806 ERROR neutron   File "/usr/lib/python2.7/dist-packages/neutron/cmd/eventlet/plugins/", line 21, in main
2016-04-25 00:04:26.828 17806 ERROR neutron     agent_main.main()
2016-04-25 00:04:26.828 17806 ERROR neutron   File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/linuxbridge/agent/", line 888, in main
2016-04-25 00:04:26.828 17806 ERROR neutron     manager = LinuxBridgeManager(bridge_mappings, interface_mappings)
2016-04-25 00:04:26.828 17806 ERROR neutron   File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/linuxbridge/agent/", line 76, in __init__
2016-04-25 00:04:26.828 17806 ERROR neutron     self.check_vxlan_support()
2016-04-25 00:04:26.828 17806 ERROR neutron   File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/linuxbridge/agent/", line 618, in check_vxlan_support
2016-04-25 00:04:26.828 17806 ERROR neutron     if self.vxlan_ucast_supported():
2016-04-25 00:04:26.828 17806 ERROR neutron   File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/linuxbridge/agent/", line 581, in vxlan_ucast_supported
2016-04-25 00:04:26.828 17806 ERROR neutron     test_iface = self.ensure_vxlan(seg_id)
2016-04-25 00:04:26.828 17806 ERROR neutron   File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/drivers/linuxbridge/agent/", line 286, in ensure_vxlan
2016-04-25 00:04:26.828 17806 ERROR neutron     return None
2016-04-25 00:04:26.828 17806 ERROR neutron   File "/usr/lib/python2.7/dist-packages/oslo_utils/", line 220, in __exit__
2016-04-25 00:04:26.828 17806 ERROR neutron     self.force_reraise()
2016-04-25 00:04:26.828 17806 ERROR neutron   File "/usr/lib/python2.7/dist-packages/oslo_utils/", line 196, in force_reraise
2016-04-25 00:04:26.828 17806 ...