Ask Your Question

larsks's profile - activity

2017-07-04 06:00:36 -0500 received badge  Nice Answer (source)
2017-05-14 05:55:39 -0500 received badge  Good Answer (source)
2017-04-06 16:21:15 -0500 received badge  Good Answer (source)
2017-03-06 19:11:42 -0500 received badge  Good Answer (source)
2017-01-05 16:48:25 -0500 received badge  Great Answer (source)
2017-01-05 16:48:25 -0500 received badge  Guru (source)
2016-12-14 13:14:52 -0500 received badge  Great Answer (source)
2016-12-14 13:14:52 -0500 received badge  Guru (source)
2016-10-23 10:23:22 -0500 received badge  Necromancer (source)
2016-05-03 12:08:27 -0500 commented question hypervisor_type = QEMU, but virt_type=kvm

I fixed that for you.

2016-05-03 12:08:01 -0500 received badge  Civic Duty (source)
2016-05-03 12:07:48 -0500 edited question hypervisor_type = QEMU, but virt_type=kvm

Hi everybody,

I have the following question: I have configured the compute nodes to use kvm (in /etc/nova/nova-compute.conf):

virt_type=kvm

But when I execute the following command, the output says it is using QEMU:

# nova hypervisor-show 5 | grep hypervisor_type
| hypervisor_type           | QEMU

OpenStack version is Kilo.

Thanks in advance.

* Update in response to larsks *

I haven't changed anything since the installation and the start of the services.

# virsh capabilities | grep kvm
<baselabel type='kvm'>+0:+0</baselabel>
<domain type='kvm'>
<domain type='kvm'>

But I found something:

#  virt-host-validate
QEMU: Checking for hardware virtualization                                 : PASS
QEMU: Checking for device /dev/kvm                                         : PASS
QEMU: Checking for device /dev/vhost-net                                   : PASS
QEMU: Checking for device /dev/net/tun                                     : PASS
LXC: Checking for Linux >= 2.6.26                                         : PASS

And when I get the "virsh dumpxml" of an instance:

<domain type='kvm' id='44'>
....
<emulator>/usr/bin/qemu-system-x86_64</emulator>

Is that the problem? If yes, how can I force it to use qemu-kvm?

2016-05-03 12:06:55 -0500 commented answer hypervisor_type = QEMU, but virt_type=kvm

If you need to add additional information, update your question; don't post part of your question as an answer. It's confusing and it will result in your question being ignored (because people will see that it has an answer).

2016-05-03 11:13:46 -0500 commented question hypervisor_type = QEMU, but virt_type=kvm

Did you restart nova-compute after making this change? Does virsh capabilities show a kvm domain for the x86_64 arch?
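For example (sketch; the service unit name is an assumption — it is typically openstack-nova-compute on RDO/CentOS and nova-compute on Ubuntu):

```shell
# Restart nova-compute so a changed virt_type takes effect
# (unit name is an assumption; adjust for your distro)
systemctl restart openstack-nova-compute

# Confirm that libvirt advertises a kvm guest domain for x86_64
virsh capabilities | grep "domain type='kvm'"
```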

2016-05-03 08:26:04 -0500 edited question Unable to start Corosync Cluster Engine

I'm trying to create an HA OpenStack cluster for the controller nodes by following http://docs.openstack.org/ha-guide/ (the OpenStack HA guide).

So I have three nodes in cluster:

  • controller-0
  • controller-1
  • controller-2

Set up a password for the hacluster user on each host:

[root@controller-0 ~]# yum install pacemaker pcs corosync libqb fence-agents-all resource-agents -y

Authenticated on all the nodes that should make up the cluster, using the hacluster password:

[root@controller-0 ~]# pcs cluster auth controller-0 controller-1 controller-2 -u hacluster -p password --force  
controller-2: Authorized
controller-1: Authorized
controller-0: Authorized

After that, created the cluster:

[root@controller-1 ~]# pcs cluster setup --force --name ha-controller controller-0 controller-1 controller-2
Redirecting to /bin/systemctl stop  pacemaker.service
Redirecting to /bin/systemctl stop  corosync.service
Killing any remaining services...
Removing all cluster configuration files...
controller-0: Succeeded
controller-1: Succeeded
controller-2: Succeeded
Synchronizing pcsd certificates on nodes controller-0, controller-1, controller-2...
controller-2: Success
controller-1: Success
controller-0: Success
Restarting pcsd on the nodes in order to reload the certificates...
controller-2: Success
controller-1: Success
controller-0: Success

Started cluster:

[root@controller-0 ~]# pcs cluster start --all
controller-0:
controller-2:
controller-1:

But when I start corosync, I get:

[root@controller-0 ~]# systemctl start corosync
Job for corosync.service failed because the control process exited with error code. 
See "systemctl status corosync.service" and "journalctl -xe" for details.

In message log:

controller-0 systemd: Starting Corosync Cluster Engine...
controller-0 corosync[23538]: [MAIN  ] Corosync Cluster Engine ('2.3.4'): started and ready to provide service.
controller-0 corosync[23538]: [MAIN  ] Corosync built-in features: dbus systemd xmlconf snmp pie relro bindnow
controller-0 corosync[23539]: [TOTEM ] Initializing transport (UDP/IP Unicast).
controller-0 corosync[23539]: [TOTEM ] Initializing transmit/receive security (NSS) crypto: none hash: none
controller-0 corosync: Starting Corosync Cluster Engine (corosync): [FAILED]
controller-0 systemd: corosync.service: control process exited, code=exited status=1
controller-0 systemd: Failed to start Corosync Cluster Engine.
controller-0 systemd: Unit corosync.service entered failed state.
controller-0 systemd: corosync.service failed.

My corosync config file:

[root@controller-0 ~]# cat /etc/corosync/corosync.conf    
totem {   
    version: 2    
    secauth: off    
    cluster_name: ha-controller    
    transport: udpu    
}    
nodelist {    
    node {    
        ring0_addr: controller-0    
        nodeid: 1     
    }
    node {
        ring0_addr: controller-1
        nodeid: 2
    }
    node {
        ring0_addr: controller-2
        nodeid: 3
    }
}
quorum {
    provider: corosync_votequorum
    expected_votes: 3
    wait_for_all: 1
    last_man_standing: 1
    last_man_standing_window: 10000
}
logging {
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: yes
}

Also, all names are resolvable.

OS is CentOS Linux release 7.2.1511 (Core)

[root@controller-0 ~]# uname -a
Linux controller-0 3.10.0-327.13.1.el7.x86_64 #1 SMP Thu Mar 31 16:04:38 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

Installed versions:

pacemaker.x86_64                1.1.13-10.el7_2.2   @updates
pacemaker-cli.x86_64            1.1.13-10.el7_2.2   @updates
pacemaker-cluster-libs.x86_64   1.1.13-10.el7_2.2   @updates
pacemaker-libs.x86_64           1.1.13-10.el7_2.2   @updates
corosync.x86_64                 2.3.4-7.el7_2.1     @updates
corosynclib.x86_64              2.3.4-7.el7_2.1     @updates
libqb.x86_64                    0.17.1-2.el7.1      @updates
fence-agents-all.x86_64         4.0.11-27.el7_2.7   @updates
resource-agents.x86_64          3.9.5-54.el7_2.9    @updates

UPDATE:
Tried on a clean install, but no luck. Everything went well until starting the cluster:

[root@controller-0 ~]# pcs cluster start --all
controller-0: Starting Cluster ...
2016-05-02 14:37:48 -0500 commented question the error of service entity and API endpoints

That error (ConnectionRefused: Unable to establish connection to http://controller:35357/v3/services) means that Keystone is not running. You probably want to take a look at the Keystone server log to see if you can figure out why Keystone is not listening.
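For example (sketch; the log path is the usual CentOS location and is an assumption for other distros):

```shell
# Is anything listening on the Keystone admin port?
ss -tln | grep -q ':35357' && echo listening || echo 'not listening'

# If not, check the Keystone log for startup errors
# (path is an assumption; adjust for your distro)
tail -n 50 /var/log/keystone/keystone.log
```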

2016-05-02 14:36:47 -0500 edited question the error of service entity and API endpoints

Hello, I installed the OpenStack Liberty version on CentOS 7. I think I strictly followed the official doc at http://docs.openstack.org/liberty/ins....

When I perform the command "openstack service create --name keystone --description 'OpenStack Identity' identity", I encounter the error: No handlers could be found for logger "oslo_config.cfg".

Here is the output when I add --debug to the command:

START with options: ['service', 'create', '--name', 'keystone', '--description', 'OpenStack Identity', 'identity', '--debug']
options: Namespace(access_token_endpoint='', auth_type='', auth_url='', cacert='', client_id='', client_secret='', cloud='', debug=True, default_domain='default', deferred_help=False, domain_id='', domain_name='', endpoint='', identity_provider='', insecure=None, interface='', log_file=None, os_compute_api_version='', os_identity_api_version='3', os_image_api_version='', os_network_api_version='', os_object_api_version='', os_project_id=None, os_project_name=None, os_volume_api_version='', password='', project_domain_id='', project_domain_name='', project_id='', project_name='', protocol='', region_name='', scope='', timing=False, token='ADMIN_TOKEN', trust_id='', url='http://controller:35357/v3', user_domain_id='', user_domain_name='', user_id='', username='', verbose_level=3, verify=None)
defaults: {'auth_type': 'token_endpoint', 'compute_api_version': '2', 'key': None, 'database_api_version': '1.0', 'api_timeout': None, 'baremetal_api_version': '1', 'cacert': None, 'image_api_use_tasks': False, 'floating_ip_source': 'neutron', 'orchestration_api_version': '1', 'interface': None, 'network_api_version': '2', 'image_format': 'qcow2', 'object_api_version': '1', 'image_api_version': '2', 'verify': True, 'identity_api_version': '2', 'volume_api_version': '1', 'cert': None, 'secgroup_source': 'neutron', 'dns_api_version': '2', 'disable_vendor_agent': {}}
cloud cfg: {'auth_type': 'token_endpoint', 'compute_api_version': '2', 'orchestration_api_version': '1', 'database_api_version': '1.0', 'interface': None, 'network_api_version': '2', 'image_format': 'qcow2', 'object_api_version': '1', 'image_api_version': '2', 'verify': True, 'timing': False, 'dns_api_version': '2', 'verbose_level': 3, 'region_name': '', 'api_timeout': None, 'baremetal_api_version': '1', 'auth': {'url': 'http://controller:35357/v3', 'token': 'ADMIN_TOKEN'}, 'default_domain': 'default', 'image_api_use_tasks': False, 'floating_ip_source': 'neutron', 'key': None, 'cacert': None, 'deferred_help': False, 'identity_api_version': '3', 'volume_api_version': '1', 'cert': None, 'secgroup_source': 'neutron', 'debug': True, 'disable_vendor_agent': {}}
compute API version 2, cmd group openstack.compute.v2
network API version 2, cmd group openstack.network.v2
image API version 2, cmd group openstack.image.v2
volume API version 1, cmd group openstack.volume.v1
identity API version 3, cmd group openstack.identity.v3
object_store API version 1, cmd group openstack.object_store.v1
command: service create -> openstackclient.identity.v3.service.CreateService
Auth plugin token_endpoint selected
auth_type: token_endpoint
Using auth plugin: token_endpoint
Using parameters {'url': 'http://controller:35357/v3', 'token': 'ADMIN_TOKEN'}
Get auth_ref
take_action(Namespace(columns=[], description='OpenStack Identity', disable=False, enable=False, formatter='table', max_width=0, name='keystone', noindent=False, prefix='', type='identity', variables=[]))
Instantiating identity client: <class 'keystoneclient.v3.client.Client'>
REQ: curl -g -i -X POST http://controller:35357/v3/services -H "User-Agent: python-keystoneclient" -H "Content-Type: application/json" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}9db4cd65c01e2b8de9fda967dcaaef20a2b15c84" -d '{"service": {"enabled": true, "type": "identity", "description": "OpenStack Identity", "name": "keystone"}}'
Starting new HTTP connection (1): controller
Unable to establish connection to http://controller:35357/v3/services
Traceback (most recent call last):
  [...edited by moderator for brevity...]
  File "/usr/lib/python2.7/site-packages/keystoneclient/session.py", line 430, in _send_request
    raise exceptions.ConnectionRefused(msg)
ConnectionRefused: Unable to establish connection to http://controller:35357/v3/services

END return value: 1

I hope someone who has encountered a similar error can tell me the cause and help fix it.

2016-05-02 14:34:36 -0500 commented question Is there a way provide user data files as parameter

Can you describe exactly what you're trying to do? Maybe there is another way of tackling the issue.

2016-05-02 14:28:48 -0500 edited question Is there a way provide user data files as parameter

Hi,

I was trying to pass XML user-data files for post-boot configuration in a Heat template.

  config_drive: "true"
  user_data:
    str_replace:
      template: { get_file: pboot-config.xml }

I don't want to hard-code the file name (i.e. template: { get_file: get_param }) so this resource can be used multiple times, like a function. I found in the OpenStack docs that get_file is not recommended for use with intrinsic functions like get_param. In this situation, is there any other parameterized way to fetch a file for the post-boot process?

2016-05-02 14:26:02 -0500 commented question glance image list issue

On what platform (centos/ubuntu/fedora/etc) are you running? How did you install OpenStack (from packages? From source?)?

2016-05-02 14:23:26 -0500 answered a question Unable to start Corosync Cluster Engine

I'm running on CentOS Linux release 7.2.1511 (Core) and I seem to have the same package versions that you have. I'm not able to reproduce your problem. You can see a complete recording of my session here:

The part where your sessions seem to go awry is at this step:

[root@controller-0 ~]# pcs cluster start --all
controller-0:
controller-2:
controller-1:

Which ought to look like this:

[root@controller-0 ~]# pcs cluster start --all
controller-1: Starting Cluster...
controller-0: Starting Cluster...
controller-2: Starting Cluster...

You probably want to look for additional diagnostics in:

  • The journal for pacemaker and corosync:

    journalctl -u pacemaker
    

    Or:

    journalctl -u corosync
    
  • The contents of /var/log/cluster/corosync.log

If you spot anything there, maybe update your question with the new information.

Also, note that yprokule is totally correct: pcs cluster start --all starts corosync for you; you shouldn't need to start it manually. You can infer this from the session recording I posted, but it's good to be explicit.

Update

Note that the current HA guide is missing an important step. It's not directly relevant to your issue, but you'll run into it once you get things working.

When you run pcs cluster start --all, pcsd starts corosync and pacemaker on all the cluster nodes. It does not enable them persistently, which means that cluster services will not come up when you reboot a node.

You'll want to also run:

pcs cluster enable --all
2016-05-02 10:36:35 -0500 edited question can't restart apache2 with keystone

Hi, when I restart keystone, port 5000 ends up held by a python process and I can't restart apache2. I get this error message; any help please?

root@controller:/usr/share/apache2/error# service apache2 restart
 * Restarting web server apache2                                                                                                         [ OK ]
root@controller:/usr/share/apache2/error# service keystone restart
stop: Unknown instance:
keystone start/running, process 21705
root@controller:/usr/share/apache2/error# netstat -lp | grep :5000
tcp6       0      0 [::]:5000               [::]:*                  LISTEN      21532/apache2
root@controller:/usr/share/apache2/error# service apache2 restart
 * Restarting web server apache2                                                                                                                (98)Address already in use: AH00072: make_sock: could not bind to address [::]:5000
(98)Address already in use: AH00072: make_sock: could not bind to address 0.0.0.0:5000
no listening sockets available, shutting down
AH00015: Unable to open logs
Action 'start' failed.
2016-04-08 16:03:34 -0500 answered a question How can i see the installed devstack packages

Devstack installs some packages using apt-get. You can see this if you watch the output from stack.sh. Search the output log for apt_get, and you'll eventually find a line that starts with:

sudo DEBIAN_FRONTEND=noninteractive http_proxy= https_proxy= no_proxy= apt-get --option Dpkg::Options::=--force-confold --assume-yes install bc bridge-utils bsdmainutils curl g++ gcc gettext git graphviz iputils-ping...

These packages will be visible if you run dpkg -l after the install completes.

Devstack installs other packages using pip, the Python package manager. You can run pip list to see a list of Python modules that have been installed:

$ pip list
Babel (2.3.1)
beautifulsoup4 (4.3.2)
cffi (1.1.2)
chardet (2.3.0)
colorama (0.3.3)
cryptography (1.0.1)
debtcollector (1.3.0)
enum34 (1.0.4)
euca2ools (3.1.0)
extras (0.0.3)
fixtures (1.4.0)
funcsigs (1.0.0)
git-review (1.25.0)
html5lib (0.999)
httplib2 (0.9.2)
idna (2.0)
ipaddress (1.0.14)
iso8601 (0.1.11)
keystoneauth1 (2.5.0)
keystonemiddleware (4.4.0)
...

Generally, all of the OpenStack services will be installed using pip.

2016-03-23 09:54:18 -0500 received badge  Great Answer (source)
2016-03-23 09:54:18 -0500 received badge  Guru (source)
2016-01-27 00:52:14 -0500 received badge  Nice Answer (source)
2015-11-27 04:05:38 -0500 received badge  Good Answer (source)
2015-11-25 09:51:53 -0500 answered a question Running a python script in Heat to configure VNF

For the user-data script to have any effect on your instance, the image you are booting needs to have tooling installed to fetch and execute that script. For most major distributions (CentOS, Red Hat, Fedora, Ubuntu, etc.), this is cloud-init, which is installed on the cloud images for these distributions.

Cirros uses an alternative mechanism, which means that you probably don't want to use it for testing because it will not support the same sort of user-data scripts that larger distributions will support (for example, python isn't available on the cirros image, so you wouldn't be able to test out a python script in any case).

Your simple test ought to work. There are a number of diagnostic steps you can perform to figure out what went awry...

First, check if the metadata service is available. After logging into the cirros instance, try running:

# curl http://169.254.169.254/

This should return:

1.0
2007-01-19
2007-03-01
2007-08-29
2007-10-10
2007-12-15
2008-02-01
2008-09-01
2009-04-04

If you get an error response, this suggests that there are probably configuration issues in your openstack environment.

If the above worked, try fetching the user-data script by running:

# curl http://169.254.169.254/latest/user-data

This should return your user-data script, e.g.:

#!/bin/bash
echo "Hello World" > /tmp/test.txt

If that worked, see if the cirros tooling fetched it correctly. Run:

# cirros-query get user-data

This should return the same thing.

I suspect that one of the above steps will produce some useful diagnostic information. If you do see any errors, feel free to update your question with the additional information and we'll see if we can get things working.

2015-11-25 09:33:59 -0500 edited question Running a python script in Heat to configure VNF

Hi,

I was able to create networks with VM/VNFs using Heat. I came across a link explaining that scripts can be used to configure the VM on boot. I used Horizon to try boot-time configuration: I wrote a simple script to create a file (in the post-creation tab, while launching an instance) in cirros on boot, but it has no effect.

#!/bin/bash
echo "Hello World" > /tmp/test.txt

Could someone please let me know the solution? Also, could someone please share a sample python script for boot config?

2015-11-25 09:31:09 -0500 commented question Liberty Failed to bind port on host compute

Are there any errors in the neutron logs (particularly the openvswitch agent log) on the compute node?

2015-11-08 13:33:39 -0500 received badge  Good Answer (source)
2015-10-26 01:48:55 -0500 received badge  Good Answer (source)
2015-09-04 04:43:14 -0500 marked best answer Error upgrading database schema: ERROR: Tables "..." have non utf8 collation, please make sure all tables are CHARSET=utf8

When I try to performa db_sync operation to upgrade my database schema from Havana to Icehouse, I encounter an error similar to the following:

# heat-manage db_sync
ERROR: Tables "event,migrate_version,raw_template,resource,resource_data,stack,user_creds,watch_data,watch_rule" have non utf8 collation, please make sure all tables are CHARSET=utf8

This error is for Heat, but I see a similar error with Glance and Keystone.
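A common fix (sketch only; back up your database first, and the database name heat is an assumption) is to convert each table named in the error to utf8. The ALTER statements can be generated from the error message like this:

```shell
# Table names taken straight from the error message above
tables="event migrate_version raw_template resource resource_data stack user_creds watch_data watch_rule"
for t in $tables; do
  echo "ALTER TABLE $t CONVERT TO CHARACTER SET utf8;"
done
# Review the generated statements, then feed them to MySQL, e.g.:
#   ... | mysql -u root heat
```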

2015-08-28 08:21:40 -0500 commented answer Instance failing to spawn, logs show `NovaException: Unexpected vif_type=binding_failed`

Salman, I was not the one who posted this answer. That was Liam.

2015-08-24 11:13:53 -0500 commented question How to get a list of projects and provide it as a dropdown menu in openstack

Have you looked at the OpenStack API documentation? http://developer.openstack.org/api-re...

2015-08-24 09:22:27 -0500 answered a question neutron-rootwrap appears unable to kill dnsmasq processes

It looks like you have hit this bug, which I reported a few weeks ago. The problem is that rootwrap is attempting to validate an attempt to run the kill command with root privileges, and it is trying to match the path of the process being killed against a list of known paths (like /usr/bin/dnsmasq).

The error occurs because during an upgrade the file is first renamed and then deleted. The result is that the path stored in /proc/<PID>/exe no longer matches any of the known paths.

This change (which has not yet merged) should correct the problem.
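You can check whether a running process is affected (sketch; assumes a dnsmasq process exists on the host):

```shell
# After an in-place upgrade, the kernel marks the old binary as deleted in
# /proc/<pid>/exe, so the link target no longer matches rootwrap's path list
pid=$(pgrep dnsmasq | head -n 1)
readlink "/proc/$pid/exe"
# A result like "/usr/sbin/dnsmasq (deleted)" confirms the mismatch
```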

2015-08-14 20:24:06 -0500 answered a question what is use of ec2-credentials table in keystone database?

That table is used to support the Amazon EC2 compatible API. Authentication to the EC2 API requires an access key and a secret key. This table stores the credentials used to support this authentication.

2015-08-14 20:18:58 -0500 answered a question modify virtual machine image before upload it to image service (GLANCE)

There is a version of cloud-init for Windows, which can be used to perform arbitrary system configuration tasks when you first boot an image in OpenStack. This article talks about how you can create a Windows image that uses cloudbase-init.

2015-08-14 20:15:49 -0500 answered a question If commit message doesn't contain "Closes-Bug"

There may not be a bug report associated with those changes.

The second one you listed appears to be part of the work involved in implementing the arch-guide blueprint. This is a very common case -- a lot of the work going into OpenStack is not related to bug fixes, but is new feature implementation.

2015-08-14 20:12:02 -0500 commented question instance creations are failing in Kilo

Are there any ERROR messages showing up in your nova logs? I would probably start with the nova-compute log on your compute hosts.

2015-08-14 20:10:49 -0500 commented question Slow Neutron Network on Outbound Nova-Docker Instance Traffic

What is the MTU on your physical NICS? In an environment with a standard 1500 byte MTU, you have to reduce the size of the instance MTU (in this case, your Docker containers) to allow space for the tunneling metadata. E.g., set your container MTU to 1450 and see if you still see the same issue.
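For reference, the arithmetic and the command (sketch; the interface name eth0 is an assumption, and 50 bytes is the approximate VXLAN overhead — GRE is somewhat less):

```shell
# Physical MTU minus tunnel encapsulation overhead gives the instance MTU
phys_mtu=1500
vxlan_overhead=50
echo $((phys_mtu - vxlan_overhead))   # prints 1450

# Apply inside the container/instance (interface name is an assumption)
ip link set dev eth0 mtu 1450
```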

2015-08-14 15:58:32 -0500 commented question neutron-rootwrap appears unable to kill dnsmasq processes

Had you at any point upgraded dnsmasq (while the dhcp agent was running)? If you have automatic upgrades enabled, check /var/log/yum.log to see if such an upgrade happened.
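A grep like this will show it; yum logs one package event per line:

```shell
# Look for a dnsmasq upgrade in the yum history log
grep -i 'updated: dnsmasq' /var/log/yum.log
```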

2015-07-17 18:07:52 -0500 received badge  Good Answer (source)
2015-07-10 07:23:01 -0500 received badge  Nice Answer (source)
2015-07-02 16:12:25 -0500 received badge  Nice Answer (source)
2015-07-02 15:53:00 -0500 received badge  Good Answer (source)
2015-05-22 20:08:45 -0500 received badge  Good Answer (source)