
bcollins's profile - activity

2018-08-17 22:42:48 -0600 received badge  Taxonomist
2018-07-04 05:32:41 -0600 received badge  Famous Question (source)
2018-06-13 07:54:58 -0600 received badge  Famous Question (source)
2017-06-27 08:52:25 -0600 received badge  Famous Question (source)
2017-06-27 08:52:25 -0600 received badge  Notable Question (source)
2017-04-19 16:28:52 -0600 received badge  Famous Question (source)
2017-04-15 14:56:33 -0600 received badge  Notable Question (source)
2017-04-15 14:56:33 -0600 received badge  Popular Question (source)
2017-03-30 15:43:22 -0600 received badge  Famous Question (source)
2017-03-28 08:17:02 -0600 received badge  Notable Question (source)
2017-03-27 12:21:08 -0600 received badge  Famous Question (source)
2017-03-14 13:55:06 -0600 received badge  Popular Question (source)
2017-03-10 12:09:45 -0600 received badge  Famous Question (source)
2017-03-05 10:25:28 -0600 answered a question The request you have made requires authentication. (HTTP 401) (Request-ID: req-51dd1e06-12af-4074-88d8-5078ae5295e5)

You need to source credentials first, such as an adminrc.sh:

Be sure your OS_TENANT_ID matches the project you are trying to access.

    #!/usr/bin/env bash

    # To use an OpenStack cloud you need to authenticate against the Identity
    # service named keystone, which returns a **Token** and **Service Catalog**.
    # The catalog contains the endpoints for all services the user/tenant has
    # access to - such as Compute, Image Service, Identity, Object Storage, Block
    # Storage, and Networking (code-named nova, glance, keystone, swift,
    # cinder, and neutron).
    #
    # *NOTE*: Using the 2.0 *Identity API* does not necessarily mean any other
    # OpenStack API is version 2.0. For example, your cloud provider may implement
    # Image API v1.1, Block Storage API v2, and Compute API v2.0. OS_AUTH_URL is
    # only for the Identity API served through keystone.
    export OS_AUTH_URL=http://keystone:5000/v2.0

    # With the addition of Keystone we have standardized on the term **tenant**
    # as the entity that owns the resources.
    export OS_TENANT_ID=d53fb31b2a7b4442ad2fdbaddb57fba3
    export OS_TENANT_NAME="admin"

    # unsetting v3 items in case set
    unset OS_PROJECT_ID
    unset OS_PROJECT_NAME
    unset OS_USER_DOMAIN_NAME

    # In addition to the owning entity (tenant), OpenStack stores the entity
    # performing the action as the **user**.
    export OS_USERNAME="admin"

    # With Keystone you pass the keystone password.
    echo "Please enter your OpenStack Password: "
    read -sr OS_PASSWORD_INPUT
    export OS_PASSWORD=$OS_PASSWORD_INPUT

    # If your configuration has multiple regions, we set that information here.
    # OS_REGION_NAME is optional and only valid in certain environments.
    export OS_REGION_NAME="US-ALPH"
    # Don't leave a blank variable, unset it if it was empty
    if [ -z "$OS_REGION_NAME" ]; then unset OS_REGION_NAME; fi

    export PS1='[\u@\h \W(${OS_USERNAME}@${OS_TENANT_NAME})]\$ '
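
To use it, source the file in the current shell and then run a quick sanity check; "openstack token issue" is one way to confirm the credentials work (this assumes python-openstackclient is installed):

    # source the rc file so the OS_* variables are exported in this shell
    source adminrc.sh

    # if OS_AUTH_URL and the credentials are right, this prints a token
    # instead of HTTP 401
    openstack token issue
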
2017-03-05 10:10:59 -0600 asked a question cinder scheduler Filtering removed all hosts for the request - ocata centos7 RDO

This is a new deployment of the cinder services with known-working NFS mounts. This really used to work... I'm confused.

==> /var/log/cinder/scheduler.log <==
2017-03-05 11:07:22.565 19551 INFO cinder.scheduler.base_filter [req-f2479151-3524-438f-8f90-8868554d4944 - - - - -] Filtering removed all hosts for the request with volume ID 'f2b514f1-2201-4e93-b9c5-b09e1694e0ae'. Filter results: AvailabilityZoneFilter: (start: 2, end: 2), CapacityFilter: (start: 2, end: 2), CapabilitiesFilter: (start: 2, end: 0)
2017-03-05 11:07:22.566 19551 WARNING cinder.scheduler.filter_scheduler [req-f2479151-3524-438f-8f90-8868554d4944 - - - - -] No weighed backend found for volume with properties: {'name': u'SSHD_NAS', 'qos_specs_id': None, 'deleted': False, 'created_at': '2017-03-05T14:45:43.000000', 'updated_at': None, 'extra_specs': {u'volume_type_name': u'SSHD_NAS', u'volume_backend_name': u'SSHD_NAS'}, 'is_public': True, 'deleted_at': None, 'id': '8fa9c023-dc5c-46bf-9848-2eae987b8aa9', 'projects': [], 'description': u''}
2017-03-05 11:07:22.567 19551 INFO cinder.message.api [req-f2479151-3524-438f-8f90-8868554d4944 - - - - -] Creating message record for request_id = req-f2479151-3524-438f-8f90-8868554d4944


==> /var/log/cinder/scheduler.log <==
2017-03-05 11:07:22.623 19551 ERROR cinder.scheduler.flows.create_volume [req-f2479151-3524-438f-8f90-8868554d4944 - - - - -] Failed to run task cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:create: No valid backend was found. No weighed backends available
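
For reference, a CapabilitiesFilter result of (start: 2, end: 0) usually means the volume type's extra specs do not match what any backend reports, so comparing the type against the configured backend name seems like a reasonable first check (commands assume admin credentials are sourced):

    # show the extra specs attached to the volume type
    openstack volume type show SSHD_NAS

    # compare against the backend names actually configured and reported
    grep -n "volume_backend_name" /etc/cinder/cinder.conf
    cinder get-pools --detail
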
2017-03-05 08:34:51 -0600 commented answer default security group rules

I'm still searching for this as well. It's nice information to know.

2017-03-04 18:10:17 -0600 answered a question ocata upgrade - CentOS7 / RDO / GlusterFS3.8 No module named glusterfs (solved)

So if you run RDO, went from Mitaka to Newton on CentOS 7, then moved to Ocata and lost the GlusterFS driver as I did: move your Gluster storage to NFS-Ganesha and remount it as NFS.

If your backend section used glusterfs, replace it with NFS options as in the example below.

[SSHD_NAS]
volume_driver = cinder.volume.drivers.nfs.NfsDriver
nfs_shares_config = /etc/cinder/nfs_shares.conf
volume_backend_name = SSHD_NAS
nfs_sparsed_volumes = True

https://gluster.readthedocs.io/en/lat...
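
For completeness, the shares file the NFS driver reads is just a list of exports, one "host:/export" per line; the host name and export path below are placeholders for your own NFS-Ganesha endpoint:

    # hypothetical nfs_shares.conf entry -- replace with your Ganesha export
    echo "ganesha-host:/openstack_cinder" > /etc/cinder/nfs_shares.conf
    chown root:cinder /etc/cinder/nfs_shares.conf
    chmod 640 /etc/cinder/nfs_shares.conf

    # restart the volume service so the new backend section is loaded
    systemctl restart openstack-cinder-volume
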

2017-03-04 08:12:43 -0600 commented question cinder services is not starting for mitaka centos 7

What do the cinder logs say?

2017-03-04 08:09:23 -0600 answered a question ocata install nova error

I had to push from Mitaka to Newton to Ocata to get the nova cell_v2 commands like create. My solution was to ensure transport_url is correct and alter the DB entry for cell0 to point at nova_api_cell0, or, as I did, drop the cell0 DB and rerun Packstack from Ocata so that it creates nova_cell0 and maps it correctly. https://ask.openstack.org/en/question...
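
Roughly, the checks looked like this (the database connection string is an assumption for a typical RDO controller, adjust it to your deploy):

    # list the cell mappings nova_api knows about; cell0's entry should
    # point at a database that actually exists
    nova-manage cell_v2 list_cells

    # if the cell0 mapping is missing, create it pointing at the right database
    nova-manage cell_v2 map_cell0 \
        --database_connection mysql+pymysql://nova:PASSWORD@controller/nova_cell0

    # map the existing compute hosts into the main cell
    nova-manage cell_v2 discover_hosts
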

2017-03-03 18:09:04 -0600 commented question COA exam openstack version Because Liberty was Out Dated ?
2017-03-03 17:58:14 -0600 commented question ERROR nova.virt.libvirt.driver with Glusterfs in /var/lib/nova/instances

What about the cinder volume log?

2017-03-03 17:54:13 -0600 commented question ocata install nova error

I have also found in RDO that the upgrade to Newton created nova_api_cell0, whereas in Ocata nova creates nova_cell0, which is what the default cell0 mapping points to...
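
A quick way to see which database the cell0 mapping actually points at (assuming root access to the local MariaDB):

    # the database_connection column for cell0 has to name a database
    # that really exists on this controller
    mysql -e "SELECT name, database_connection FROM nova_api.cell_mappings;"
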

2017-03-03 17:19:46 -0600 answered a question RDO install newton in the dashboard can't create images.

If this does not work, check the permissions on your glance upload dir (commonly some form of tmp with 777) and set your glance dir to 774.
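
Something along these lines, assuming the default RDO file-store path under /var/lib/glance (adjust if your store or upload scratch dir lives elsewhere):

    # check who owns the image store and what mode it carries
    ls -ld /var/lib/glance /var/lib/glance/images

    # the glance service user should own it; then tighten the mode
    chown -R glance:glance /var/lib/glance/images
    chmod 774 /var/lib/glance/images
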

2017-03-03 15:44:18 -0600 commented answer block-migration is not working from command line

From the request it looks like you are trying a live migration; the error says "server-1 is not on shared storage: Live migration can not be used without shared storage".

Power the server off and try to migrate it, or remove the --live option.
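
In other words, something like this:

    # cold migration: stop the guest first, then migrate without --live
    nova stop server-1
    nova migrate server-1

    # or, if there is no shared storage, request a block migration explicitly
    nova live-migration --block-migrate server-1
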

2017-03-03 10:35:05 -0600 answered a question block-migration is not working from command line

Is your /var/lib/nova/instances shared and visible between the two servers? I know that for live migration to work, each server's instances directory must be on a shared mount.
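
A quick way to confirm the instances path really is the same shared mount on both nodes (the host names are placeholders):

    # check what is mounted at the instances path on each compute
    ssh compute1 "df -hT /var/lib/nova/instances"
    ssh compute2 "df -hT /var/lib/nova/instances"

    # drop a marker file on one node and make sure the other one sees it
    ssh compute1 "touch /var/lib/nova/instances/.shared_check"
    ssh compute2 "ls -l /var/lib/nova/instances/.shared_check"
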

2017-03-03 09:28:23 -0600 received badge  Notable Question (source)
2017-03-03 09:26:37 -0600 answered a question default security group rules

If you log in to the project and open the project's default security group, you can edit it, and each machine deployed in that project will use that default if no other group is specified.
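
The same edit can be done from the CLI; the rules below are only an example (ICMP plus SSH from anywhere):

    # add rules to the project's default security group
    openstack security group rule create --protocol icmp default
    openstack security group rule create --protocol tcp --dst-port 22 \
        --remote-ip 0.0.0.0/0 default
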

2017-03-03 08:20:05 -0600 answered a question Unable to login to horizon

I broke my dashboard at one point as well; this post may be useful:

https://ask.openstack.org/en/question...

2017-03-02 18:39:37 -0600 received badge  Notable Question (source)
2017-03-02 17:48:52 -0600 received badge  Popular Question (source)
2017-03-02 13:31:54 -0600 edited question ocata upgrade - CentOS7 / RDO / GlusterFS3.8 No module named glusterfs (solved)

From the cinder release notes:

--- upgrade: - The GlusterFS volume driver, which was deprecated in the Newton release, has been removed.

Any idea how I can get this back?

The Gluster volumes are all visible and mountable. I validated all of this, as I can see instances in Glance and volumes; the controller even has a scratch mount share from the same Gluster (/openstack_scratch type fuse.glusterfs), so I know it is working.

The problem is that after the upgrade to Ocata, with the same cinder.conf settings I have always used, I am now getting this error, even though the Python modules exist:

[root@platform2 site-packages]# ls -ald cinder*
drwxr-xr-x 24 root root 4096 Mar  2 09:07 cinder
drwxr-xr-x  2 root root  143 Mar  2 09:07 cinder-10.0.0-py2.7.egg-info
drwxr-xr-x  6 root root 4096 Feb 26 17:51 cinderclient
drwxr-xr-x  2 root root  136 Mar  2 19:09 cinder_data
drwxr-xr-x  2 root root  112 Mar  2 19:09 cinder_data-0.0.1-py2.7.egg-info
drwxr-xr-x  2 root root   44 Mar  2 09:07 cinder_tests.egg-info

[root@platform2 site-packages]# ls -ald gluster*
drwxr-xr-x 5 root root   96 Mar  1 10:38 gluster
drwxr-xr-x 2 root root  112 Mar  1 10:36 glustercli-0.2-py2.7.egg-info
drwxr-xr-x 4 root root   68 Mar  1 10:38 gluster_georep_tools
drwxr-xr-x 2 root root 4096 Mar  1 10:38 gluster_georep_tools-0.2-py2.7.egg-info
drwxr-xr-x 2 root root   90 Mar  2 19:07 gluster_stats
drwxr-xr-x 2 root root  147 Mar  2 19:07 gluster_stats-1.2.2.dist-info
[root@platform2 site-packages]#

Debug output from starting the service:

[root@platform2 sahara]# 2017-02-28 20:06:11.782 17241 WARNING oslo_reports.guru_meditation_report [-] Guru meditation now registers SIGUSR1 and SIGUSR2 by default for backward compatibility. SIGUSR1 will no longer be registered in a future release, so please use SIGUSR2 to generate reports.
2017-02-28 20:06:11.782 17241 DEBUG oslo_concurrency.lockutils [-] Acquired semaphore "singleton_lock" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:212
2017-02-28 20:06:11.783 17241 DEBUG oslo_concurrency.lockutils [-] Releasing semaphore "singleton_lock" lock /usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py:225
2017-02-28 20:06:11.802 17241 WARNING oslo_config.cfg [-] Option "notification_driver" from group "DEFAULT" is deprecated. Use option "driver" from group "oslo_messaging_notifications".
2017-02-28 20:06:12.011 17241 DEBUG oslo_db.sqlalchemy.engines [req-d8c9e611-7236-46d0-ba93-d68cb911a093 - - - - -] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py:261
2017-02-28 20:06:12.164 17241 INFO cinder.rpc [req-d8c9e611-7236-46d0-ba93-d68cb911a093 - - - - -] Automatically selected cinder-volume objects version 1.21 as minimum service version.
2017-02-28 20:06:12.178 17241 INFO cinder.rpc [req-d8c9e611-7236-46d0-ba93-d68cb911a093 - - - - -] Automatically selected cinder-scheduler objects version 1.21 as minimum service version.
2017-02-28 20:06:12.184 17241 INFO cinder.rpc [req-d8c9e611-7236-46d0-ba93-d68cb911a093 - - - - -] Automatically selected cinder-scheduler RPC version 3.5 as minimum service version.
2017-02-28 20:06:12.298 17241 DEBUG oslo_db.sqlalchemy.engines [req-d8c9e611-7236-46d0-ba93-d68cb911a093 - - - - -] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py:261
2017-02-28 20:06:12.334 17241 INFO cinder.volume.manager [req-d8c9e611-7236-46d0-ba93-d68cb911a093 ...
(more)
2017-03-01 08:29:49 -0600 received badge  Popular Question (source)
2017-03-01 08:12:48 -0600 received badge  Notable Question (source)
2017-03-01 08:08:36 -0600 marked best answer WARNING horizon.tables.actions Action (u'scheduled deletion of instance', u'external-testing') Failed for Instance could not be found. (HTTP 404)

After the upgrade from Newton to Ocata I am not able to power on instances, create instances, or delete instances...

2017-02-28 00:43:25,837 7853 WARNING horizon.tables.actions Action (u'scheduled deletion of instance', u'external-testing') Failed for Instance 23a8a1d2-bfe9-432d-ab14-b9d89c352345 could not be found. (HTTP 404) (Request-ID: req-918cee88-4c06-4737-aadd-a1e507431f7e).

No idea yet, but I will start debugging and add more.

2017-03-01 07:55:52 -0600 commented answer WARNING horizon.tables.actions Action (u'scheduled deletion of instance', u'external-testing') Failed for Instance could not be found. (HTTP 404)

That is exactly what happened!!! lol. I searched and searched for this guest entry and found it in the DB under nova_api.cell_mappings.

2017-03-01 07:01:49 -0600 received badge  Popular Question (source)
2017-03-01 03:52:57 -0600 received badge  Notable Question (source)