
messeiry's profile - activity

2017-10-17 08:37:42 -0500 received badge  Notable Question (source)
2017-05-31 07:58:46 -0500 received badge  Enthusiast
2017-05-30 12:42:33 -0500 asked a question Object DELETE failed: There was a conflict when trying t Removing the current plan files

I deployed Red Hat OpenStack before and the deployment was successful. I removed the overcloud using heat stack-delete overcloud, ran introspection again, and then re-deployed.

I am getting this weird error:

[stack@director ~]$ ./deploy-all-SSL.sh
Object DELETE failed: http://192.0.2.1:8080/v1/AUTH_d9ebd9ca4226404fb1456bb2b7849652/overcloud/all-nodes-validation.yaml 409 Conflict  [first 60 chars of response] <html><h1>Conflict</h1><p>There was a conflict when trying t
Removing the current plan files

I don't know what it means or how I can drill down to fix it.

The script I am using to deploy is "deploy-all-SSL.sh":

#!/bin/bash
openstack overcloud deploy \
 --templates /usr/share/openstack-tripleo-heat-templates/ \
 -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
 `for n in ~/templates/*environment*.yaml; do echo -n "-e $n "; done` \
 -e /home/stack/templates/enable-tls.yaml \
 -e /home/stack/templates/inject-trust-anchor.yaml \
 -e /usr/share/openstack-tripleo-heat-templates/environments/tls-endpoints-public-ip.yaml \
 --ntp-server 192.0.2.200 \
 --control-flavor control \
 --compute-flavor compute \
 --ceph-storage-flavor ceph-storage \
 --control-scale 3 \
 --compute-scale 3 \
 --ceph-storage-scale 3 \
 --neutron-tunnel-types vxlan \
 --neutron-network-type vxlan | tee openstack-deployment-ssl.log
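As an aside on the script itself: the backtick subshell expands every file matching ~/templates/*environment*.yaml into its own -e flag, so each environment file is passed to the deploy command separately. A minimal local sketch of that expansion (the /tmp paths are throwaway stand-ins, not the real templates):

```shell
# create a couple of throwaway environment files to stand in for ~/templates
mkdir -p /tmp/templates
touch /tmp/templates/net-environment.yaml /tmp/templates/storage-environment.yaml

# build the argument string the same way the deploy script's backtick loop does:
# one "-e <file>" pair per matching template, in glob (alphabetical) order
args=$(for n in /tmp/templates/*environment*.yaml; do echo -n "-e $n "; done)
echo "$args"
```

One thing to watch with this trick: the expansion relies on word splitting, so a template filename containing spaces would break the argument list.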
2017-05-11 06:15:46 -0500 received badge  Popular Question (source)
2017-04-12 06:51:42 -0500 answered a question openstack baremetal introspection data save [UUID] not found

I figured out the issue I had: it's in the packages from when I first registered my director machine. I accidentally enabled the repo for OpenStack 7 instead of 10.

Once I enabled the correct repo and did an update, I can see the following commands. Now I will re-deploy the director with the correct repos.

baremetal introspection abort              quota set
baremetal introspection bulk start         quota show
baremetal introspection bulk status        recordset create
baremetal introspection data save          recordset delete
baremetal introspection reprocess          recordset list
baremetal introspection rule delete        recordset set
baremetal introspection rule import        recordset show
baremetal introspection rule list          resource member create
baremetal introspection rule purge         resource member delete
baremetal introspection rule show          resource member list
baremetal introspection start              resource member show
baremetal introspection status             resource member update

Thanks!

2017-04-12 05:09:53 -0500 asked a question openstack baremetal introspection data save [UUID] not found

Hello Team

I am trying to deploy TripleO Red Hat OpenStack. The introspection finishes fine with no errors; now I need to access the data collected. How can I do that?

I found a command online:

openstack baremetal introspection data save [UUID]

I am getting the error:

[stack@director ~]$ openstack baremetal introspection data save
ERROR: openstack Unknown command ['baremetal', 'introspection', 'data', 'save']

Can anyone help, please?
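For what it's worth, once the command is available the saved introspection data is plain JSON, so it can be inspected with standard tools. A sketch using a made-up file (the field names below mirror the kind of data ironic-inspector stores, but the values and the file itself are hypothetical):

```shell
# hypothetical introspection data; a real file would come from something like
#   openstack baremetal introspection data save <UUID> > node-data.json
cat > /tmp/node-data.json <<'EOF'
{"cpus": 8, "memory_mb": 16384, "interfaces": {"eth0": {"mac": "52:54:00:aa:bb:cc"}}}
EOF

# pull a couple of fields out without needing jq
fields=$(python3 -c "import json; d = json.load(open('/tmp/node-data.json')); print(d['cpus'], d['memory_mb'])")
echo "$fields"
```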

2017-02-27 04:12:17 -0500 received badge  Famous Question (source)
2017-02-23 01:41:20 -0500 received badge  Famous Question (source)
2017-02-17 04:37:00 -0500 received badge  Notable Question (source)
2017-02-17 04:37:00 -0500 received badge  Popular Question (source)
2016-12-27 09:14:53 -0500 received badge  Student (source)
2016-12-27 09:14:52 -0500 received badge  Notable Question (source)
2016-12-27 09:14:52 -0500 received badge  Popular Question (source)
2016-12-13 16:55:45 -0500 asked a question There are not enough hosts available Docker Swarm Creation Cluster error in Magnum

Hello, I am receiving the following error in Magnum when I try to create a Docker Swarm cluster:

2016-12-13 14:53:27.227 25775 ERROR magnum.conductor.handlers.cluster_conductor [req-5bcf5521-3ebd-40fd-ac71-225c407147b2 admin admin - - -] Cluster error, stack status: CREATE_FAILED, stack_id: 0524c8f4-bdb9-442b-8a82-64d662690a78, reason: Resource CREATE failed: ResourceInError: resources.swarm_masters.resources[0].resources.swarm_master: Went to status ERROR due to "Message: No valid host was found. There are not enough hosts available., Code: 500"

Any clues?

2016-12-09 19:21:12 -0500 answered a question cinder-volume CappedVersionUnknown

I found the workaround to make this all work. The first error is:

2016-12-08 14:41:42.642 10537 ERROR cinder.cmd.volume CappedVersionUnknown: Unrecoverable Error: Versioned Objects in DB are capped to unknown version 1.11.

vim /usr/lib/python2.7/dist-packages/cinder/objects/base.py

# Hacked by messeiry: error in version 1.11. This adds version 1.11 to the supported versions; the code on GitHub is much more updated than this one.
OBJ_VERSIONS.add('1.11', {'GroupSnapshot': '1.0', 'GroupSnapshotList': '1.0','Group': '1.1'})

After that I received the following errors, and I edited those files to work around them. It's basically an issue with the versions I have installed for cinder-volume and the scheduler, but also a problem in oslo.messaging. These should be updated in the OpenStack documentation for Newton; otherwise such installations will fail. I will try at some other time to replicate the issue with different installation packages, or just deploy directly from source on GitHub.
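To make the failure mode concrete: the serializer compares the version cap recorded in the DB against a known version-history map, and refuses to start when the cap is absent from that map. A simplified stand-in for that check (not cinder's actual code), showing why registering the missing '1.11' entry makes startup succeed:

```shell
# simplified stand-in for cinder's version-cap check (not the real code)
result=$(python3 - <<'EOF'
# the cap read from the DB must exist in the version-history map,
# otherwise startup fails (this is what CappedVersionUnknown signals)
OBJ_VERSIONS = {'1.10': {'Volume': '1.3'}}  # history map that lacks '1.11'

def check_cap(cap):
    if cap not in OBJ_VERSIONS:
        raise RuntimeError('capped to unknown version %s' % cap)
    return True

try:
    check_cap('1.11')
except RuntimeError as e:
    print('before workaround:', e)

# registering the missing entry, as the OBJ_VERSIONS.add('1.11', ...) hack
# above does, makes the same check pass
OBJ_VERSIONS['1.11'] = {'Group': '1.1'}
print('after workaround:', check_cap('1.11'))
EOF
)
echo "$result"
```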

The following error appeared:

2016-12-09 11:50:31.349 94962 INFO oslo_service.service [req-da18d2d1-4048-4f26-9288-bc88061822b8 - - - - -] Child 109572 exited with status 1
2016-12-09 11:50:31.358 109601 INFO cinder.service [-] Starting cinder-volume node (version 8.1.1)
2016-12-09 11:50:31.360 109601 INFO cinder.volume.manager [req-f861837a-c0a8-47b4-9264-93dfc3d76d65 - - - - -] Starting volume driver LVMVolumeDriver (3.0.0)
2016-12-09 11:50:32.241 109601 ERROR oslo_service.service [req-f861837a-c0a8-47b4-9264-93dfc3d76d65 - - - - -] Error starting thread.
2016-12-09 11:50:32.241 109601 ERROR oslo_service.service Traceback (most recent call last):
2016-12-09 11:50:32.241 109601 ERROR oslo_service.service   File "/usr/lib/python2.7/dist-packages/oslo_service/service.py", line 680, in run_service
2016-12-09 11:50:32.241 109601 ERROR oslo_service.service     service.start()
2016-12-09 11:50:32.241 109601 ERROR oslo_service.service   File "/usr/lib/python2.7/dist-packages/cinder/service.py", line 166, in start
2016-12-09 11:50:32.241 109601 ERROR oslo_service.service     self.manager.init_host()
2016-12-09 11:50:32.241 109601 ERROR oslo_service.service   File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 513, in init_host
2016-12-09 11:50:32.241 109601 ERROR oslo_service.service     self.publish_service_capabilities(ctxt)
2016-12-09 11:50:32.241 109601 ERROR oslo_service.service   File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 2047, in publish_service_capabilities
2016-12-09 11:50:32.241 109601 ERROR oslo_service.service     self._publish_service_capabilities(context)
2016-12-09 11:50:32.241 109601 ERROR oslo_service.service   File "/usr/lib/python2.7/dist-packages/cinder/manager.py", line 173, in _publish_service_capabilities
2016-12-09 11:50:32.241 109601 ERROR oslo_service.service     self.last_capabilities)
2016-12-09 11:50:32.241 109601 ERROR oslo_service.service   File "/usr/lib/python2.7/dist-packages/cinder/scheduler/rpcapi.py", line 165, in update_service_capabilities
2016-12-09 11:50:32.241 109601 ERROR oslo_service.service     capabilities=capabilities)
2016-12-09 11:50:32.241 109601 ERROR oslo_service.service   File "/usr/lib/python2.7/dist-packages/oslo_messaging/rpc/client.py", line 135, in cast
2016-12-09 11:50:32.241 109601 ERROR oslo_service.service     if self.version_cap:
2016-12-09 11:50:32.241 109601 ERROR oslo_service.service   File "/usr/lib/python2.7/dist-packages/oslo_messaging ...
2016-12-09 19:21:12 -0500 commented answer cinder-volume service is down

Hello, I am having the same issue. Did you reach a resolution? It's so frustrating; there aren't enough posts on this. https://ask.openstack.org/en/question/100163/cinder-volume-cappedversionunknown/

2016-12-09 19:21:10 -0500 asked a question cinder-volume CappedVersionUnknown

Hello Team, I am receiving the following error on the cinder-volume node. I am implementing OpenStack Newton.

I followed the guide online at http://docs.openstack.org/newton/install-guide-ubuntu/cinder-storage-install.html

2016-12-08 14:41:40.901 10516 ERROR cinder.cmd.volume [req-a15d7a68-d2e5-4001-8cd0-92281287b5a7 - - - - -] No volume service(s) started successfully, terminating.
2016-12-08 14:41:42.196 10537 WARNING oslo_reports.guru_meditation_report [-] Guru mediation now registers SIGUSR1 and SIGUSR2 by default for backward compatibility. SIGUSR1 will no longer be registered in a future release, so please use SIGUSR2 to generate reports.
2016-12-08 14:41:42.471 10537 WARNING py.warnings [req-0ec6b251-79a9-4055-aac5-c907de77e700 - - - - -] /usr/lib/python2.7/dist-packages/oslo_db/sqlalchemy/enginefacade.py:241: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
2016-12-08 14:41:42.642 10537 ERROR cinder.cmd.volume [req-0ec6b251-79a9-4055-aac5-c907de77e700 - - - - -] Volume service cinder@cinder failed to start.
2016-12-08 14:41:42.642 10537 ERROR cinder.cmd.volume Traceback (most recent call last):
2016-12-08 14:41:42.642 10537 ERROR cinder.cmd.volume   File "/usr/lib/python2.7/dist-packages/cinder/cmd/volume.py", line 81, in main
2016-12-08 14:41:42.642 10537 ERROR cinder.cmd.volume     binary='cinder-volume')
2016-12-08 14:41:42.642 10537 ERROR cinder.cmd.volume   File "/usr/lib/python2.7/dist-packages/cinder/service.py", line 268, in create
2016-12-08 14:41:42.642 10537 ERROR cinder.cmd.volume     service_name=service_name)
2016-12-08 14:41:42.642 10537 ERROR cinder.cmd.volume   File "/usr/lib/python2.7/dist-packages/cinder/service.py", line 150, in __init__
2016-12-08 14:41:42.642 10537 ERROR cinder.cmd.volume     *args, **kwargs)
2016-12-08 14:41:42.642 10537 ERROR cinder.cmd.volume   File "/usr/lib/python2.7/dist-packages/cinder/volume/manager.py", line 235, in __init__
2016-12-08 14:41:42.642 10537 ERROR cinder.cmd.volume     *args, **kwargs)
2016-12-08 14:41:42.642 10537 ERROR cinder.cmd.volume   File "/usr/lib/python2.7/dist-packages/cinder/manager.py", line 156, in __init__
2016-12-08 14:41:42.642 10537 ERROR cinder.cmd.volume     self.scheduler_rpcapi = scheduler_rpcapi.SchedulerAPI()
2016-12-08 14:41:42.642 10537 ERROR cinder.cmd.volume   File "/usr/lib/python2.7/dist-packages/cinder/rpc.py", line 188, in __init__
2016-12-08 14:41:42.642 10537 ERROR cinder.cmd.volume     serializer = base.CinderObjectSerializer(obj_version_cap)
2016-12-08 14:41:42.642 10537 ERROR cinder.cmd.volume   File "/usr/lib/python2.7/dist-packages/cinder/objects/base.py", line 412, in __init__
2016-12-08 14:41:42.642 10537 ERROR cinder.cmd.volume     raise exception.CappedVersionUnknown(version=version_cap)
2016-12-08 14:41:42.642 10537 ERROR cinder.cmd.volume CappedVersionUnknown: Unrecoverable Error: Versioned Objects in DB are capped to unknown version 1.11.
2016-12-08 14:41:42.642 10537 ERROR cinder.cmd.volume 
2016-12-08 14:41:42.646 10537 ERROR cinder.cmd.volume [req-0ec6b251-79a9-4055-aac5-c907de77e700 - - - - -] No volume service(s) started successfully, terminating.

^C

The output of vgs and pvs:

root@cinder:/home/messeiry# vgs
  VG             #PV #LV #SN Attr   VSize   VFree  
  cinder-volumes   1   0   0 wz--n- 100.00g 100.00g
root@cinder:/home/messeiry# pvs
  PV         VG             Fmt  Attr PSize   PFree  
  /dev/sdb   cinder-volumes lvm2 a--  100.00g 100.00g ...