
sunnyarora's profile - activity

2018-10-05 04:36:32 -0500 commented answer How to delete volume with available status and attached to

Hope this is fixed now in 2018; back in 2015 the steps you gave were not working, and that's why we adopted the workaround. -Thanks

2017-12-01 12:49:43 -0500 received badge  Nice Answer (source)
2016-12-06 10:34:55 -0500 received badge  Nice Answer (source)
2016-10-26 06:44:08 -0500 edited question No valid hosts were found when more than 2 instances
Error: Failed to perform requested operation on instance "test_vol", the instance has an error status: Please try again later [Error: No valid host was found. There are not enough hosts available.].

I have 2 instances happily running on 2 compute hosts with 32 GB of RAM each, and if I delete one of the instances I can happily create another. It seems to be some sort of filtering issue, but I am unable to figure it out.

2016-10-25 18:52:32.022 961 WARNING nova.scheduler.host_manager [req-3fb7ced9-654e-4d5d-a34a-8b7a7b80846b fbcf075d129b4934ad5c203cc1f8bca8 57c72eeae27a4b6dbca7e9ddf50cf521 - - -] Host compute1 has more disk space than database expected (39 GB > 19 GB)
2016-10-25 18:52:32.023 961 WARNING nova.scheduler.host_manager [req-3fb7ced9-654e-4d5d-a34a-8b7a7b80846b fbcf075d129b4934ad5c203cc1f8bca8 57c72eeae27a4b6dbca7e9ddf50cf521 - - -] Host compute2 has more disk space than database expected (39 GB > 19 GB)
2016-10-25 18:52:32.024 961 INFO nova.filters [req-3fb7ced9-654e-4d5d-a34a-8b7a7b80846b fbcf075d129b4934ad5c203cc1f8bca8 57c72eeae27a4b6dbca7e9ddf50cf521 - - -] Filter DiskFilter returned 0 hosts
2016-10-25 18:52:32.024 961 INFO nova.filters [req-3fb7ced9-654e-4d5d-a34a-8b7a7b80846b fbcf075d129b4934ad5c203cc1f8bca8 57c72eeae27a4b6dbca7e9ddf50cf521 - - -] Filtering removed all hosts for the request with instance ID '7f087f04-dd9c-43c5-a559-44bcdc189da1'. Filter results: ['RetryFilter: (start: 2, end: 2)', 'AvailabilityZoneFilter: (start: 2, end: 2)', 'RamFilter: (start: 2, end: 2)', 'DiskFilter: (start: 2, end: 0)']
2016-10-17 22:44:54 -0500 commented question neutron policy didn't work

Did you restart all Neutron services for this to take effect?

2016-10-16 23:15:40 -0500 edited question Mitaka to Newton upgrade (Cinder complains ServiceTooOld)

I upgraded from Mitaka to Newton, and after the upgrade Cinder dies with the error below. I am not sure what I need to do to correct the issue so Cinder will start.

2016-10-14 15:41:32.431 1379701 INFO cinder.rpc [req-9b1d2a01-2747-4320-beed-508481bd3d73 - - - - -] Automatically selected cinder-scheduler objects version 1.11 as minimum service version.
2016-10-14 15:41:32.438 1379701 INFO cinder.rpc [req-9b1d2a01-2747-4320-beed-508481bd3d73 - - - - -] Automatically selected cinder-scheduler RPC version 3.0 as minimum service version.
2016-10-14 15:41:32.446 1379701 CRITICAL cinder [req-9b1d2a01-2747-4320-beed-508481bd3d73 - - - - -] ServiceTooOld: One of the services is in Liberty version. We do not provide backward compatibility with Liberty now, you need to upgrade to Mitaka first.
2016-10-14 15:41:32.446 1379701 ERROR cinder Traceback (most recent call last):
2016-10-14 15:41:32.446 1379701 ERROR cinder   File "/usr/bin/cinder-api", line 10, in <module>
2016-10-14 15:41:32.446 1379701 ERROR cinder     sys.exit(main())
2016-10-14 15:41:32.446 1379701 ERROR cinder   File "/usr/lib/python2.7/site-packages/cinder/cmd/api.py", line 60, in main
2016-10-14 15:41:32.446 1379701 ERROR cinder     server = service.WSGIService('osapi_volume')
2016-10-14 15:41:32.446 1379701 ERROR cinder   File "/usr/lib/python2.7/site-packages/cinder/service.py", line 499, in __init__
2016-10-14 15:41:32.446 1379701 ERROR cinder     self.app = self.loader.load_app(name)
2016-10-14 15:41:32.446 1379701 ERROR cinder   File "/usr/lib/python2.7/site-packages/oslo_service/wsgi.py", line 353, in load_app
2016-10-14 15:41:32.446 1379701 ERROR cinder     return deploy.loadapp("config:%s" % self.config_path, name=name)
2016-10-14 15:41:32.446 1379701 ERROR cinder   File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 247, in loadapp
2016-10-14 15:41:32.446 1379701 ERROR cinder     return loadobj(APP, uri, name=name, **kw)
2016-10-14 15:41:32.446 1379701 ERROR cinder   File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 272, in loadobj
2016-10-14 15:41:32.446 1379701 ERROR cinder     return context.create()
2016-10-14 15:41:32.446 1379701 ERROR cinder   File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 710, in create
2016-10-14 15:41:32.446 1379701 ERROR cinder     return self.object_type.invoke(self)
2016-10-14 15:41:32.446 1379701 ERROR cinder   File "/usr/lib/python2.7/site-packages/paste/deploy/loadwsgi.py", line 144, in invoke
2016-10-14 15:41:32.446 1379701 ERROR cinder     **context.local_conf)
2016-10-14 15:41:32.446 1379701 ERROR cinder   File "/usr/lib/python2.7/site-packages/paste/deploy/util.py", line 55, in fix_call
2016-10-14 15:41:32.446 1379701 ERROR cinder     val = callable(*args, **kw)
2016-10-14 15:41:32.446 1379701 ERROR cinder   File "/usr/lib/python2.7/site-packages/cinder/api/__init__.py", line 34, in root_app_factory
2016-10-14 15:41:32.446 1379701 ERROR cinder     return paste.urlmap.urlmap_factory(loader, global_conf, **local_conf)
2016-10-14 15:41:32.446 1379701 ERROR cinder   File "/usr/lib/python2.7/site-packages/paste/urlmap.py", line 25, in urlmap_factory
2016-10-14 15:41:32.446 1379701 ERROR cinder     app = loader.get_app(app_name, global_conf=global_conf)
2016-10-14 15:41:32.446 1379701 ERROR ...
(more)
2016-10-16 09:01:26 -0500 answered a question how to remove compute host in controller ?? unfortunately i removed that host but name still available

That is possible, but make sure to take a backup of the database before you make any changes. Suppose, for example, I have two compute nodes, node1 and node2, and node1 is down, got corrupted, or was removed physically. Then we need to remove the node1 entry from the controller database.
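For example, a minimal backup of the nova database before touching it could look like this (a sketch assuming passwordless root access to MySQL, as used in the steps below; adjust credentials to your setup):

root@controller1:~# mysqldump -u root nova > nova-backup.sql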

Step 1: Check the available hypervisors with the command below.

root@controller1:~# nova hypervisor-list
+----+---------------------+
| ID | Hypervisor hostname |
+----+---------------------+
| 1  | node1               |
| 4  | node2               |
+----+---------------------+

Step 2: Check the nova services; you should see the service for node1 reported as down.

 root@controller1:~# nova service-list

Step 3: Log in to MySQL and select the nova database.

root@controller1:~#  mysql -u root
mysql>USE nova;

Run the following SQL command to check the hypervisor entries in the database:

mysql> SELECT id, created_at, updated_at, hypervisor_hostname FROM compute_nodes;

Step 4: After running the above command you should see the 'id' of host 'node1'. Suppose that id is 1. Delete the hypervisor from the nova database by running the SQL commands below.

mysql>DELETE FROM compute_node_stats WHERE compute_node_id='1';

mysql>DELETE FROM compute_nodes WHERE hypervisor_hostname='node1';

Step 5: In this step, we also need to delete the service entry for "node1", which is down as we checked in Step 2.

mysql>DELETE FROM services WHERE host='node1';

You are done. Check nova hypervisor-list and nova service-list to confirm that there is no longer an entry for the host that has been removed permanently.
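For example, re-run the same commands from Step 1 and Step 2 to verify:

root@controller1:~# nova hypervisor-list
root@controller1:~# nova service-list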

2016-10-16 04:08:41 -0500 answered a question Packstack - Preinstalling Puppet and discovering hosts' details[ ERROR ]

It seems there are some missing dependencies.

First of all, check the system's /etc/resolv.conf file and make sure you are able to resolve DNS so that packages can be pulled over the internet. Then run yum repolist and check whether all required repositories are present. The required repositories need to be installed and enabled on your server, like:

Extra Packages for Enterprise Linux (EPEL)

OpenStack Juno/Kilo Repository
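As a sketch (assuming CentOS 7; the exact RDO release package depends on the OpenStack version you are installing), enabling EPEL and checking the enabled repositories could look like:

yum install -y epel-release
yum repolist enabled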

Then run yum clean all and install some dependencies:

yum install -y puppet hiera openssh-clients tar nc rubygem-json erlang
systemctl start puppet
systemctl status puppet

Rerun the packstack installation, optionally with the answer file:

packstack --answer-file=./answer-file-previous.txt

Let us know the outcome. -Thanks

2016-10-16 03:45:29 -0500 commented question VM doesn't get IP

Just curious: is this the built-in CirrOS VM or a custom VM that you created? @steve

2016-10-16 03:38:25 -0500 commented question [Nova Scheduler]Is there a way to choose in which processor's core a VM will run?

I am not sure, but have you seen this Article?

Also read about the overcommit ratio in this Thread.

2016-10-13 06:56:15 -0500 answered a question Where can I find a compatibility list of hardware for OpenStack (any version) with CentOS?

You will need to check the Hardware Compatibility List for OpenStack (Red Hat).

I can see you mentioned CentOS, but they also ask you to check the Red Hat HCL. Check here.

2016-10-13 04:37:04 -0500 edited question Cinder log errors

I am continuously seeing this error in my cinder volume.log:

2016-10-11 09:08:11.927 30366 INFO cinder.volume.drivers.emc.scaleio [req-a13a712d-053f-45d1-b1ff-9cf12f6605ce - - - - -] Get domain by name response: {"message":"Not found","httpStatusCode":500,"errorCode":3}
2016-10-11 09:08:11.928 30366 ERROR cinder.volume.drivers.emc.scaleio [req-a13a712d-053f-45d1-b1ff-9cf12f6605ce - - - - -] Error getting domain id from name default: Not found.
2016-10-11 09:08:11.932 30366 ERROR oslo_service.service [req-a13a712d-053f-45d1-b1ff-9cf12f6605ce - - - - -] Error starting thread.
2016-10-11 09:08:11.932 30366 ERROR oslo_service.service Traceback (most recent call last):
2016-10-11 09:08:11.932 30366 ERROR oslo_service.service   File "/usr/lib/python2.7/site-packages/oslo_service/service.py", line 680, in run_service
2016-10-11 09:08:11.932 30366 ERROR oslo_service.service     service.start()
2016-10-11 09:08:11.932 30366 ERROR oslo_service.service   File "/usr/lib/python2.7/site-packages/cinder/service.py", line 150, in start
2016-10-11 09:08:11.932 30366 ERROR oslo_service.service     self.manager.init_host()
2016-10-11 09:08:11.932 30366 ERROR oslo_service.service   File "/usr/lib/python2.7/site-packages/cinder/volume/manager.py", line 436, in init_host
2016-10-11 09:08:11.932 30366 ERROR oslo_service.service     self.driver.init_capabilities()
2016-10-11 09:08:11.932 30366 ERROR oslo_service.service   File "/usr/lib/python2.7/site-packages/cinder/volume/driver.py", line 656, in init_capabilities
2016-10-11 09:08:11.932 30366 ERROR oslo_service.service     stats = self.get_volume_stats(True)
2016-10-11 09:08:11.932 30366 ERROR oslo_service.service   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/emc/scaleio.py", line 869, in get_volume_stats
2016-10-11 09:08:11.932 30366 ERROR oslo_service.service     self._update_volume_stats()
2016-10-11 09:08:11.932 30366 ERROR oslo_service.service   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/emc/scaleio.py", line 771, in _update_volume_stats
2016-10-11 09:08:11.932 30366 ERROR oslo_service.service     raise exception.VolumeBackendAPIException(data=msg)
2016-10-11 09:08:11.932 30366 ERROR oslo_service.service VolumeBackendAPIException: Bad or unexpected response from the storage volume backend API: Error getting domain id from name default: Not found.
2016-10-11 09:08:11.932 30366 ERROR oslo_service.service
2016-10-11 09:08:11.951 23705 INFO oslo_service.service [req-d3528c44-cc47-4cfa-bf77-a84e05d31235 - - - - -] Child 30366 exited with status 1
2016-10-11 09:08:11.967 30367 INFO cinder.service [-] Starting cinder-volume node (version 8.0.0)
2016-10-11 09:08:11.972 30367 INFO cinder.volume.manager [req-a2c26f6b-b9aa-4945-9594-c1f3dffc255b - - - - -] Starting volume driver ScaleIODriver (2.0)
2016-10-11 09:08:11.973 30367 WARNING cinder.volume.drivers.emc.scaleio [req-a2c26f6b-b9aa-4945-9594-c1f3dffc255b - - - - -] Verify certificate is not set, using default of False.
2016-10-11 09:08:11.975 30367 INFO cinder.volume.drivers.emc.scaleio [req-a2c26f6b-b9aa-4945-9594-c1f3dffc255b - - - - -] ScaleIO get domain id by name request: https://10.10.150.60:443/api/types/Domain/instances/getByName::default.
2016-10-11 09:08:11.976 30367 INFO cinder.volume.drivers.emc.scaleio [req-a2c26f6b-b9aa-4945-9594-c1f3dffc255b - - - - -] username: admin, verify_cert: False.
2016-10-11 09:08:12.005 30367 WARNING py.warnings [req-a2c26f6b-b9aa-4945-9594-c1f3dffc255b - - - - -] /usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py:797: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.org/en/latest/security.html InsecureRequestWarning ...
(more)
2016-10-13 04:21:54 -0500 commented question packstack installation stuck

Just want to ask: are you using licensed RHEL 7? If yes, could you try running packstack on it?

2016-09-22 08:36:04 -0500 commented question Keystone V3 in Kilo

Please check if this Solution mentioned in the thread works for you. -Thanks

2016-09-22 08:32:33 -0500 commented question Openstack Kilo 3 node setup: Error in Nova launch vm: The server has either erred or is incapable? is this with accessing glance image?

Please share the details of nova.conf on the controller as well as on the compute box. -Thanks

2016-09-17 10:05:47 -0500 edited question No valid host was found. There are not enough hosts available.

I'm a newbie to OpenStack, and troubleshooting it seems to be a dark art.

I built a Mitaka RDO one-node environment; everything was working, then it just stopped allowing me to create instances...

I did run out of storage, but after asking Google I now have a 2 TB NFS share on /var/lib/nova/instances, which seems to have solved that problem...

Unfortunately this did not fix it.

No valid host was found. There are not enough hosts available.
Code: 500
Details:
File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 392, in build_instances context, request_spec, filter_properties) File "/usr/lib/python2.7/site-packages/nova/conductor/manager.py", line 436, in _schedule_instances hosts = self.scheduler_client.select_destinations(context, spec_obj) File "/usr/lib/python2.7/site-packages/nova/scheduler/utils.py", line 372, in wrapped return func(*args, **kwargs) File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 51, in select_destinations return self.queryclient.select_destinations(context, spec_obj) File "/usr/lib/python2.7/site-packages/nova/scheduler/client/__init__.py", line 37, in __run_method return getattr(self.instance, __name)(*args, **kwargs) File "/usr/lib/python2.7/site-packages/nova/scheduler/client/query.py", line 32, in select_destinations return self.scheduler_rpcapi.select_destinations(context, spec_obj) File "/usr/lib/python2.7/site-packages/nova/scheduler/rpcapi.py", line 121, in select_destinations return cctxt.call(ctxt, 'select_destinations', **msg_args) File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 158, in call retry=self.retry) File "/usr/lib/python2.7/site-packages/oslo_messaging/transport.py", line 90, in _send timeout=timeout, retry=retry) File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 470, in send retry=retry) File "/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 461, in _send raise result
Created: 17 Sep 2016, 10:58 a.m.

-

[root@localhost ~(keystone_admin)]# nova service-list
+----+------------------+-----------------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host                  | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+-----------------------+----------+---------+-------+----------------------------+-----------------+
| 11 | nova-cert        | localhost.localdomain | internal | enabled | up    | 2016-09-17T12:12:06.000000 | -               |
| 12 | nova-consoleauth | localhost.localdomain | internal | enabled | up    | 2016-09-17T12:12:11.000000 | -               |
| 13 | nova-scheduler   | localhost.localdomain | internal | enabled | up    | 2016-09-17T12:12:05.000000 | -               |
| 14 | nova-conductor   | localhost.localdomain | internal | enabled | up    | 2016-09-17T12:12:10.000000 | -               |
| 15 | nova-compute     | localhost.localdomain | nova     | enabled | up    | 2016-09-17T12:12:06.000000 | -               |
+----+------------------+-----------------------+----------+-----

Not sure whether to start again?

Thanks

Andrew

2016-09-16 05:35:13 -0500 edited question Can not launch instance due to "Host has more disk space than database expected"

Hi,

I am creating and launching an instance via a HOT file (using the execute image). I am facing "Host has more disk space than database expected" and "HTTP exception thrown: Instance could not be found" issues. Please find the details below.

Could you please help? Thanks.

**glance image-list**
+--------------------------------------+-----------------+-------------+------------------+-------------+--------+
| ID                                   | Name            | Disk Format | Container Format | Size        | Status |
+--------------------------------------+-----------------+-------------+------------------+-------------+--------+
| be893b06-0cea-4f39-a9e8-c510e8e7dbec | test            | qcow2       | bare             | 1820589568  | active |
| 75e291a4-2c96-418a-bf30-37a70e08f29a | test2           | qcow2       | bare             | 720896      | active |
| 3b8fa3e3-b5ab-4e1a-a964-95a6901d5b1a | execute         | qcow2       | bare             | 13171163136 | active |
+--------------------------------------+-----------------+-------------+------------------+-------------+--------+

-

**nova hypervisor-list**
+----+---------------------+-------+---------+
| ID | Hypervisor hostname | State | Status  |
+----+---------------------+-------+---------+
| 1  | controller          | up    | enabled |
| 2  | node0-5             | up    | enabled |
| 3  | node0-7             | up    | enabled |
| 4  | node0-9             | up    | enabled |
| 5  | node0-11            | up    | enabled |
| 6  | node0-13            | up    | enabled |
| 7  | node0-15            | up    | enabled |
| 8  | node0-17            | up    | enabled |
| 9  | node0-19            | up    | enabled |
| 10 | node0-21            | up    | enabled |
| 11 | node0-23            | up    | enabled |
+----+---------------------+-------+---------+

-

**neutron agent-list**
+--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host        | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+
| 101392f1-9a80-4aa9-abeb-94d300c9d7d3 | DHCP agent         | network     | :-)   | True           | neutron-dhcp-agent        |
| 11299fa6-1e86-4910-a750-5d9df700d411 | Open vSwitch agent | node0-17    | :-)   | True           | neutron-openvswitch-agent |
| 3314610d-e914-49e8-93b9-c06fa2fc318c | Open vSwitch agent | controller  | :-)   | True           | neutron-openvswitch-agent |
| 42153cd0-f64d-4f9c-a15c-00e3f4587e7b | L3 agent           | network     | :-)   | True           | neutron-l3-agent          |
| 46325b01-97c5-4eb6-9ca3-5486997a9df8 | Open vSwitch agent | node0-13    | :-)   | True           | neutron-openvswitch-agent |
| 7a56784c-d8b5-4108-a543-23b6625e677d | Open vSwitch agent | node0-21    | :-)   | True           | neutron-openvswitch-agent |
| 7af98535-0b46-452a-a539-5d7fc41fb0ef | Open vSwitch agent | node0-9     | :-)   | True           | neutron-openvswitch-agent |
| 7b8a4484-8b35-4ad7-9f59-9d4d710c214c | Open vSwitch agent | node0-7     | :-)   | True           | neutron-openvswitch-agent |
| a9f197b2-74e9-466e-a7e3-c4a4ff426601 | Open vSwitch agent | node0-23    | :-)   | True           | neutron-openvswitch-agent |
| b577e390-6042-4ab8-80bf-fb56d59cac85 | Open vSwitch agent | node0-19    | :-)   | True           | neutron-openvswitch-agent |
| c89721ef-d7c4-49a3-99ea-6a520d2bbbb7 | Open vSwitch agent | node0-15    | :-)   | True           | neutron-openvswitch-agent |
| c90a20ad-f8f6-4f92-8228-1fb690584ce4 | Open vSwitch agent | node0-5     | :-)   | True           | neutron-openvswitch-agent |
| da2665db-745f-451a-a769-daa827354586 | Open vSwitch agent | network     | :-)   | True           | neutron-openvswitch-agent |
| e6c106b6-e2db-4e33-9de7-b5ed4a583c55 | Metadata agent     | network     | :-)   | True           | neutron-metadata-agent    |
| f3d8d59d-1815-422c-a6a9-36d501c8465c | Open vSwitch agent | node0-11    | :-)   | True           | neutron-openvswitch-agent |
+--------------------------------------+--------------------+-------------+-------+----------------+---------------------------+

-

**/var/log/nova/nova-api.log**

2016-09-15 08:46:20.304 26264 INFO nova.api.openstack.wsgi [req-9df02999-8c95-4593-a1c1-e37b4982961f None] HTTP exception thrown: Instance could not be found
2016-09-15 08:46:20.305 26264 INFO nova.osapi_compute.wsgi.server [req-9df02999-8c95-4593-a1c1-e37b4982961f None] 10.50.41.100 "GET /v2/508f068670a74efcbc621494535ace5d/servers/56ca6b63-87bb-41fb-a1b7-848dc54368a1 HTTP/1.1" status: 404 len: 267 time: 0.0311298
2016-09-15 08:46:20.311 26262 INFO nova.osapi_compute.wsgi.server [req-886669f2-2b44-4e8b-b657-3b2eda5b9c18 None] 10.50.41.100 "GET /v2/508f068670a74efcbc621494535ace5d/servers/detail?project_id=508f068670a74efcbc621494535ace5d HTTP/1.1" status: 200 len: 6357 time: 0.0647380
2016-09-15 08:46:20.425 26260 INFO nova.api.openstack.wsgi [req-9d5973e0-21c7-4bd0-84ef-7281aabbf20a None] HTTP exception thrown: Instance could not be found
2016-09-15 08:46:20.426 26260 INFO nova.osapi_compute.wsgi.server [req-9d5973e0-21c7-4bd0-84ef-7281aabbf20a None] 10.50.41.100 "GET /v2/508f068670a74efcbc621494535ace5d/servers/ef182996-bcc1-4ca5-a773-1c8f0f9ce8ab HTTP/1.1" status: 404 l

-

**/var/log/nova/nova-conductor.log**

2016-09-15 08:46:17.569 26068 ERROR nova.scheduler.utils [req-212e7d29-0cd5-4c13-941e-902f1e679bd9 None] [instance: 56ca6b63-87bb-41fb-a1b7-848dc54368a1] Error from last host: node0-21 (node node0-21): [u'Traceback (most recent call last):\n', u'  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2030, in _do_build_and_run_instance\n    filter_properties)\n', u'  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2161, in _build_and_run_instance\n    instance_uuid=instance.uuid, reason=six.text_type(e))\n', u'RescheduledException: Build of instance 56ca6b63-87bb-41fb-a1b7-848dc54368a1 was re-scheduled: 404 Not Found\n\nThe resource could not be found.\n\n   \n']
2016-09-15 08:46:17.570 26068 WARNING nova.scheduler.driver [req-212e7d29-0cd5-4c13-941e-902f1e679bd9 None] [instance: 56ca6b63-87bb-41fb-a1b7-848dc54368a1] NoValidHost exception with message: 'No valid host was found. Exceeded max scheduling attempts 3 for ...
(more)
2016-09-16 05:26:19 -0500 edited question nova list command process

I ran the nova --debug list command, and I see there are three curl requests.

DEBUG (session:248) REQ: curl -g -i -X GET http://172.30.0.3:5000/v2.0 -H "Accept: application/json" -H "User-Agent: keystoneauth1/2.4.0 python-requests/2.9.1 CPython/2.7.6"

RESP BODY: 404 Not Found

The resource could not be found.

DEBUG (session:248) REQ: curl -g -i -X GET http://172.30.0.3:8774/v2.1/ -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}ac35cbc92d0f5b71c3c6e83df4b2b25dd0508349"

DEBUG (session:248) REQ: curl -g -i -X GET http://172.30.0.3:8774/v2.1/201c10354c684177abe4d4f612ec4dc0/servers/detail -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-OpenStack-Nova-API-Version: 2.25" -H "X-Auth-Token: {SHA1}ac35cbc92d0f5b71c3c6e83df4b2b25dd0508349"

For the second request, I know it is for finding the right API version, so that is why we have three requests.

I have two questions:

  1. Who sends these three requests? Since it prints on the terminal, I think it may be the keystoneclient...?
  2. If I set --os-compute-api-version to 2.1 or 2.5, it still makes these three requests. This confuses me...

Any ideas appreciated

Thanks

2016-09-15 12:02:33 -0500 received badge  Nice Answer (source)
2016-09-05 04:53:10 -0500 commented question Connection to glance host http://controller:9292 failed

Please check whether these existing threads can resolve your problem:

Link1

Link2

2016-09-05 04:49:53 -0500 edited question Connection to glance host http://controller:9292 failed

When I try to launch an instance on my OpenStack, this error comes up:

Build of instance 687661c9-e010-4cf2-8b5b-e4388a7912a9 aborted: Connection to glance host http://controller:9292 failed: Error finding address for http://controller:9292/v1/images/440721e8-a13f-409b-a2f3-84f6763927a5: HTTPConnectionPool(host='controller',code:500
File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1926, in _do_build_and_run_instance filter_properties) File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2083, in _build_and_run_instance 'create.error', fault=e) File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 220, in __exit__ self.force_reraise() File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 196, in force_reraise six.reraise(self.type_, self.value, self.tb) File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2067, in _build_and_run_instance instance=instance) File "/usr/lib64/python2.7/contextlib.py", line 35, in __exit__ self.gen.throw(type, value, traceback) File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2244, in _build_resources reason=six.text_type(exc))

-

cat /var/log/glance/registry.log

2016-09-05 14:22:30.694 40910 INFO eventlet.wsgi.server [req-af018f59-3514-41ff-ae44-c2cea5a9709e ca82e5b1f25a447fa930da3f967bd468 b11e9d4379f641bfac833c7ec1d07ecd - - -] 127.0.0.1 - - [05/Sep/2016 14:22:30] "GET /images/440721e8-a13f-409b-a2f3-84f6763927a5 HTTP/1.1" 200 903 0.300862
2016-09-05 14:23:44.702 40911 INFO eventlet.wsgi.server [req-7da779ab-cdf8-4b6b-8936-4d4c7a8fdb18 ca82e5b1f25a447fa930da3f967bd468 b11e9d4379f641bfac833c7ec1d07ecd - - -] 127.0.0.1 - - [05/Sep/2016 14:23:44] "GET /images/detail?sort_key=created_at&sort_dir=desc&limit=1000 HTTP/1.1" 200 3059 0.301075
2016-09-05 14:23:54.638 40909 INFO eventlet.wsgi.server [req-d446408a-f54c-4d56-bfb6-d88cd15e3678 ca82e5b1f25a447fa930da3f967bd468 b11e9d4379f641bfac833c7ec1d07ecd - - -] 127.0.0.1 - - [05/Sep/2016 14:23:54] "GET /images/440721e8-a13f-409b-a2f3-84f6763927a5 HTTP/1.1" 200 903 0.559929

cat /var/log/nova/nova-compute.log

2016-09-05 15:24:14.544 26391 WARNING nova.virt.osinfo [req-8f5276f7-c77f-4be4-8622-3a2ab19ffd55 ca82e5b1f25a447fa930da3f967bd468 b11e9d4379f641bfac833c7ec1d07ecd - - -] Cannot find OS information - Reason: (No configuration information found for operating system Empty)
2016-09-05 15:24:14.796 26391 WARNING nova.virt.osinfo [req-8f5276f7-c77f-4be4-8622-3a2ab19ffd55 ca82e5b1f25a447fa930da3f967bd468 b11e9d4379f641bfac833c7ec1d07ecd - - -] Cannot find OS information - Reason: (No configuration information found for operating system Empty)
2016-09-05 15:24:14.798 26391 INFO nova.virt.libvirt.driver [req-8f5276f7-c77f-4be4-8622-3a2ab19ffd55 ca82e5b1f25a447fa930da3f967bd468 b11e9d4379f641bfac833c7ec1d07ecd - - -] [instance: 7a2a8e8f-ce69-4088-ab5b-af3a248778e6] Creating image
2016-09-05 15:24:14.806 26391 ERROR nova.image.glance [req-8f5276f7-c77f-4be4-8622-3a2ab19ffd55 ca82e5b1f25a447fa930da3f967bd468 b11e9d4379f641bfac833c7ec1d07ecd - - -] Error contacting glance server 'http://controller:9292' for 'data', done trying.
2016-09-05 15:24:14.806 26391 ERROR nova.image.glance Traceback (most recent call last):
2016-09-05 15:24:14.806 26391 ERROR nova.image.glance   File "/usr/lib/python2.7/site-packages/nova/image/glance.py", line 250, in call
2016-09-05 15:24:14.806 26391 ERROR nova.image.glance     result = getattr(client.images, method)(*args, **kwargs)
2016-09-05 15:24:14.806 26391 ERROR nova.image.glance   File "/usr/lib/python2.7/site-packages/glanceclient/v1/images.py", line 148, in data
2016-09-05 15:24:14.806 26391 ERROR nova.image.glance     % urlparse.quote(str(image_id)))
2016-09-05 15:24:14.806 26391 ERROR nova.image.glance   File "/usr/lib/python2.7/site-packages/glanceclient/common/http.py", line 275, in get
2016-09-05 15:24:14.806 26391 ERROR nova.image.glance     return ...
(more)
2016-09-05 00:45:58 -0500 edited question No valid host was found. There are not enough hosts available.

this is my /var/log/nova/nova-conductor.log:

2016-09-04 22:20:16.775 11514 INFO nova.service [-] Starting conductor node (version 13.1.0-1.el7)
2016-09-04 22:24:25.974 11510 WARNING nova.scheduler.utils [req-3b1cd184-426b-4345-b0f4-904c6a891ea0 ca82e5b1f25a447fa930da3f967bd468 b11e9d4379f641bfac833c7ec1d07ecd - - -] Failed to compute_task_build_instances: No valid host was found. There are not enough hosts available.
Traceback (most recent call last):

  File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 150, in inner
    return func(*args, **kwargs)

  File "/usr/lib/python2.7/site-packages/nova/scheduler/manager.py", line 104, in select_destinations
    dests = self.driver.select_destinations(ctxt, spec_obj)

  File "/usr/lib/python2.7/site-packages/nova/scheduler/filter_scheduler.py", line 74, in select_destinations
    raise exception.NoValidHost(reason=reason)

NoValidHost: No valid host was found. There are not enough hosts available.

2016-09-04 22:24:25.975 11510 WARNING nova.scheduler.utils [req-3b1cd184-426b-4345-b0f4-904c6a891ea0 ca82e5b1f25a447fa930da3f967bd468 b11e9d4379f641bfac833c7ec1d07ecd - - -] [instance: e2ae1198-7275-47ce-98dd-32d781bb7382] Setting instance to ERROR state.

and that is my /var/log/nova/nova-scheduler.log:

2016-09-04 22:44:20.801 11562 INFO nova.scheduler.host_manager [req-a6fe2d89-13e8-43b3-ba66-43b646483c27 - - - - -] Successfully synced instances from host 'compute1'.
2016-09-04 22:44:35.772 11562 INFO nova.scheduler.host_manager [req-b65e3555-c03d-4d57-8f15-5b918ef9bd46 - - - - -] Successfully synced instances from host 'compute2'.
2016-09-04 22:46:23.810 11562 INFO nova.scheduler.host_manager [req-be52ff65-b4fb-4c21-9963-02b8c481bc2f - - - - -] Successfully synced instances from host 'compute1'.
2016-09-04 22:46:39.440 11562 INFO nova.scheduler.host_manager [req-a02933dd-b993-49bc-a15a-b93edbc2a4cc - - - - -] Successfully synced instances from host 'compute2'.
2016-09-04 22:48:23.808 11562 INFO nova.scheduler.host_manager [req-ebf59fd3-e53c-4e44-b230-a4be16375548 - - - - -] Successfully synced instances from host 'compute1'.
2016-09-04 22:48:41.451 11562 INFO nova.scheduler.host_manager [req-ad5b58b7-603a-4845-91e3-9c1da18eba51 - - - - -] Successfully synced instances from host 'compute2'.
2016-09-04 22:50:28.803 11562 INFO nova.scheduler.host_manager [req-03f98b33-d20b-4d78-afde-7555eb321591 - - - - -] Successfully synced instances from host 'compute1'.
2016-09-04 22:50:43.443 11562 INFO nova.scheduler.host_manager [req-215f9f4b-61e2-4bad-8ce3-ba51bf57e3c5 - - - - -] Successfully synced instances from host 'compute2'.

-

nova service-list:
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host       | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-conductor   | controller | internal | enabled | up    | 2016-09-04T14:53:11.000000 | -               |
| 3  | nova-scheduler   | controller | internal | enabled | up    | 2016-09-04T14:53:08.000000 | -               |
| 4  | nova-consoleauth | controller | internal | enabled | up    | 2016-09-04T14:53:11.000000 | -               |
| 16 | nova-compute     | compute1   | zone1    | enabled | up    | 2016-09-04T14:53:04.000000 | -               |
| 17 | nova-compute     | compute2   | zone2    | enabled | up    | 2016-09-04T14:53:12.000000 | -               |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+

[root@controller ~]#  neutron agent-list
+----------------------+--------------------+------------+-------------------+-------+----------------+------------------------+
| id                   | agent_type         | host       | availability_zone | alive | admin_state_up | binary                 |
+----------------------+--------------------+------------+-------------------+-------+----------------+------------------------+
| 5b76221c-9b0d-457e-  | Linux bridge agent | controller |                   | :-)   | True           | neutron-linuxbridge-   |
| 974d-ddca228feb36    |                    |            |                   |       |                | agent                  |
| 6518485c-d910-46f3   | L3 agent           | controller | nova              | :-)   | True           | neutron-l3-agent       |
| -b67a-78dc00601433   |                    |            |                   |       |                |                        |
| 89d79bc7-c7a5-40f1-8 | Linux bridge agent | compute1   |                   | :-)   | True           | neutron-linuxbridge-   |
| ac0-f648fc39872f     |                    |            |                   |       |                | agent                  |
| c02049ba-129c-4548-8 | Linux bridge agent | compute2   |                   | :-)   | True           | neutron-linuxbridge-   |
| d32-fd3ca2900731     |                    |            |                   |       |                | agent                  |
| cfef2c97-ee09-4c64-a | DHCP agent         | controller | nova              | :-)   | True           | neutron-dhcp-agent     |
| 042-7f1673ee85df     |                    |            |                   |       |                |                        |
| d6e0d782-8a80-411b-a | Metadata agent     | controller |                   | :-)   | True           | neutron-metadata-agent |
| 098-70becdaaa69e     |                    |            |                   |       |                |                        |
+----------------------+--------------------+------------+-------------------+-------+----------------+------------------------+

cat /var/log/nova/nova-compute.log

2016-09-04 23:43:46.585 9017 ERROR nova.compute.manager [req-20e25c15-cc8d-412a-b516-cb9e0c045a6f - - - - -] No compute node record for host compute1
2016-09-04 23:43:46.590 9017 INFO nova.compute.resource_tracker [req-20e25c15-cc8d-412a-b516-cb9e0c045a6f - - - - -] Auditing locally available compute resources for node compute1
2016-09-04 23:43:46.858 9017 ERROR nova.compute.manager [req-20e25c15-cc8d-412a-b516-cb9e0c045a6f - - - - -] Error updating resources for node compute1.
2016-09-04 23:43:46.858 9017 ERROR nova.compute.manager Traceback (most recent call last):
2016-09-04 23:43:46.858 9017 ERROR nova.compute.manager   File "/usr ...
(more)
2016-09-02 23:48:48 -0500 edited question nova image-create fails for taking snapshot

OpenStack version: Kilo

OS: RHEL

The issue occurs with both the command line and the dashboard.

#  nova --debug image-create --poll test123 test123snap

 DEBUG (session:195) REQ: curl -g -i -X GET http://auth:5000/v3 -H "Accept: application/json" -H "User-Agent: python-keystoneclient"
INFO (connectionpool:203) Starting new HTTP connection (1): auth
DEBUG (connectionpool:383) "GET /v3 HTTP/1.1" 200 245
DEBUG (session:224) RESP: [200] content-length: 245 vary: X-Auth-Token keep-alive: timeout=5, max=100 server: Apache/2.4.6 (Red Hat Enterprise Linux) mod_wsgi/3.4 Python/2.7.5 connection: Keep-Alive date: Fri, 02 Sep 2016 10:25:37 GMT content-type: application/json x-openstack-request-id: req-18f190bf-932a-4e0e-8a8c-b6376240b9db
RESP BODY: {"version": {"status": "stable", "updated": "2015-03-30T00:00:00Z", "media-types": [{"base": "application/json", "type": "application/vnd.openstack.identity-v3+json"}], "id": "v3.4", "links": [{"href": "http://auth:5000/v3/", "rel": "self"}]}}

DEBUG (base:171) Making authentication request to http://auth:5000/v3/auth/tokens
DEBUG (connectionpool:383) "POST /v3/auth/tokens HTTP/1.1" 201 13122
DEBUG (session:195) REQ: curl -g -i -X GET http://proxy:8774/v2/513056986b69432880c9fa829983b46d/servers?name=test123 -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}4b97bd307851c6602ccfc4594cbe69b4119ebef1"
INFO (connectionpool:203) Starting new HTTP connection (1): proxy
DEBUG (connectionpool:383) "GET /v2/513056986b69432880c9fa829983b46d/servers?name=test123 HTTP/1.1" 200 386
DEBUG (session:224) RESP: [200] date: Fri, 02 Sep 2016 10:25:37 GMT connection: keep-alive content-type: application/json content-length: 386 x-compute-request-id: req-a394a3d4-574c-47f0-b242-0dc5bfec34dc
RESP BODY: {"servers": [{"id": "92c2fd37-8ea2-46a5-ac4a-bf6b2a82f5e6", "links": [{"href": "http://proxy:8774/v2/513056986b69432880c9fa829983b46d/servers/92c2fd37-8ea2-46a5-ac4a-bf6b2a82f5e6", "rel": "self"}, {"href": "http://proxy:8774/513056986b69432880c9fa829983b46d/servers/92c2fd37-8ea2-46a5-ac4a-bf6b2a82f5e6", "rel": "bookmark"}], "name": "test123"}]}

DEBUG (session:195) REQ: curl -g -i -X GET http://proxy:8774/v2/513056986b69432880c9fa829983b46d/servers/92c2fd37-8ea2-46a5-ac4a-bf6b2a82f5e6 -H "User-Agent: python-novaclient" -H "Accept: application/json" -H "X-Auth-Token: {SHA1}4b97bd307851c6602ccfc4594cbe69b4119ebef1"
DEBUG (connectionpool:383) "GET /v2/513056986b69432880c9fa829983b46d/servers/92c2fd37-8ea2-46a5-ac4a-bf6b2a82f5e6 HTTP/1.1" 200 1743
DEBUG (session:224) RESP: [200] date: Fri, 02 Sep 2016 10:25:38 GMT connection: keep-alive content-type: application/json content-length: 1743 x-compute-request-id: req-e54b5335-cdae-487e-b000-c770097bfa39
RESP BODY: {"server": {"status": "ACTIVE", "updated": "2016-09-02T10:19:31Z", "hostId": "34ad73e9f940b02c9aee24b832cfd4112536e7217e4590e7c24357c1", "addresses": {"ext-relay": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:b3:1a:af", "version": 4, "addr": "192.0.0.22", "OS-EXT-IPS:type": "fixed"}], "pub-relay": [{"OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:34:18:c4", "version": 4, "addr": "172.16.0.8", "OS-EXT-IPS:type": "fixed"}]}, "links": [{"href": "http://proxy:8774/v2/513056986b69432880c9fa829983b46d/servers/92c2fd37-8ea2-46a5-ac4a-bf6b2a82f5e6", "rel": "self"}, {"href": "http://proxy:8774/513056986b69432880c9fa829983b46d/servers/92c2fd37-8ea2-46a5-ac4a-bf6b2a82f5e6", "rel": "bookmark"}], "key_name": "dalauth", "image": {"id": "9722a9aa-d8c2-4dfa-8e2f-57a5a2124a2d", "links": [{"href": "http://proxy:8774/513056986b69432880c9fa829983b46d/images/9722a9aa-d8c2-4dfa-8e2f-57a5a2124a2d", "rel": "bookmark"}]}, "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-SRV-USG:launched_at": "2016-09-01T10:58:28.000000", "flavor": {"id": "1", "links": [{"href": "http://proxy:8774/513056986b69432880c9fa829983b46d/flavors/1", "rel": "bookmark"}]}, "id": "92c2fd37-8ea2-46a5-ac4a-bf6b2a82f5e6", "security_groups": [{"name": "default"}, {"name": "default"}], "OS-SRV-USG:terminated_at": null, "OS-EXT-AZ:availability_zone": "dal10", "user_id": "116af8b9592f48e68b7e992c7784c39c", "name": "test123", "created": "2016-09-01T10:58:12Z", "tenant_id": "513056986b69432880c9fa829983b46d", "OS-DCF:diskConfig": "AUTO", "os-extended-volumes:volumes_attached": [], "accessIPv4": "", "accessIPv6": "", "progress": 0, "OS-EXT-STS:power_state": 1, "config_drive": "", "metadata": {}}}

DEBUG (session:195) REQ: curl -g -i -X POST http://proxy:8774/v2/513056986b69432880c9fa829983b46d/servers/92c2fd37-8ea2-46a5-ac4a-bf6b2a82f5e6/action -H "User-Agent: python-novaclient" -H "Content-Type: application/json" -H "Accept: application/json ...
(more)
2016-09-02 23:42:51 -0500 answered a question How enable & use LBaaS v2.0 instead of LBaaS v1.0 in Packstack?

It seems v1 is still the default in Mitaka. Below are the suggested changes to get LBaaS v2.0 running for Packstack, but before making any changes, please take a backup of the files you are going to modify.

In /etc/neutron/neutron.conf, comment out the existing service_plugins line and add the line below.

service_plugins = neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2,neutron.services.l3_router.l3_router_plugin.L3RouterPlugin

In /etc/neutron/neutron_lbaas.conf (if this file is not there, create it), comment out the existing service_provider line and add the line below.

service_provider = LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

In /etc/neutron/lbaas_agent.ini, comment out the existing device_driver line and add the line below.

device_driver = neutron_lbaas.drivers.haproxy.namespace_driver.HaproxyNSDriver

Then, you need to run the commands below.

# neutron-db-manage --service lbaas upgrade head
# systemctl disable neutron-lbaas-agent.service
# systemctl stop neutron-lbaas-agent.service
# systemctl restart neutron-server.service
# systemctl enable neutron-lbaasv2-agent.service
# systemctl start neutron-lbaasv2-agent.service

Verify that lbaasv2 agent is running.

ps -ef | grep 'neutron-lbaasv2'  |grep -v grep

Then you need to create a load balancer using LBaaS v2, a listener on that load balancer, and a pool under that listener, as sketched below.
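A rough sketch of that sequence with the LBaaS v2 neutron CLI; lb1, listener1, pool1 and private-subnet are placeholder names, and the exact options can vary by release:

neutron lbaas-loadbalancer-create --name lb1 private-subnet
neutron lbaas-listener-create --name listener1 --loadbalancer lb1 --protocol HTTP --protocol-port 80
neutron lbaas-pool-create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP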

Reference: Please refer to this article

2016-09-02 22:52:22 -0500 commented question How enable & use LBaaS v2.0 instead of LBaaS v1.0 in Packstack?

But the documentation mentions that LBaaS v1 is deprecated in Liberty, so ideally v2 should be the default in Mitaka.

-> LBaaS v1: introduced in Juno (deprecated in Liberty)

2016-09-02 01:41:01 -0500 edited question no communication between internal and external network
my ext network  => 192.168.52.0/24  gw 192.168.52.2
router gateway ip => 192.168.52.3

internal network => 192.168.100.0/24 gw 192.168.100.1

The error is that the router gateway is down towards the external network (it's a Juno three-node setup). I am able to ping 192.168.52.3 from the outside network, but not able to ping 192.168.52.2; there is no communication between the two networks.

I executed route from the router namespace; it shows:

Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.52.2    0.0.0.0         UG    0      0        0 qg-10f01db4-62
192.168.52.0    0.0.0.0         255.255.255.0   U     0      0        0 qg-10f01db4-62
192.168.100.0   0.0.0.0         255.255.255.0   U     0      0        0 qr-5382da6d-a1

The thing is, from within the router namespace I am not able to ping 192.168.52.2, but I am able to ping 192.168.52.3 and all the 192.168.100.x IPs.

Can anyone tell me how to solve this issue? Thanks.

2016-09-02 01:39:49 -0500 commented question nova-docker setup in devstack

No module named docker; could you check this link?

2016-09-01 23:39:41 -0500 commented question Cannot launch instance with cinder snapshot!!

There can be issues like a timeout or a flavor mismatch. Could you please share the steps you followed for the snapshot and for booting the instance? My concern: are you passing the "--image-id" parameter with the "cinder create" command?

Cinder makes a volume bootable when the --image-id parameter is passed.
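For reference, a minimal sketch with the Kilo-era cinder CLI; the image ID, volume name and size (10 GB) are placeholders:

cinder create --image-id <image-id> --display-name bootable-vol 10
cinder show <volume-id>   # the 'bootable' field should show true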

2016-09-01 15:12:51 -0500 commented answer Converted to raw, but format is now qcow2 ????

I don't think it should have any problems with block live migration.

It would be better if you open another thread with full details, i.e. what steps you followed and what error message you are facing, along with logs. -Thanks

2016-09-01 14:43:31 -0500 answered a question Converted to raw, but format is now qcow2 ????

So, the error message you reported mostly comes when you download a ready cloud image with the .img extension and convert it to qcow2 when it is already in that format. Here are the details of OpenStack Disk and Container formats.

For Example:

# qemu-img info RHEL64.img
image: RHEL64.img
file format: qcow2
virtual size: 5.0G (5368709120 bytes)
disk size: 136K
cluster_size: 65536

You can directly upload that ready cloud image file to Glance; there is no need to convert it. You can refer to this documentation.

#glance image-create --name='RHEL64-TEST' --is-public=true --container-format=bare --disk-format=qcow2 < RHEL64.img

Please let us know if this solution works.

2016-09-01 10:49:38 -0500 answered a question How to launch an aws ec2 instance in openstack's auto-scaling group?? (How to create a hybrid cloud?)

To start with, you can check the pointers below.

There is support for an ec2-api for OpenStack; it provides AWS EC2 and VPC API support as a standalone service for OpenStack. Check here: ec2-api

Next, you should also check the euca2ools client (the Eucalyptus client API tools); it is compatible with Amazon Web Services APIs such as EC2 and IAM, and can also be used with OpenStack.

Please also check this thread

2016-09-01 08:23:04 -0500 answered a question Neutron Visualization

Are you looking for Skydive - Real-time Network Analyser?

And here is the summit presentation video

2016-09-01 07:17:03 -0500 commented question Migration attached volume between cinder pools

Have you checked whether "attached" volume migration in Cinder is supported yet?

2016-09-01 07:11:50 -0500 commented question nova-docker setup in devstack

Two things (but I am not sure): 1) functions-common:is_service_enabled:2055 is returning 0. Make sure the "nova compute service" is running.

2) Try adding the 'stack' user to the 'docker' group and then run it again (see the sketch below). Check the ownership of the files at /opt/stack/devstack/lib/nova_plugins/hypervisor-docker
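For point 2, a quick sketch (assuming the DevStack user is named 'stack'; log out and back in for the group change to take effect):

sudo usermod -aG docker stack
ls -l /opt/stack/devstack/lib/nova_plugins/hypervisor-docker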

2016-09-01 06:24:58 -0500 commented question nova-docker setup in devstack

Please follow this link. It also gives details on local.conf.

After installing, if you face any more issues, please reply to the thread.

2016-09-01 00:32:12 -0500 commented question Converted to raw, but format is now qcow2 ????

From where did you download the img file? Share the exact command steps you followed to convert the image and then to upload it to Glance.

Also, please share the output of the "qemu-img info" command:

#qemu-img info xyz.img
2016-08-31 13:31:40 -0500 edited question Why could not find Zone during floating ip assignment for instance?
2016-08-31 05:50:02.671 6381 INFO designate.policy [req-683fe344-3c5c-46d1-83e3-edb573965077 - - - - -] Policy check succeeded for rule 'all_tenants' on target {}
2016-08-31 05:50:02.672 6381 INFO designate.central.rpcapi [req-683fe344-3c5c-46d1-83e3-edb573965077 - - - - -] get_zone: Calling central's get_zone.
2016-08-31 05:50:02.685 6381 ERROR oslo_messaging.notify.dispatcher [req-683fe344-3c5c-46d1-83e3-edb573965077 - - - - -] Exception during message handling
2016-08-31 05:50:02.685 6381 ERROR oslo_messaging.notify.dispatcher Traceback (most recent call last):
2016-08-31 05:50:02.685 6381 ERROR oslo_messaging.notify.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/notify/dispatcher.py", line 86, in _dispatch_and_handle_error
2016-08-31 05:50:02.685 6381 ERROR oslo_messaging.notify.dispatcher     return self._dispatch(incoming)
2016-08-31 05:50:02.685 6381 ERROR oslo_messaging.notify.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/notify/dispatcher.py", line 122, in _dispatch
2016-08-31 05:50:02.685 6381 ERROR oslo_messaging.notify.dispatcher     ret = self._exec_callback(callback, filtered_messages)
2016-08-31 05:50:02.685 6381 ERROR oslo_messaging.notify.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/notify/dispatcher.py", line 169, in _exec_callback
2016-08-31 05:50:02.685 6381 ERROR oslo_messaging.notify.dispatcher     messages[0]["metadata"])
2016-08-31 05:50:02.685 6381 ERROR oslo_messaging.notify.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/notify/dispatcher.py", line 129, in _exec_callback
2016-08-31 05:50:02.685 6381 ERROR oslo_messaging.notify.dispatcher     ret = callback(*args)
2016-08-31 05:50:02.685 6381 ERROR oslo_messaging.notify.dispatcher   File "/usr/lib/python2.7/site-packages/designate/sink/service.py", line 116, in info
2016-08-31 05:50:02.685 6381 ERROR oslo_messaging.notify.dispatcher     handler.process_notification(context, event_type, payload)
2016-08-31 05:50:02.685 6381 ERROR oslo_messaging.notify.dispatcher   File "/usr/lib/python2.7/site-packages/designate/notification_handler/neutron.py", line 76, in process_notification
2016-08-31 05:50:02.685 6381 ERROR oslo_messaging.notify.dispatcher     resource_type='floatingip')
2016-08-31 05:50:02.685 6381 ERROR oslo_messaging.notify.dispatcher   File "/usr/lib/python2.7/site-packages/designate/notification_handler/base.py", line 127, in _create
2016-08-31 05:50:02.685 6381 ERROR oslo_messaging.notify.dispatcher     zone = self.get_zone(zone_id)
2016-08-31 05:50:02.685 6381 ERROR oslo_messaging.notify.dispatcher   File "/usr/lib/python2.7/site-packages/designate/notification_handler/base.py", line 65, in get_zone
2016-08-31 05:50:02.685 6381 ERROR oslo_messaging.notify.dispatcher     return self.central_api.get_zone(context, zone_id)
2016-08-31 05:50:02.685 6381 ERROR oslo_messaging.notify.dispatcher   File "/usr/lib/python2.7/site-packages/designate/central/rpcapi.py", line 155, in get_zone
2016-08-31 05:50:02.685 6381 ERROR oslo_messaging.notify.dispatcher     return self.client.call(context, 'get_zone', zone_id=zone_id)
2016-08-31 05:50:02.685 6381 ERROR oslo_messaging.notify.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 413, in call
2016-08-31 05:50:02.685 6381 ERROR oslo_messaging.notify.dispatcher     return self.prepare().call(ctxt, method, **kwargs)
2016-08-31 05:50:02.685 6381 ERROR oslo_messaging.notify.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/client.py", line 158, in call
2016-08-31 05:50:02.685 6381 ERROR oslo_messaging.notify.dispatcher     retry=self.retry)
2016-08-31 05:50:02.685 6381 ERROR oslo_messaging.notify.dispatcher   File "/usr ...
(more)
2016-08-31 13:27:28 -0500 edited question Why could not find Zone during floating ip assignment?
2016-08-31 13:25:54 -0500 edited question Cannot launch instance with cinder snapshot!!

I can create an instance snapshot using Cinder, but when I try launching an instance with that image or volume it fails. I get the error below.

**Unable to clear device ID for port 'None'**:
"*Booting with volume None at /dev/vda
2016-08-31 19:48:00.259 1050 INFO nova.compute.resource_tracker [req-95c282be-dd67-4a79-abb9-9943c66304b1 - - - - -] Auditing locally available compute resources for node compute2.local
2016-08-31 19:48:05.079 1050 INFO nova.compute.resource_tracker [req-95c282be-dd67-4a79-abb9-9943c66304b1 - - - - -] Total usable vcpus: 4, total allocated vcpus: 1
2016-08-31 19:48:05.080 1050 INFO nova.compute.resource_tracker [req-95c282be-dd67-4a79-abb9-9943c66304b1 - - - - -] Final resource view: name=compute2.local phys_ram=7817MB used_ram=13312MB phys_disk=17GB used_disk=25GB total_vcpus=4 used_vcpus=1 pci_stats=<nova.pci.stats.PciDeviceStats object at 0x40760d0>
2016-08-31 19:48:05.182 1050 INFO nova.scheduler.client.report [req-95c282be-dd67-4a79-abb9-9943c66304b1 - - - - -] Compute_service record updated for ('compute2.local', 'compute2.local')
2016-08-31 19:48:05.183 1050 INFO nova.compute.resource_tracker [req-95c282be-dd67-4a79-abb9-9943c66304b1 - - - - -] Compute_service record updated for compute2.local:compute2.local
2016-08-31 19:49:00.261 1050 INFO nova.compute.resource_tracker [req-95c282be-dd67-4a79-abb9-9943c66304b1 - - - - -] Auditing locally available compute resources for node compute2.local
2016-08-31 19:49:04.764 1050 INFO nova.compute.resource_tracker [req-95c282be-dd67-4a79-abb9-9943c66304b1 - - - - -] Total usable vcpus: 4, total allocated vcpus: 1
2016-08-31 19:49:04.767 1050 INFO nova.compute.resource_tracker [req-95c282be-dd67-4a79-abb9-9943c66304b1 - - - - -] Final resource view: name=compute2.local phys_ram=7817MB used_ram=13312MB phys_disk=17GB used_disk=25GB total_vcpus=4 used_vcpus=1 pci_stats=<nova.pci.stats.PciDeviceStats object at 0x40760d0>
2016-08-31 19:49:04.844 1050 INFO nova.scheduler.client.report [req-95c282be-dd67-4a79-abb9-9943c66304b1 - - - - -] Compute_service record updated for ('compute2.local', 'compute2.local')
2016-08-31 19:49:04.846 1050 INFO nova.compute.resource_tracker [req-95c282be-dd67-4a79-abb9-9943c66304b1 - - - - -] Compute_service record updated for compute2.local:compute2.local
2016-08-31 19:49:30.476 1050 WARNING nova.compute.manager [req-95c282be-dd67-4a79-abb9-9943c66304b1 - - - - -] While synchronizing instance power states, found 11 instances in the database and 10 instances on the hypervisor.
2016-08-31 19:50:01.259 1050 INFO nova.compute.resource_tracker [req-95c282be-dd67-4a79-abb9-9943c66304b1 - - - - -] Auditing locally available compute resources for node compute2.local
2016-08-31 19:50:06.639 1050 INFO nova.compute.resource_tracker [req-95c282be-dd67-4a79-abb9-9943c66304b1 - - - - -] Total usable vcpus: 4, total allocated vcpus: 1
2016-08-31 19:50:06.640 1050 INFO nova.compute.resource_tracker [req-95c282be-dd67-4a79-abb9-9943c66304b1 - - - - -] Final resource view: name=compute2.local phys_ram=7817MB used_ram=13312MB phys_disk=17GB used_disk=25GB total_vcpus=4 used_vcpus=1 pci_stats=<nova.pci.stats.PciDeviceStats object at 0x40760d0>
2016-08-31 19:50:06.719 1050 INFO nova.scheduler.client.report [req-95c282be-dd67-4a79-abb9-9943c66304b1 - - - - -] Compute_service record updated for ('compute2.local', 'compute2.local')
2016-08-31 19:50:06.720 1050 INFO nova.compute.resource_tracker [req-95c282be-dd67-4a79-abb9-9943c66304b1 - - - - -] Compute_service record updated for compute2.local:compute2.local
2016-08-31 19:50:59.850 1050 ERROR nova.compute.manager [req-00236f05-c3db-4a9e-8f4b-c0fea04a51f4 0d6bc3d3319644979f324bf0245e6134 0d19360f16a545ed9d2f6762dabde3a5 - - -] [instance: 82a055d2-5ad9-4a7d-8414-db88140bf31f] Instance failed block device setup
2016-08-31 19:50:59.850 1050 TRACE nova.compute.manager [instance: 82a055d2-5ad9-4a7d-8414-db88140bf31f] Traceback (most recent call last):
2016-08-31 19:50:59.850 1050 TRACE nova.compute.manager [instance: 82a055d2-5ad9-4a7d-8414-db88140bf31f]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1974, in _prep_block_device
2016-08-31 19:50:59.850 1050 TRACE nova.compute.manager [instance: 82a055d2-5ad9-4a7d-8414-db88140bf31f]     do_check_attach=do_check_attach)
2016-08-31 19:50:59.850 1050 TRACE nova.compute ...
(more)
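For comparison, this is a minimal sketch of the usual volume-from-snapshot boot flow with the cinder/nova CLIs; the IDs, flavor, and network below are placeholders, not values from this environment (older cinder clients may want --display-name instead of --name):

    # Create a bootable volume from the Cinder snapshot (IDs are placeholders)
    cinder create --snapshot-id <snapshot-id> --name boot-vol 5

    # Boot from that volume once it reports "available"
    nova boot --flavor m1.small --nic net-id=<net-id> \
      --block-device source=volume,id=<volume-id>,dest=volume,bootindex=0 \
      test_from_snapshot

    # Or let Nova build the volume from the snapshot in one step
    nova boot --flavor m1.small --nic net-id=<net-id> \
      --block-device source=snapshot,id=<snapshot-id>,dest=volume,size=5,bootindex=0 \
      test_from_snapshot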
2016-08-31 09:29:03 -0500 answered a question How do i add a new resource in nova api ?

So I think this is still a proposed spec for modeling resources as objects, but the status in the blueprint shows as "Deferred".

You can also check this link.

2016-08-30 15:50:19 -0500 edited question Kilo/CentOs7 Snapshot creation fails with a glanceglient exception on the nova compute node

I am unable to create snapshots of instances on certain nova compute nodes. I get an error in the nova-compute log on the hypervisor complaining about an attribute error; it appears that the glanceclient is having trouble handling the message. Looking online I noticed a reported bug related to the urllib3 version and the glanceclient, please see https://bugs.launchpad.net/glance0/+bug/1476770.

I downgraded urllib3 to version 1.10.2, and my python-glanceclient is 0.17.0, but the exception remains. I have another hypervisor, built a few months earlier with the same versions, that creates snapshots with no exception. Any help would be greatly appreciated. I have attached the error below.
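As a quick sanity check (generic commands, not specific to this deployment), this is how I confirm which urllib3 the nova-compute process actually imports:

    # Versions pip believes are installed on the compute node
    pip show urllib3 python-glanceclient

    # Version the Python interpreter actually imports (a stale .pyc or a second
    # site-packages copy can make this differ from what pip reports)
    python -c "import urllib3; print(urllib3.__version__)"

    # Restart nova-compute after any downgrade so the new module is loaded
    systemctl restart openstack-nova-compute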

Thank you in advance,

2016-08-30 15:28:49.009 31521 TRACE oslo_messaging.rpc.dispatcher Traceback (most recent call last):
2016-08-30 15:28:49.009 31521 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 142, in _dispatch_and_reply
2016-08-30 15:28:49.009 31521 TRACE oslo_messaging.rpc.dispatcher     executor_callback))
2016-08-30 15:28:49.009 31521 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 186, in _dispatch
2016-08-30 15:28:49.009 31521 TRACE oslo_messaging.rpc.dispatcher     executor_callback)
2016-08-30 15:28:49.009 31521 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/dispatcher.py", line 130, in _do_dispatch
2016-08-30 15:28:49.009 31521 TRACE oslo_messaging.rpc.dispatcher     result = func(ctxt, **new_args)
2016-08-30 15:28:49.009 31521 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 6933, in snapshot_instance
2016-08-30 15:28:49.009 31521 TRACE oslo_messaging.rpc.dispatcher     return self.manager.snapshot_instance(ctxt, image_id, instance)
2016-08-30 15:28:49.009 31521 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/exception.py", line 88, in wrapped
2016-08-30 15:28:49.009 31521 TRACE oslo_messaging.rpc.dispatcher     payload)
2016-08-30 15:28:49.009 31521 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
2016-08-30 15:28:49.009 31521 TRACE oslo_messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2016-08-30 15:28:49.009 31521 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/exception.py", line 71, in wrapped
2016-08-30 15:28:49.009 31521 TRACE oslo_messaging.rpc.dispatcher     return f(self, context, *args, **kw)
2016-08-30 15:28:49.009 31521 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 333, in decorated_function
2016-08-30 15:28:49.009 31521 TRACE oslo_messaging.rpc.dispatcher     LOG.warning(msg, e, instance_uuid=instance_uuid)
2016-08-30 15:28:49.009 31521 TRACE oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/oslo_utils/excutils.py", line 85, in __exit__
2016-08-30 15:28:49.009 31521 TRACE oslo_messaging.rpc.dispatcher     six.reraise(self.type_, self.value, self.tb)
2016-08-30 15:28:49.009 31521 TRACE oslo_messaging.rpc.dispatcher ...
(more)
2016-08-30 11:58:39 -0500 edited answer Cinder down Failed to initialize driver. lvm

I figured out the cause of the issue: vgs was reporting an error about a missing volume:

 Missing volume XXXXXXXX
  VG             #PV #LV #SN Attr   VSize   VFree 
  cinder-volumes   1   1   0 wz--n- 30.26g 27.12g

Because vgs printed that error line at the top of its output, Cinder could not parse the vgs output correctly, and that is why the LVM driver failed to load.
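For anyone hitting the same thing, a rough sketch of how to confirm and clear the stale reference (standard LVM commands, volume group name taken from the output above; vgreduce is destructive for LVs that lived on the missing device, so back up first):

    # Any warning printed above the vgs table is what confuses the Cinder LVM driver
    vgs cinder-volumes
    vgck cinder-volumes

    # If the warning refers to a missing physical volume, drop the stale references
    vgreduce --removemissing cinder-volumes

    # Restart the volume service so the driver re-reads clean vgs output
    systemctl restart openstack-cinder-volume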

2016-08-30 10:30:52 -0500 edited question How to configure transparent mode for load balancer in mitaka?

I have created an LB with autoscaling on a subnet. The LB is configured to receive and balance TCP packets on port 3306 (the MySQL port). From another subnet, I am sending TCP packets to the LB IP address on port 3306. The LB is forwarding packets to the pool members, but the source IP address seen on the pool member machines is the LB's IP address.

Here is my haproxy.cfg: '10.0.3.212' is my LB VIP. Please let me know what configuration I need so that the source IP address is not modified (a note on the usual kernel/iptables prerequisites follows the config).

global
        log /dev/log    local0
        log /dev/log    local1 notice
        chroot /var/lib/haproxy
        stats socket /run/haproxy/admin.sock mode 660 level admin
        stats timeout 30s


defaults
        log     global
        mode    tcp
        option  tcplog

frontend tcpserver
        bind    10.0.3.212:3306 transparent
        mode tcp
        default_backend nodes

backend nodes
        mode    tcp
        option tcplog
        option ssl-hello-chk
        option http-keep-alive
        balance roundrobin
        source 0.0.0.0 usesrc clientip
        stick-table type ip size 20k expire 15m
        stick on src
        server web01 10.0.3.213:3306 check
        server web02 10.0.3.215:3306 check
        server web03 10.0.3.216:3306 check
        server web04 10.0.3.217:3306 check
        server web05 10.0.3.218:3306 check
        server web06 10.0.3.219:3306 check
        server web07 10.0.3.220:3306 check
        server web08 10.0.3.222:3306 check
        server web09 10.0.3.223:3306 check
        server web010 10.0.3.224:3306 check
        timeout server 1m
        timeout connect 1m
        timeout client 5m
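For reference, `usesrc clientip` only takes effect when the LB host is prepared for TPROXY-style transparent proxying; the usual prerequisites look roughly like the sketch below (generic HAProxy/Linux setup, adjust for your LB host), and the pool members must route return traffic back through the load balancer:

    # Let HAProxy bind outgoing connections to non-local (client) addresses
    sysctl -w net.ipv4.ip_nonlocal_bind=1

    # Steer packets that belong to proxied sockets back into the local stack
    iptables -t mangle -N DIVERT
    iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
    iptables -t mangle -A DIVERT -j MARK --set-mark 1
    iptables -t mangle -A DIVERT -j ACCEPT
    ip rule add fwmark 1 lookup 100
    ip route add local 0.0.0.0/0 dev lo table 100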

Can anyone please help with this?

2016-08-29 05:01:13 -0500 edited question trove instance creation fail

I'm deploying a three-node architecture with OpenStack Mitaka and having trouble with Trove. When I create a Trove instance, its status remains in BUILD and doesn't proceed from that point. After a long period of time the status turns to ERROR, with the error below in trove-taskmanager. I can't delete the instance until it turns to ERROR. However, on the dashboard I see an ACTIVE instance on the Project >> Compute >> Instances tab, while the Project >> Database tab shows the same status as the trove list command on the controller: BUILD or ERROR. Where should I be looking to diagnose this problem? (Some generic troubleshooting commands are sketched after the log excerpt.)

#trove list
+--------------------------------------+------------------+-----------+-------------------+--------+-----------+------+
| ID                                   | Name             | Datastore | Datastore Version | Status | Flavor ID | Size |
+--------------------------------------+------------------+-----------+-------------------+--------+-----------+------+
| 0280f460-d3ba-44ca-9a6a-134449f7d4f8 | mysql_instance_4 | mysql     | mysql-5.6         | BUILD  | 6         |    5 |
| 40cb9527-59f7-4cfe-997c-e23300bc6ba1 | test1            | mysql     | mysql-5.6         | BUILD  | 6         |    5 |
| a92e31a8-54ed-445f-a153-17aeeb401b57 | test4            | mysql     | mysql-5.6         | BUILD  | 6         |    5 |
+--------------------------------------+------------------+-----------+-------------------+--------+-----------+------+

#tail /var/log/trove/trove-taskmanage.log

2016-08-17 16:19:02.921 22971 DEBUG trove.guestagent.api [-] Sending the call to prepare the Guest. prepare /usr/lib/python2.7/dist-packages/trove/guestagent/api.
py:235
2016-08-17 16:19:02.954 22971 DEBUG trove.guestagent.api [-] Casting prepare _cast /usr/lib/python2.7/dist-packages/trove/guestagent/api.py:77
2016-08-17 16:19:02.955 22971 DEBUG oslo_messaging._drivers.amqpdriver [-] CAST unique_id: 1abf9a25ae044f90aa5df5267608f5e5 exchange 'trove' topic 'guestagent.db0
5cab4-555c-4566-a7f0-16b363a91652' _send /usr/lib/python2.7/dist-packages/oslo_messaging/_drivers/amqpdriver.py:454
2016-08-17 16:19:02.957 22971 INFO trove.instance.models [-] Resetting task status to NONE on instance db05cab4-555c-4566-a7f0-16b363a91652.
2016-08-17 16:19:02.962 22971 DEBUG trove.db.models [-] Saving DBInstance: {u'cluster_id': None, u'shard_id': None, u'deleted_at': None, u'id': u'db05cab4-555c-45
66-a7f0-16b363a91652', u'datastore_version_id': u'38e939c7-98cd-47f7-a4f1-ee3efdadff67', 'errors': {}, u'hostname': None, u'server_status': None, u'task_descripti
on': 'No tasks for the instance.', u'volume_size': 5, u'type': None, u'updated': datetime.datetime(2016, 8, 17, 7, 19, 2, 962620), '_sa_instance_state': <sqlalche
my.orm.state.InstanceState object at 0x7fafc1276090>, u'deleted': 0, u'configuration_id': None, u'volume_id': u'321c1b65-cbd3-4009-afc2-7571da91d879', u'slave_of_
id': None, u'task_start_time': None, u'name': u'test2', u'task_id': 1, u'created': datetime.datetime(2016, 8, 17, 7, 18, 57), u'tenant_id': u'fda14487a68f436da7b5
5fe218798ea2', u'compute_instance_id': u'884975a2-4e48-4392-bc40-9e1f166e54bc', u'flavor_id': u'6'} save /usr/lib/python2.7/dist-packages/trove/db/models.py:62
2016-08-17 16:19:02.995 22971 DEBUG trove.taskmanager.models [-] trove dns support = False _create_dns_entry /usr/lib/python2.7/dist-packages/trove/taskmanager/mo
dels.py:942
2016-08-17 16:19:02.995 22971 DEBUG trove.taskmanager.models [-] <greenlet.greenlet object at 0x7fafc14367d0>: DNS not enabled for instance: db05cab4-555c-4566-a7
f0-16b363a91652 _create_dns_entry /usr/lib/python2.7/dist-packages/trove/taskmanager/models.py:983
2016-08-17 16:34:03.213 22971 ERROR oslo.service.loopingcall [-] Fixed interval looping call 'trove.common.utils.poll_and_check' failed
2016-08-17 16:34:03.213 22971 ERROR oslo.service.loopingcall Traceback (most recent call last):
2016-08-17 16:34:03.213 22971 ERROR oslo.service.loopingcall   File "/usr/lib/python2.7/dist-packages/oslo_service/loopingcall.py", line 136, in _run_loop
2016-08-17 16:34:03.213 22971 ERROR oslo.service ...
(more)
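The fixed-interval `poll_and_check` failure usually means the guest agent never reported the prepare step back to the taskmanager within the timeout. Generic places to look (instance IDs taken from the log excerpt above):

    # Fault/task details for the stuck Trove instance
    trove show db05cab4-555c-4566-a7f0-16b363a91652

    # The underlying Nova guest and its boot console
    nova show 884975a2-4e48-4392-bc40-9e1f166e54bc
    nova console-log 884975a2-4e48-4392-bc40-9e1f166e54bc

    # Inside the guest: does the agent start, and can it reach RabbitMQ?
    tail -f /var/log/trove/trove-guestagent.log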
2016-08-28 13:54:02 -0500 commented question How to list projects in multi domain in OpenStack kilo?

Could you try running the command below and check the output?

openstack --os-identity-api-version 3 project list
2016-08-28 13:45:42 -0500 edited question How to list projects in multi domain in OpenStack kilo?

Can anyone please give me some pointers on how to accomplish the commands below in a multi-domain environment?

Brief about my environment:

============================

1) A single Keystone server (Keystone + Horizon) serving multiple environments.
2) Each environment consists of 1 controller (without Keystone and Horizon), 1 compute node, and 1 neutron node.
3) All the endpoints for a particular environment are segregated only by its region name.

My questions / requirements:

===========================

1) What is the command to list all the projects under a specific domain/region?
2) What is the command to list all the domains/regions under that Keystone?
(A command sketch is at the end of this post.)

============================================================

Sourcing the admin/default openrc file (which is working perfectly fine):

[root@controller ~]# cat admin-openrc.sh
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://controller:35357/v3

[root@controller ~]# source admin-openrc.sh
[root@controller ~]# openstack project list
+----------------------------------+---------+
| ID                               | Name    |
+----------------------------------+---------+
| 3b292a1ddc894872ba860b47844084ab | admin   |
| 2109d63d31e346b49d6bf97593d5309b | demo    |
| 0311873dc2e44972a8b173dd1cf4c23d | service |
+----------------------------------+---------+

[root@controller ~]# openstack endpoint list
+----------------------------------+--------------+--------------+----------------+
| ID                               | Region       | Service Name | Service Type   |
+----------------------------------+--------------+--------------+----------------+
| 566f27a4e4904320bba1269a81f18720 | region-lon02  | nova         | compute        |
| 638761a7bae24dc29e50cc462bbde8f2 | region-lon02  | neutron      | network        |
| a90e573951ca4402bb36bdc893193c1f | region-sjc03  | nova         | compute        |
| d1251350f27e4efaafe525a64071ed95 | region-lon02  | heat-cfn     | cloudformation |
| ee027c99712741e9bd5a78424738791d | region-dal10 | neutron      | network        |
| 925aaee59ae64342930b4f5a36c44982 | region-sjc03  | glance       | image          |
| 246ad7ac4aca4f33b7f29bc52903598f | region-dal10 | heat-cfn     | cloudformation |
| e68682d95c534fb18234b23323ea7ea9 | region-sjc03  | heat-cfn     | cloudformation |
| f4529089637a49ba8e913df62c0fb155 | region-dal09  | neutron      | network        |
| 752f67ea7b9e46b0b8da9a54833886dc | region-dal09  | nova         | compute        |
| afee75c2efb7433383708630024e1f74 | region-dal10 | ceilometer   | metering       |
| a68a2e0fa87e4f17a4448abc6b2c8c25 | RegionOne    | keystone     | identity       |
| cf135772c60342799a28c0a081f18be5 | region-sjc03  | heat         | orchestration  |
| 54bc8101d2f24794a4a095493e29e4de | region-dal10 | glance       | image          |
| e06b704881ee49f88f10f6bf759581a4 | region-dal10 | nova         | compute        |
| 20c6734391ca4a5997d933aca278d610 | region-sjc03  | neutron      | network        |
| cdb633c2cfe4405b9fc35104c311c721 | region-lon02  | heat         | orchestration  |
| 706ebd175b944822a9b88c3df3e32e8e | region-dal10 | heat         | orchestration  |
| 3071b3cfb3754941a5987bd1cb22b141 | region-dal09  | glance       | image          |
| 1d510f33485a46ddb11b8dcfb19825d9 | region-lon02  | glance       | image          |
+----------------------------------+--------------+--------------+----------------+

==============================================================================

Sourcing the openrc file for a particular domain/region; the rest of the commands, like nova/neutron, work fine:

[root@controller ~]# openstack domain list
ERROR: openstack Unknown command ['domain', 'list']

[root@controller ~]# cat admin-openrc-region-lon02.sh
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=password
export OS_AUTH_URL=http://controller:35357/v3
export OS_REGION_NAME=region-lon02


[root@controller ~]# source admin-openrc-region-lon02.sh
[root@controller ~]# openstack project list
ERROR: openstack The resource could not be found. (HTTP 404) (Request-ID: req-5c79c2e3-9be1-4280-8e8c-d0dcccb94e75)

====================================================================
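As sketched for the two questions above, with a recent enough python-openstackclient the usual commands are roughly the following (the domain name is a placeholder; the "Unknown command" error above typically means the client is too old or is falling back to the v2 identity API, so force v3 explicitly):

    # List all domains known to this Keystone (identity v3 only)
    openstack --os-identity-api-version 3 domain list

    # List the projects that belong to one domain
    openstack --os-identity-api-version 3 project list --domain <domain-name>

    # Regions are visible per endpoint in the service catalog
    openstack endpoint list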