
Bindas's profile - activity

2018-11-27 04:09:03 -0600 received badge  Self-Learner (source)
2018-05-20 01:05:40 -0600 received badge  Famous Question (source)
2018-05-17 16:31:34 -0600 received badge  Notable Question (source)
2018-05-17 09:40:08 -0600 received badge  Popular Question (source)
2017-12-17 22:46:26 -0600 commented question segregating neutron

Hi Bernd, thanks for the reply. We actually achieved the configuration by segregating with regions; the reason behind this architecture was so we could have two separate SDN controllers, openvswitch and cplane.

2017-12-17 22:43:22 -0600 received badge  Famous Question (source)
2017-12-12 15:12:01 -0600 received badge  Notable Question (source)
2017-12-12 15:12:01 -0600 received badge  Popular Question (source)
2017-12-11 13:37:59 -0600 asked a question segregating neutron

Hi,

Currently we are trying to deploy an OpenStack architecture with two different networking architectures: one uses native Neutron with OVS, and the other uses cplane networking, where only the Neutron API is running and all the other Neutron components (neutron-ovs-agent, neutron-metadata-agent, etc.) are turned off. We are planning to install the OpenStack Mitaka release using Juju on Ubuntu 14.04 LTS.

What I am looking for is recommendations on how to achieve this architecture: should I segregate the OpenStack controllers using regions, or is it fine to use cells or availability zones instead? It would also be very helpful if you could point me to any resources where I can read up on this type of deployment a bit more. I found a similar question about this kind of deployment, but unfortunately no answer was provided. Any guidance or help would be appreciated :)
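For illustration, a region-based split would mean registering the network endpoints per region in Keystone, roughly like this (the region names and URLs below are placeholders, not our actual values):

    # endpoints for the native neutron/OVS controllers
    openstack endpoint create --region RegionOVS network public http://neutron-ovs.example.com:9696
    # endpoints for the cplane-backed controllers (neutron API only)
    openstack endpoint create --region RegionCplane network public http://neutron-cplane.example.com:9696

Each region would then point clients at its own neutron API and, behind it, its own SDN backend.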

thanks

2017-01-23 21:53:49 -0600 asked a question boot instance and save it on multi storage backend

Hi

Currently we have multiple storage backends, Ceph and SolidFire, configured in Cinder. By default the instances' disk files are being saved in Ceph; what can I do to save them in SolidFire instead?
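One common approach, assuming the SolidFire backend is already enabled in cinder.conf and exposes volume_backend_name=solidfire (that backend name is an assumption), is to create a volume type bound to that backend and boot the instance from a volume of that type rather than from an ephemeral disk:

    # map a volume type to the SolidFire backend
    cinder type-create solidfire
    cinder type-key solidfire set volume_backend_name=solidfire

    # create a bootable volume on SolidFire from a Glance image, then boot from it
    cinder create --volume-type solidfire --image-id <image-uuid> 20
    nova boot --flavor m1.small --boot-volume <volume-uuid> test-instance

The scheduler places the volume on the backend whose volume_backend_name matches the type's extra spec.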

2016-05-23 16:41:32 -0600 received badge  Famous Question (source)
2016-05-05 02:08:36 -0600 received badge  Notable Question (source)
2016-05-04 04:17:49 -0600 received badge  Enthusiast
2016-05-04 02:25:47 -0600 received badge  Popular Question (source)
2016-05-03 12:33:58 -0600 answered a question mariadb connection aborted

Currently this is my Galera configuration in HAProxy:

listen galera_cluster
  bind VIP:3306
  mode tcp
  balance  source
  option  httpchk
  option tcplog
  server mariadb01 ip1:3306 rise 2 fall 5
  server mariadb02 ip2:3306 backup rise 2 fall 5
  server mariadb03 ip3:3306 backup rise 2 fall 5
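For comparison, here is a minimal sketch of a commonly used Galera front-end in HAProxy that enables the health check explicitly on each backend and sets idle timeouts; the clustercheck port 9200 and the timeout values below are assumptions, not taken from the configuration above:

    listen galera_cluster
      bind VIP:3306
      mode tcp
      balance source
      option httpchk                  # expects a clustercheck/xinetd responder on the check port
      timeout client 90m              # keep long-idle OpenStack DB connections from being cut by the proxy
      timeout server 90m
      server mariadb01 ip1:3306 check port 9200 rise 2 fall 5
      server mariadb02 ip2:3306 check port 9200 backup rise 2 fall 5
      server mariadb03 ip3:3306 check port 9200 backup rise 2 fall 5

Without the check keyword on the server lines, the rise/fall settings have no effect.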
2016-05-03 12:31:16 -0600 commented answer mariadb connection aborted

Currently this is my Galera configuration (see below). Do you see any problems with what I have at the moment?

2016-05-02 22:04:07 -0600 asked a question mariadb connection aborted

Hi

I have set up OpenStack Liberty in a 3-node HA configuration with HAProxy, Pacemaker, and a MariaDB Galera cluster on Ubuntu 14.04 LTS. Currently I am seeing a lot of warning messages in the MariaDB error logs every time traffic goes through OpenStack, and every day or two I get a "MySQL connection is lost" error in a random OpenStack service, mainly in Neutron. Below are the warnings from the MariaDB error log; they show up almost every time traffic goes through OpenStack. Any help would be appreciated.

    160502 22:41:23 [Warning] Aborted connection 652 to db: 'neutron' user: 'neutron' host: 'haproxy02' (Unknown error)
160502 22:41:24 [Warning] Aborted connection 655 to db: 'neutron' user: 'neutron' host: 'haproxy02' (Unknown error)
160502 22:41:24 [Warning] Aborted connection 656 to db: 'neutron' user: 'neutron' host: 'haproxy02' (Unknown error)
160502 22:41:24 [Warning] Aborted connection 657 to db: 'neutron' user: 'neutron' host: 'haproxy02' (Unknown error)
160502 22:41:24 [Warning] Aborted connection 654 to db: 'nova' user: 'nova' host: 'haproxy02' (Unknown error)
160502 22:41:25 [Warning] Aborted connection 658 to db: 'nova' user: 'nova' host: 'haproxy02' (Unknown error)
160502 22:41:25 [Warning] Aborted connection 659 to db: 'neutron' user: 'neutron' host: 'haproxy02' (Unknown error)
160502 22:41:25 [Warning] Aborted connection 660 to db: 'neutron' user: 'neutron' host: 'haproxy02' (Unknown error)
160502 22:41:25 [Warning] Aborted connection 661 to db: 'nova' user: 'nova' host: 'haproxy02' (Unknown error)
160502 22:41:26 [Warning] Aborted connection 664 to db: 'nova' user: 'nova' host: 'haproxy02' (Unknown error)
160502 22:41:26 [Warning] Aborted connection 666 to db: 'neutron' user: 'neutron' host: 'haproxy02' (Unknown error)
160502 22:41:26 [Warning] Aborted connection 667 to db: 'neutron' user: 'neutron' host: 'haproxy02' (Unknown error)
160502 22:41:26 [Warning] Aborted connection 668 to db: 'neutron' user: 'neutron' host: 'haproxy02' (Unknown error)
160502 22:41:26 [Warning] Aborted connection 669 to db: 'nova' user: 'nova' host: 'haproxy02' (Unknown error)
160502 22:41:26 [Warning] Aborted connection 662 to db: 'nova' user: 'nova' host: 'haproxy02' (Unknown error)
160502 22:41:27 [Warning] Aborted connection 651 to db: 'keystone' user: 'keystone' host: 'haproxy02' (Unknown error)
160502 22:41:27 [Warning] Aborted connection 670 to db: 'nova' user: 'nova' host: 'haproxy02' (Unknown error)
160502 22:41:28 [Warning] Aborted connection 674 to db: 'nova' user: 'nova' host: 'haproxy02' (Unknown error)
160502 22:41:54 [Warning] Aborted connection 678 to db: 'keystone' user: 'keystone' host: 'haproxy02' (Unknown error)
160502 22:41:54 [Warning] Aborted connection 679 to db: 'keystone' user: 'keystone' host: 'haproxy02' (Unknown error)
160502 22:41:54 [Warning] Aborted connection 663 to db: 'keystone' user: 'keystone' host: 'haproxy02' (Unknown error)
160502 22:41:54 [Warning] Aborted connection 572 to db: 'neutron' user: 'neutron' host: 'haproxy02' (Unknown error)
160502 22:41:55 [Warning] Aborted connection 573 to db: 'neutron' user: 'neutron' host: 'haproxy02' (Unknown error)
160502 22:41:55 [Warning] Aborted connection 680 to db: 'nova' user: 'nova' host: 'haproxy02' (Unknown error)
160502 22:41:55 [Warning] Aborted ...
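For background on where these connections come from: each OpenStack service keeps a SQLAlchemy connection pool controlled by the [database] section of its config file, and if the pool recycles connections less often than the proxy or the server closes idle ones, MariaDB logs them as aborted. A rough illustration of the service-side knobs involved (values are placeholders, not a recommendation):

    # /etc/neutron/neutron.conf -- the same section exists in nova.conf, keystone.conf, ...
    [database]
    # recycle pooled connections before an intermediate proxy or the server drops them
    idle_timeout = 3600
    max_pool_size = 10
    # retry indefinitely if the database is briefly unreachable
    max_retries = -1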
2016-02-09 11:18:13 -0600 received badge  Famous Question (source)
2015-10-27 11:27:45 -0600 received badge  Notable Question (source)
2015-10-07 02:54:17 -0600 received badge  Teacher (source)
2015-10-07 02:54:17 -0600 received badge  Self-Learner (source)
2015-10-06 15:01:17 -0600 received badge  Popular Question (source)
2015-10-06 14:04:31 -0600 received badge  Scholar (source)
2015-10-06 14:03:55 -0600 answered a question openstack api fail 3-5 time out of 10

I was able to find the root cause of this issue: I basically doubled the timeout setting for the cluster in the my.cnf file, and now it is not giving me the HTTP 500 errors anymore.
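A minimal sketch of the kind of change described, assuming the setting in question is the server-side idle timeout (the post does not say exactly which timeout was doubled, so the parameter names and values below are assumptions):

    [mysqld]
    # server-side idle timeout for client connections, doubled from the previous value
    wait_timeout = 28800
    interactive_timeout = 28800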

2015-10-06 14:03:06 -0600 commented question openstack api fail 3-5 time out of 10

I couldn't find any log files for MySQL; I think it's mainly due to a configuration issue.

2015-10-05 10:45:19 -0600 commented question openstack api fail 3-5 time out of 10

For some reason I couldn't add code to the comment, so check above; as you can see, it doesn't give out much info.

2015-10-05 10:43:21 -0600 received badge  Editor (source)
2015-10-03 16:48:13 -0600 asked a question openstack api fail 3-5 time out of 10

We have deployed the OpenStack Juno release on Ubuntu 14.04.3 LTS. We have three controller nodes, with RabbitMQ and Galera installed on the controller nodes as well. We have set up two separate HAProxy nodes, and Galera is behind the VIP, as are the other OpenStack endpoints, but not RabbitMQ.

When I run the keystone token-get command, it fails about 3 to 5 times out of 10; below is the output. I am currently unsure whether I should put Galera behind a VIP or not, and I am not sure where to start looking into this issue. I think it is a MariaDB problem, but I'm not certain. Any help would be great.

Thanks.

    for i in $(seq 1 10); do keystone token-get;done
Authorization Failed: An unexpected error prevented the server from fulfilling your request. (HTTP 500)
Authorization Failed: An unexpected error prevented the server from fulfilling your request. (HTTP 500)
+-----------+----------------------------------+
|  Property |              Value               |
+-----------+----------------------------------+
|  expires  |       2015-10-03T22:26:06Z       |
|     id    | fb9c8032b27e434fa3d6a4646ef8e972 |
| tenant_id | cc5c94dbdf684551ba10b0b5c22c1908 |
|  user_id  | a60b3894adf54394a0e0bc72228789bc |
+-----------+----------------------------------+
+-----------+----------------------------------+
|  Property |              Value               |
+-----------+----------------------------------+
|  expires  |       2015-10-03T22:26:07Z       |
|     id    | ef65d53914c24144b91e1fc573ee2e85 |
| tenant_id | cc5c94dbdf684551ba10b0b5c22c1908 |
|  user_id  | a60b3894adf54394a0e0bc72228789bc |
+-----------+----------------------------------+
+-----------+----------------------------------+
|  Property |              Value               |
+-----------+----------------------------------+
|  expires  |       2015-10-03T22:26:07Z       |
|     id    | 60c6d22805a749e2a96d4cc2193ec0f0 |
| tenant_id | cc5c94dbdf684551ba10b0b5c22c1908 |
|  user_id  | a60b3894adf54394a0e0bc72228789bc |
+-----------+----------------------------------+
+-----------+----------------------------------+
|  Property |              Value               |
+-----------+----------------------------------+
|  expires  |       2015-10-03T22:26:07Z       |
|     id    | cabc4d73bb6f4d9894c2bcc77ba3a2a1 |
| tenant_id | cc5c94dbdf684551ba10b0b5c22c1908 |
|  user_id  | a60b3894adf54394a0e0bc72228789bc |
+-----------+----------------------------------+
+-----------+----------------------------------+
|  Property |              Value               |
+-----------+----------------------------------+
|  expires  |       2015-10-03T22:26:08Z       |
|     id    | 0bccd20c9e3c4421bdc150c156811aee |
| tenant_id | cc5c94dbdf684551ba10b0b5c22c1908 |
|  user_id  | a60b3894adf54394a0e0bc72228789bc |
+-----------+----------------------------------+
Authorization Failed: An unexpected error prevented the server from fulfilling your request. (HTTP 500)
+-----------+----------------------------------+
|  Property |              Value               |
+-----------+----------------------------------+
|  expires  |       2015-10-03T22:26:09Z       |
|     id    | f2c928afe7544d6392ac17c32d53dea1 |
| tenant_id | cc5c94dbdf684551ba10b0b5c22c1908 |
|  user_id  | a60b3894adf54394a0e0bc72228789bc |
+-----------+----------------------------------+
+-----------+----------------------------------+
|  Property |              Value               |
+-----------+----------------------------------+
|  expires  |       2015-10-03T22:26:09Z       |
|     id    | 19604e7048ae4cb68bf30de032c9fc7c |
| tenant_id | cc5c94dbdf684551ba10b0b5c22c1908 |
|  user_id  | a60b3894adf54394a0e0bc72228789bc |
+-----------+----------------------------------+

For some reason I couldn't add command output to comments, so I'm putting it here. This is the output of keystone --debug token-get for a failing request:

DEBUG:keystoneclient.auth.identity.v2:Making authentication request to http://172.16.100.100:35357/v2.0/tokens
INFO:urllib3.connectionpool:Starting new HTTP connection (1): 172.16.100.100
DEBUG:urllib3.connectionpool:Setting read timeout to 600.0
DEBUG:urllib3.connectionpool:"POST /v2.0/tokens HTTP/1.1" 500 143
DEBUG:keystoneclient.session:Request returned failure status: 500

This is a good (successful) one:

DEBUG:keystoneclient.auth.identity.v2:Making authentication request to http://172.16.100.100:35357/v2.0/tokens
INFO:urllib3.connectionpool:Starting new HTTP connection (1): 172.16.100.100
DEBUG:urllib3.connectionpool:Setting read timeout to 600.0
DEBUG:urllib3.connectionpool:"POST /v2.0/tokens HTTP/1.1" 200 1471
+-----------+----------------------------------+
|  Property |              Value               |
+-----------+----------------------------------+
|  expires  |       2015-10-05T16:37:06Z       |
|     id    | 305f67047d1e4a27a4a7e945eb346da5 |
| tenant_id | cc5c94dbdf684551ba10b0b5c22c1908 |
|  user_id  | a60b3894adf54394a0e0bc72228789bc |
+-----------+----------------------------------+

I went to check the Keystone logs, and it seems like Keystone is losing its backend connection to the Galera cluster. Can anyone help with this issue?

2015-10-05 15:37:05.680 12364 ERROR keystone.common.wsgi [-] (OperationalError) (2006, 'MySQL server has gone away') 'SELECT user.id AS user_id, user.name AS user_name, user.domain_id AS user_domain_id, user.password AS user_password, user.enabled AS user_enabled, user.extra AS user_extra, user.default_project_id AS user_default_project_id \nFROM user \nWHERE user.name = %s AND user.domain_id = %s' ('admin', 'default')
2015-10-05 15:37:05.680 12364 TRACE keystone.common.wsgi     self.errorhandler(self, exc, value)
2015-10-05 15:37:05.680 12364 ...
2015-06-04 15:09:00 -0600 received badge  Famous Question (source)
2015-04-10 14:02:03 -0600 received badge  Notable Question (source)
2015-04-10 14:02:03 -0600 received badge  Popular Question (source)
2015-03-18 10:30:02 -0600 asked a question Rally throwing errors for heat testing.

Hello Openstackers,

I am running into an issue while doing Rally testing for Heat, using the create-and-delete-stack.yaml that is provided with the scenario tests in the heat folder. Below you can see that I start a Rally task and it fails due to ("ERROR: Missing required credential: roles ['heat_stack_owner']",), but when I run keystone role-list I can see the heat_stack_owner role there. I am not sure what exactly I am doing wrong, but any help would be useful. By the way, I am following this guide: https://www.mirantis.com/blog/5-steps...

root@devstack:~/rally/samples/tasks/scenarios/heat# rally task start create-and-delete-stack.yaml
--------------------------------------------------------------------------------
 Preparing input task
--------------------------------------------------------------------------------

Input task is:
---
  HeatStacks.create_and_delete_stack:
    -
      runner:
        type: "constant"
        times: 10
        concurrency: 2
      context:
        users:
          tenants: 2
          users_per_tenant: 3

--------------------------------------------------------------------------------
 Task  1058f0c1-94e8-4b8d-8a21-16bcf365f276: started
--------------------------------------------------------------------------------

Benchmarking... This can take a while...

To track task status use:

        rally task status
        or
        rally task detailed


--------------------------------------------------------------------------------
Task 1058f0c1-94e8-4b8d-8a21-16bcf365f276: finished
--------------------------------------------------------------------------------

test scenario HeatStacks.create_and_delete_stack
args position 0
args values:
OrderedDict([(u'runner', OrderedDict([(u'type', u'constant'), (u'concurrency', 2), (u'times', 10)])), (u'context', OrderedDict([(u'users', OrderedDict([(u'project_domain', u'default'), (u'users_per_tenant', 3), (u'tenants', 2), (u'resource_management_workers', 30), (u'user_domain', u'default')]))]))])
+--------+-----------+-----------+-----------+---------------+---------------+---------+-------+
| action | min (sec) | avg (sec) | max (sec) | 90 percentile | 95 percentile | success | count |
+--------+-----------+-----------+-----------+---------------+---------------+---------+-------+
| total  | n/a       | n/a       | n/a       | n/a           | n/a           | 0.0%    | 10    |
+--------+-----------+-----------+-----------+---------------+---------------+---------+-------+
Load duration: 2.85287499428
Full duration: 8.18649196625

HINTS:
* To plot HTML graphics with this data, run:
        rally task report 1058f0c1-94e8-4b8d-8a21-16bcf365f276 --out output.html

* To get raw JSON output of task results, run:
        rally task results 1058f0c1-94e8-4b8d-8a21-16bcf365f276

Using task: 1058f0c1-94e8-4b8d-8a21-16bcf365f276
root@devstack:~/rally/samples/tasks/scenarios/heat# rally task results 1058f0c1-94e8-4b8d-8a21-16bcf365f276
[
    {
        "full_duration": 8.186491966247559,
        "key": {
            "kw": {
                "context": {
                    "users": {
                        "project_domain": "default",
                        "resource_management_workers": 30,
                        "tenants": 2,
                        "user_domain": "default",
                        "users_per_tenant": 3
                    }
                },
                "runner": {
                    "concurrency": 2,
                    "times": 10,
                    "type": "constant"
                }
            },
            "name": "HeatStacks.create_and_delete_stack",
            "pos": 0
        },
        "load_duration": 2.852874994277954,
        "result": [
            {
                "atomic_actions": {
                    "heat.create_stack": null
                },
                "duration": 0.625485897064209,
                "error": [
                    "HTTPBadRequest",
                    "ERROR: Missing required credential: roles ['heat_stack_owner']",
                    "Traceback (most recent call last):\n  File \"/usr/local/lib/python2.7/dist-packages/rally/benchmark/runners/base.py\", line 79, in _run_scenario_once\n    method_name)(**kwargs) or scenario_output\n  File \"/usr/local/lib/python2.7/dist-packages/rally/benchmark/scenarios/heat/stacks.py\", line 83, in create_and_delete_stack\n    stack = self._create_stack(template)\n  File \"/usr/local/lib/python2.7/dist-packages/rally/benchmark/scenarios/base.py\", line 255, in func_atomic_actions\n    f = func(self, *args, **kwargs)\n  File \"/usr/local/lib/python2.7/dist-packages/rally/benchmark/scenarios/heat/utils.py\", line 114, in _create_stack\n    stack_id = self.clients(\"heat\").stacks.create(**kw)[\"stack\"][\"id\"]\n  File \"/usr/local/lib/python2.7/dist-packages/heatclient/v1/stacks.py\", line 134, in create\n    data=kwargs, headers=headers)\n  File \"/usr/local/lib/python2.7/dist-packages/heatclient/common/http.py\", line 266, in json_request\n    resp = self._http_request(url, method, **kwargs)\n  File \"/usr/local/lib/python2.7/dist-packages/heatclient/common/http.py\", line 221, in _http_request\n    raise exc.from_response(resp)\nHTTPBadRequest: ERROR: Missing required credential: roles ['heat_stack_owner']\n"
                ],
                "idle_duration": 0,
                "scenario_output": {
                    "data": {},
                    "errors": ""
                },
                "timestamp": 1426658891.068377 ...
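For what it's worth, the role exists in Keystone but the temporary users that Rally creates for the run presumably never receive it. If the Rally version in use ships the roles context, the task file can ask Rally to grant the role to those users; a rough sketch (untested, and the context syntax may differ between Rally releases):

    HeatStacks.create_and_delete_stack:
      -
        runner:
          type: "constant"
          times: 10
          concurrency: 2
        context:
          users:
            tenants: 2
            users_per_tenant: 3
          roles:
            - "heat_stack_owner"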
2014-11-10 06:30:26 -0600 received badge  Notable Question (source)
2014-11-10 06:30:26 -0600 received badge  Famous Question (source)
2014-10-13 08:38:25 -0600 received badge  Popular Question (source)
2014-10-10 05:10:08 -0600 received badge  Student (source)
2014-10-08 11:09:55 -0600 asked a question tempest all object storage test failing RestClientException

Hello,

I am having an issue running Tempest against Object Storage. It works fine for the other OpenStack components except for Object Storage. Below are the details of my tempest.conf for object storage.

[object-storage]
catalog_type = object-store
region = xxxx
container_sync_timeout = 120
container_sync_interval = 120
accounts_quotas_available = True
container_quotas_available = True
operator_role = admin

Below is the output from the Tempest run. I have changed the endpoint URL so that I don't expose my public URL.

ks@M400005Tempest-API-Testing:~/tempest_old/tempest/Tempest_Automated_Test$ nosetests -v /home/ks/tempest_old/tempest/api/object_storage/test_object_expiry.pyc
ERROR

======================================================================
ERROR: test suite for <class 'tempest.api.object_storage.test_object_expiry.ObjectExpiryTest'>
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nose/suite.py", line 208, in run
    self.setUp()
  File "/usr/lib/python2.7/dist-packages/nose/suite.py", line 291, in setUp
    self.setupContext(ancestor)
  File "/usr/lib/python2.7/dist-packages/nose/suite.py", line 314, in setupContext
    try_run(context, names)
  File "/usr/lib/python2.7/dist-packages/nose/util.py", line 478, in try_run
    return func()
  File "/home/ks/tempest_old/tempest/api/object_storage/test_object_expiry.py", line 33, in setUpClass
    cls.container_client.create_container(cls.container_name)
  File "/home/ks/tempest_old/tempest/services/object_storage/container_client.py", line 47, in create_container
    resp, body = self.put(url, body=None, headers=headers)
  File "/home/ks/tempest_old/tempest/common/rest_client.py", line 314, in put
    return self.request('PUT', url, headers, body)
  File "/home/ks/tempest_old/tempest/common/rest_client.py", line 436, in request
    resp, resp_body)
  File "/home/ks/tempest_old/tempest/common/rest_client.py", line 527, in _error_checker
    raise exceptions.RestClientException(str(resp.status))
RestClientException: An unknown exception occurred
Details: An unknown exception occurred
Details: 411
-------------------- >> begin captured stdout << ---------------------
self.image.client
config:<tempest.config.TempestConfig instance at 0x2e75a70>, user:ksadmin, password:AAPcloud123, auth_url:https://api.aaptest.cci.blah.com:5000/v2.0/, tenant_name:admin, auth_version:v1.0, service:image, base_url:None, region:{'compute': 'aaptest-region1', 'network': 'aaptest-region1', 'image': 'aaptest-region1', 'volume': 'aaptest-region1', 'orchestration': 'aaptest-region1', 'object-store': 'aaptest-region1', 'identity': 'aaptest-region1'}, endpoint_url:publicURL, build_interval:10, build_timeout:600
token:None...,
headers:{'Content-Type': 'application/json', 'Accept': 'application/json'}...
self.image.client_v2
config:<tempest.config.TempestConfig instance at 0x2e75a70>, user:ksadmin, password:AAPcloud123, auth_url:https://api.aaptest.cci.blah.com:5000/v2.0/, tenant_name:admin, auth_version:v1.0, service:image, base_url:None, region:{'compute': 'aaptest-region1', 'network': 'aaptest-region1', 'image': 'aaptest-region1', 'volume': 'aaptest-region1', 'orchestration': 'aaptest-region1', 'object-store': 'aaptest-region1', 'identity': 'aaptest-region1'}, endpoint_url:publicURL, build_interval:10, build_timeout:600
token:None...,
headers:{'Content-Type': 'application/json', 'Accept': 'application/json'}...
self.image.client
config:<tempest.config.TempestConfig instance at 0x2e75a70>, user:tempest_demouser_4, password:secret, auth_url:https://api.aaptest.cci.blah.com:5000/v2.0/, tenant_name:tempest_test2, auth_version:v1.0, service:image, base_url:None, region:{'compute': 'aaptest-region1', 'network': 'aaptest-region1', 'image': 'aaptest-region1', 'volume': 'aaptest-region1', 'orchestration': 'aaptest-region1', 'object-store': 'aaptest-region1', 'identity': 'aaptest-region1'}, endpoint_url:publicURL, build_interval:10, build_timeout:600
token:None...,
headers:{'Content-Type': 'application/json', 'Accept': 'application/json'}...
self.image.client_v2
config:<tempest.config.TempestConfig instance at 0x2e75a70>, user:tempest_demouser_4, password:secret, auth_url:https://api.aaptest.cci.blah.com:5000/v2.0/, tenant_name:tempest_test2, auth_version:v1.0, service:image, base_url:None, region:{'compute': 'aaptest-region1', 'network': 'aaptest-region1', 'image': 'aaptest-region1', 'volume': 'aaptest-region1', 'orchestration ...