
jkary's profile - activity

2015-01-30 05:27:56 -0500 received badge  Student (source)
2015-01-23 03:28:10 -0500 received badge  Taxonomist
2014-06-09 22:22:41 -0500 received badge  Famous Question (source)
2014-05-23 09:07:17 -0500 received badge  Notable Question (source)
2014-05-10 06:46:18 -0500 received badge  Popular Question (source)
2014-05-10 06:05:28 -0500 answered a question Connection lost after adding ovs-vsctl add-port br-ex INTERFACE_NAME

How many NICs are you using? It looks like you are adding your management link to br-ex rather than your public network link.

Jason
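
To illustrate the point, a check along these lines would show which interface is attached to br-ex (the interface names em1/em2 below are placeholders for your own management and public NICs):

```shell
# List the bridges and the ports attached to each one
ovs-vsctl show

# If the management NIC was added by mistake, remove it
# (em1 is a placeholder for your management interface)
ovs-vsctl del-port br-ex em1

# Attach the public-network NIC instead
# (em2 is a placeholder for your public-facing interface)
ovs-vsctl add-port br-ex em2
```

Note that del-port/add-port take effect immediately, which is why adding the management NIC drops your connection.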

2014-05-10 05:58:56 -0500 answered a question How to setup ML2 Plugin , OVS agent, Neutron for IceHouse on CentOS 6.5 ?

Dan,

Have you tried following the manual? http://docs.openstack.org/icehouse/install-guide/install/yum/content/

I've had reasonable success on CentOS 6.4 using this guide.

Jason

2014-05-09 13:27:20 -0500 asked a question Error while processing VIF ports -- RemoteError: Remote error: OperationalError (OperationalError) no such table:

Hi Folks,

I am running a three-node CentOS 6.4 / Icehouse setup. Has anyone encountered this error before on their compute node:

2014-05-09 14:20:12.871 2001 ERROR neutron.plugins.openvswitch.agent.ovs_neutron_agent [-] Error while processing VIF ports
2014-05-09 14:20:12.871 2001 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent Traceback (most recent call last):
2014-05-09 14:20:12.871 2001 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.6/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py", line 1226, in rpc_loop
2014-05-09 14:20:12.871 2001 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     sync = self.process_network_ports(port_info)
2014-05-09 14:20:12.871 2001 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.6/site-packages/neutron/plugins/openvswitch/agent/ovs_neutron_agent.py", line 1069, in process_network_ports
2014-05-09 14:20:12.871 2001 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     port_info.get('updated', set()))
2014-05-09 14:20:12.871 2001 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.6/site-packages/neutron/agent/securitygroups_rpc.py", line 247, in setup_port_filters
2014-05-09 14:20:12.871 2001 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     self.prepare_devices_filter(new_devices)
2014-05-09 14:20:12.871 2001 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.6/site-packages/neutron/agent/securitygroups_rpc.py", line 161, in prepare_devices_filter
2014-05-09 14:20:12.871 2001 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     self.context, list(device_ids))
2014-05-09 14:20:12.871 2001 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.6/site-packages/neutron/agent/securitygroups_rpc.py", line 86, in security_group_rules_for_devices
2014-05-09 14:20:12.871 2001 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     topic=self.topic)
2014-05-09 14:20:12.871 2001 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.6/site-packages/neutron/openstack/common/rpc/proxy.py", line 125, in call
2014-05-09 14:20:12.871 2001 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     result = rpc.call(context, real_topic, msg, timeout)
2014-05-09 14:20:12.871 2001 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.6/site-packages/neutron/openstack/common/rpc/__init__.py", line 112, in call
2014-05-09 14:20:12.871 2001 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     return _get_impl().call(CONF, context, topic, msg, timeout)
2014-05-09 14:20:12.871 2001 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.6/site-packages/neutron/openstack/common/rpc/impl_qpid.py", line 784, in call
2014-05-09 14:20:12.871 2001 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     rpc_amqp.get_connection_pool(conf, Connection))
2014-05-09 14:20:12.871 2001 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.6/site-packages/neutron/openstack/common/rpc/amqp.py", line 575, in call
2014-05-09 14:20:12.871 2001 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     rv = list(rv)
2014-05-09 14:20:12.871 2001 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent   File "/usr/lib/python2.6/site-packages/neutron/openstack/common/rpc/amqp.py", line 540, in __iter__
2014-05-09 14:20:12.871 2001 TRACE neutron.plugins.openvswitch.agent.ovs_neutron_agent     raise result
2014-05-09 14:20:12.871 2001 TRACE neutron ...
2014-05-02 00:19:24 -0500 received badge  Famous Question (source)
2014-05-01 10:00:10 -0500 received badge  Enthusiast
2014-04-30 08:54:02 -0500 answered a question Icehouse + RedHat Deployment Bug?

Turning on DEBUG reveals some more information. Can anyone tell me if this is the result of the compute node not being registered properly?

2014-04-30 09:45:06.403 2449 INFO nova.api.openstack [req-98a87c05-5b69-4a78-975c-a1870db5979b ca8cf7879f304f328420a023aa47d821 cc55e4c69e404fc4b89e2983a36efd80] http://nmtg-ctrl001:8774/v2/cc55e4c69e404fc4b89e2983a36efd80/servers returned with HTTP 500
2014-04-30 09:45:06.404 2449 DEBUG nova.api.openstack.wsgi [req-98a87c05-5b69-4a78-975c-a1870db5979b ca8cf7879f304f328420a023aa47d821 cc55e4c69e404fc4b89e2983a36efd80] Returning 500 to user: The server has either erred or is incapable of performing the requested operation. __call__ /usr/lib/python2.6/site-packages/nova/api/openstack/wsgi.py:1215
2014-04-30 09:45:06.404 2449 INFO nova.osapi_compute.wsgi.server [req-98a87c05-5b69-4a78-975c-a1870db5979b ca8cf7879f304f328420a023aa47d821 cc55e4c69e404fc4b89e2983a36efd80] 10.81.76.105 "POST /v2/cc55e4c69e404fc4b89e2983a36efd80/servers HTTP/1.1" status: 500 len: 335 time: 60.1515419
2014-04-30 09:45:06.462 2449 DEBUG keystoneclient.middleware.auth_token [-] Authenticating user token __call__ /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:602
2014-04-30 09:45:06.462 2449 DEBUG keystoneclient.middleware.auth_token [-] Removing headers from request environment: X-Identity-Status,X-Domain-Id,X-Domain-Name,X-Project-Id,X-Project-Name,X-Project-Domain-Id,X-Project-Domain-Name,X-User-Id,X-User-Name,X-User-Domain-Id,X-User-Domain-Name,X-Roles,X-Service-Catalog,X-User,X-Tenant-Id,X-Tenant-Name,X-Tenant,X-Role _remove_auth_headers /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:661
2014-04-30 09:45:06.463 2449 DEBUG keystoneclient.middleware.auth_token [-] Returning cached token _cache_get /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:1010
2014-04-30 09:45:06.463 2449 DEBUG keystoneclient.middleware.auth_token [-] Received request from user: ca8cf7879f304f328420a023aa47d821 with project_id : cc55e4c69e404fc4b89e2983a36efd80 and roles: admin,_member_  _build_user_headers /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:909
2014-04-30 09:45:06.465 2449 DEBUG routes.middleware [-] Matched GET /cc55e4c69e404fc4b89e2983a36efd80/servers/detail __call__ /usr/lib/python2.6/site-packages/Routes-1.12.3-py2.6.egg/routes/middleware.py:100
2014-04-30 09:45:06.465 2449 DEBUG routes.middleware [-] Route path: '/{project_id}/servers/detail', defaults: {'action': u'detail', 'controller': <nova.api.openstack.wsgi.Resource object at 0x4471610>} __call__ /usr/lib/python2.6/site-packages/Routes-1.12.3-py2.6.egg/routes/middleware.py:102
2014-04-30 09:45:06.465 2449 DEBUG routes.middleware [-] Match dict: {'action': u'detail', 'controller': <nova.api.openstack.wsgi.Resource object at 0x4471610>, 'project_id': u'cc55e4c69e404fc4b89e2983a36efd80'} __call__ /usr/lib/python2.6/site-packages/Routes-1.12.3-py2.6.egg/routes/middleware.py:103
2014-04-30 09:45:06.466 2449 DEBUG nova.api.openstack.wsgi [req-8cad30be-8cbd-4dd6-8439-3b87ba41066e ca8cf7879f304f328420a023aa47d821 cc55e4c69e404fc4b89e2983a36efd80] Calling method '<bound method Controller.detail of <nova.api.openstack.compute.servers.Controller object at 0x446da10>>' (Content-type='None', Accept='application/json') _process_stack /usr/lib/python2.6/site-packages/nova/api/openstack/wsgi.py:945
2014-04-30 09:45:06.466 2449 DEBUG nova.compute.api [req-8cad30be-8cbd-4dd6-8439-3b87ba41066e ca8cf7879f304f328420a023aa47d821 cc55e4c69e404fc4b89e2983a36efd80] Searching by: {'deleted': False, u'project_id': u'cc55e4c69e404fc4b89e2983a36efd80', u'limit': u'21'} get_all /usr/lib/python2.6/site-packages/nova/compute/api.py:1839
2014-04-30 09:45:06.489 2449 INFO nova.osapi_compute.wsgi.server [req-8cad30be-8cbd-4dd6-8439-3b87ba41066e ca8cf7879f304f328420a023aa47d821 cc55e4c69e404fc4b89e2983a36efd80] 10.81.76.105 "GET /v2/cc55e4c69e404fc4b89e2983a36efd80/servers/detail?limit=21&project_id=cc55e4c69e404fc4b89e2983a36efd80 HTTP/1.1" status: 200 len: 187 time: 0.0271010
2014-04-30 09:45:06.628 2449 DEBUG keystoneclient.middleware.auth_token [-] Authenticating user token __call__ /usr/lib/python2.6/site-packages/keystoneclient/middleware ...
2014-04-30 07:46:20 -0500 commented answer Icehouse + RedHat Deployment Bug?

I installed this from scratch using the RedHat Deployment document located on the http://OpenStack.org site.

I checked my topology versions and found that the compute and controller nodes were both running version 1, while the network node did not have a version property in its neutron configuration files.

2014-04-30 03:27:27 -0500 received badge  Notable Question (source)
2014-04-30 00:53:42 -0500 received badge  Popular Question (source)
2014-04-29 15:30:07 -0500 asked a question Icehouse + RedHat Deployment Bug?

Hello,

I'm wondering if I've hit a bug or have a configuration error on my 3 node deployment. All services start up and the message queue appears to be working. When I launch an image I get the following:

2014-04-29 16:16:49.723 2252 ERROR nova.api.openstack [req-a5503226-ad62-47fd-9ce2-9bd0b3c79d54 ca8cf7879f304f328420a023aa47d821 cc55e4c69e404fc4b89e2983a36efd80] Caught error: Timed out waiting for a reply to message ID 1c8a38a6a601410981cc630ddefa6346
2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack Traceback (most recent call last):
2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/nova/api/openstack/__init__.py", line 125, in __call__
2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack     return req.get_response(self.application)
2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/webob/request.py", line 1296, in send
2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack     application, catch_exc_info=False)
2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/webob/request.py", line 1260, in call_application
2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack     app_iter = application(self.environ, start_response)
2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/webob/dec.py", line 144, in __call__
2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack     return resp(environ, start_response)
2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py", line 615, in __call__
2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack     return self.app(env, start_response)
2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/webob/dec.py", line 144, in __call__
2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack     return resp(environ, start_response)
2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/webob/dec.py", line 144, in __call__
2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack     return resp(environ, start_response)
2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/Routes-1.12.3-py2.6.egg/routes/middleware.py", line 131, in __call__
2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack     response = self.app(environ, start_response)
2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/webob/dec.py", line 144, in __call__
2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack     return resp(environ, start_response)
2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/webob/dec.py", line 130, in __call__
2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack     resp = self.call_func(req, *args, **self.kwargs)
2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack   File "/usr/lib/python2.6/site-packages/webob/dec.py", line 195, in call_func
2014-04-29 16:16:49.723 2252 TRACE nova.api.openstack     return self ...
2014-04-20 20:02:27 -0500 received badge  Famous Question (source)
2014-03-31 00:55:29 -0500 received badge  Notable Question (source)
2014-03-31 00:40:29 -0500 received badge  Popular Question (source)
2014-03-13 09:11:17 -0500 commented answer WARNING keystone.common.wsgi [-] Authorization failed. Could not find user, glance. from 127.0.0.1

Thank you! I ran these commands once before; however, they must not have executed correctly. Appreciate the timely response. Jason
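
For anyone landing here later: the commands in question were presumably the standard Havana keystone setup for the glance service user, along these lines (the password and email are placeholders):

```shell
# Create the glance service user (GLANCE_PASS is a placeholder)
keystone user-create --name=glance --pass=GLANCE_PASS --email=glance@example.com

# Grant the glance user the admin role in the service tenant
keystone user-role-add --user=glance --tenant=service --role=admin
```

If these were run against the wrong endpoint or with stale admin credentials, they can appear to succeed without actually creating the user, which would produce exactly the "Could not find user, glance" warning.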

2014-03-13 08:26:46 -0500 asked a question WARNING keystone.common.wsgi [-] Authorization failed. Could not find user, glance. from 127.0.0.1

Hello,

I am following the OpenStack Installation Guide for Red Hat Enterprise Linux, CentOS, and Fedora - havana and I've stumbled across an issue. In section 4 "Verify image service installation", I am unable to upload an image into glance.

# glance image-create --name="CirrOS 0.3.1" --disk-format=qcow2 --container-format=bare --is-public=true < cirros-0.3.1-x86_64-disk.img

Request returned failure status.

Invalid OpenStack Identity credentials.

I've been searching through the logs to find some message which might point me in the right direction. The glance log shows:

DEBUG keystoneclient.middleware.auth_token [-] Marking token c5252c018c094343ac2ffeed91355f4d as unauthorized in memcache _cache_store_invalid /usr/lib/python2.6/site-packages/keystoneclient/middleware/auth_token.py:1068

So it looks like keystone doesn't like glance. When I turn on debug in keystone I get this hint:

WARNING keystone.common.wsgi [-] Authorization failed. Could not find user, glance. from 127.0.0.1

So I am now confused. The mysql database has the user:

# mysql -u glance -p* glance

Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 16112
Server version: 5.1.61 Source distribution

Copyright (c) 2000, 2011, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>
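
Worth noting: the session above confirms the MySQL account named glance, while the warning is about the keystone service user. A check of the keystone database itself would look something like this (database name, table layout, and credentials are assumptions based on a default Havana install):

```shell
# Query keystone's own user table for the glance service user
# (assumes the default 'keystone' database; run as a MySQL user
# with read access to it)
mysql -u root -p -e "SELECT id, name, enabled FROM keystone.user WHERE name='glance';"
```

An empty result here would mean the keystone user was never created, even though the MySQL account exists.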

Is there a config setting I've missed?

Thanks for your time!

Jason