
jscheponik's profile - activity

2016-02-19 00:11:20 -0600 received badge  Famous Question (source)
2016-02-19 00:11:20 -0600 received badge  Notable Question (source)
2015-09-16 15:22:54 -0600 received badge  Self-Learner (source)
2015-08-27 09:05:12 -0600 received badge  Self-Learner (source)
2015-08-27 09:04:59 -0600 received badge  Popular Question (source)
2015-08-27 09:03:56 -0600 answered a question (KILO) Horizon Dashboard throwing Ceilometer errors

Ceilometer fixed. I upgraded python-ceilometerclient from v1.0.13 to v1.0.14 (see Bug Info). It seems they merged the fix, but it hasn't made it into kilo/stable yet.
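
For reference, the fix is just a client package bump; the exact command depends on your packaging (the PyPI pin below is an assumption, not necessarily what the Kilo repos ship):

    # yum/RDO-based node
    sudo yum upgrade python-ceilometerclient
    # or pull the fixed client from PyPI
    sudo pip install 'python-ceilometerclient==1.0.14'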

2015-08-25 13:48:08 -0600 commented answer Issues with Swift Proxy Service

I created the folder that was missing and set its ownership/permissions to swift:swift.

2015-08-25 11:18:00 -0600 asked a question (KILO) Horizon Dashboard throwing Ceilometer errors

This is a vanilla build, following the standard instructions found here. I am currently running into an issue where the Horizon dashboard does not show resource data or even find metrics. From the command line, Ceilometer is working as expected:

[root@usewdcld001 api]# ceilometer meter-list
    +---------------------------------+------------+-----------+-----------------------------------------------------------------------+----------------------------------+----------------------------------+
    | Name                            | Type       | Unit      | Resource ID                                                           | User ID                          | Project ID                       |
    +---------------------------------+------------+-----------+-----------------------------------------------------------------------+----------------------------------+----------------------------------+
    | cpu                             | cumulative | ns        | 5513cc76-60ac-4b86-8f3a-6f891d4d2e0f                                  | 3d2b7c07c2034454b1f868ab7de2bde7 | 2ca2bddd4938459ea9c86f86c585fdc8 |
    | cpu                             | cumulative | ns        | 6540bf2a-b3da-4313-9cb3-5a89d3b7cfae                                  | b5f31bac38cb483e95b47fd8daf7ae2c | 3f331d90547642d6aaf67e34196d568e |
    | cpu_util                        | gauge      | %         | 5513cc76-60ac-4b86-8f3a-6f891d4d2e0f                                  | 3d2b7c07c2034454b1f868ab7de2bde7 | 2ca2bddd4938459ea9c86f86c585fdc8 |
    | cpu_util                        | gauge      | %         | 6540bf2a-b3da-4313-9cb3-5a89d3b7cfae                                  | b5f31bac38cb483e95b47fd8daf7ae2c | 3f331d90547642d6aaf67e34196d568e |
    | disk.allocation                 | gauge      | B         | 5513cc76-60ac-4b86-8f3a-6f891d4d2e0f                                  | 3d2b7c07c2034454b1f868ab7de2bde7 | 2ca2bddd4938459ea9c86f86c585fdc8 |
    | disk.capacity                   | gauge      | B         | 5513cc76-60ac-4b86-8f3a-6f891d4d2e0f                                  | 3d2b7c07c2034454b1f868ab7de2bde7 | 2ca2bddd4938459ea9c86f86c585fdc8 |
    ....

I am also able to verify that data is being collected as expected...

[root@usewdcld001 api]# ceilometer sample-show  42875416-4aae-11e5-91bf-3ca82a200ff0
+-------------+--------------------------------------------------------------------------+
| Property    | Value                                                                    |
+-------------+--------------------------------------------------------------------------+
| id          | 42875416-4aae-11e5-91bf-3ca82a200ff0                                     |
| metadata    | {"ramdisk_id": "None", "flavor.vcpus": "1", "OS-EXT-                     |
|             | AZ.availability_zone": "nova", "display_name": "testStack-server-        |
|             | t45ob46fukpb", "flavor.ram": "512", "status": "active", "ephemeral_gb":  |
|             | "0", "flavor.name": "m1.tiny", "disk_gb": "1", "kernel_id": "None",      |
|             | "image.id": "072415f2-f9c0-4df5-b709-79a844b08962", "flavor.id": "1",    |
|             | "host": "96cb7596eeb4c186b64e8b07fd1367644f9e5b98d699f436817998bc",      |
|             | "device": "['vda']", "flavor.ephemeral": "0", "image.name":              |
|             | "cirros-0.3.4-x86_64", "image_ref_url": "http://controller:8774/302dc641 |
|             | 47044c32bcc7584073342594/images/072415f2-f9c0-4df5-b709-79a844b08962",   |
|             | "image.links": "[{'href': 'http://controller:8774/302dc64147044c32bcc758 |
|             | 4073342594/images/072415f2-f9c0-4df5-b709-79a844b08962', 'rel':          |
|             | 'bookmark'}]", "flavor.disk": "1", "root_gb": "1", "name":               |
|             | "instance-00000005", "memory_mb": "512", "instance_type": "1", "vcpus":  |
|             | "1", "image_ref": "072415f2-f9c0-4df5-b709-79a844b08962",                |
|             | "flavor.links": "[{'href':                                               |
|             | 'http://controller:8774/302dc64147044c32bcc7584073342594/flavors/1',     |
|             | 'rel': 'bookmark'}]"}                                                    |
| meter       | disk.write.requests                                                      |
| project_id  | 2ca2bddd4938459ea9c86f86c585fdc8                                         |
| recorded_at | 2015-08-24T22:20:03.043000                                               |
| resource_id | 5513cc76-60ac-4b86-8f3a-6f891d4d2e0f                                     |
| source      | openstack                                                                |
| timestamp   | 2015-08-24T22:20:00                                                      |
| type        | cumulative                                                               |
| unit        | request                                                                  |
| user_id     | 3d2b7c07c2034454b1f868ab7de2bde7                                         |
| volume      | 81.0                                                                     |
+-------------+--------------------------------------------------------------------------+

[root@usewdcld001 api]# ceilometer statistics -m network.incoming.bytes -p 60
+--------+---------------------+---------------------+---------+---------+---------+---------+-------+----------+---------------------+---------------------+
| Period | Period Start        | Period End          | Max     | Min     | Avg     | Sum     | Count | Duration | Duration Start      | Duration End        |
+--------+---------------------+---------------------+---------+---------+---------+---------+-------+----------+---------------------+---------------------+
| 60     | 2015-08-24T18:29:43 | 2015-08-24T18:30:43 | 64084.0 | 21885.0 | 42984.5 | 85969.0 | 2     | 0.0      | 2015-08-24T18:30:00 | 2015-08-24T18:30:00 |
| 60     | 2015-08-24T18:39:43 | 2015-08-24T18:40:43 | 64084.0 | 21885.0 | 42984.5 | 85969.0 | 2     | 0.0      | 2015-08-24T18:40:00 | 2015-08-24T18:40:00 |
| 60     | 2015-08-24T18:49:43 | 2015-08-24T18:50:43 | 64084.0 | 21885.0 | 42984.5 | 85969.0 | 2     | 0.0      | 2015-08-24T18:50:01 | 2015-08-24T18:50:01 |
| 60     | 2015-08-24T18:59:43 | 2015-08-24T19:00:43 | 64084.0 | 21885.0 | 42984.5 | 85969.0 | 2     | 0.0      | 2015-08-24T19:00:00 | 2015-08-24T19:00:00 |
| 60     | 2015-08-24T19:09:43 | 2015-08-24T19:10:43 | 64084.0 | 21885.0 | 42984.5 | 85969.0 | 2     | 0.0      | 2015-08-24T19:10:00 | 2015-08-24T19:10:00 |
| 60     | 2015-08-24T19:19:43 | 2015-08-24T19:20:43 | 64084.0 | 21885.0 | 42984.5 | 85969.0 | 2     | 0.0      | 2015-08-24T19:20:00 | 2015-08-24T19:20:00 |
| 60     | 2015-08-24T19:29:43 | 2015-08-24T19:30:43 | 64084.0 | 21885.0 | 42984.5 | 85969.0 | 2     | 0.0      | 2015-08-24T19:30:00 | 2015-08-24T19:30:00 |
| 60     | 2015-08-24T19:39:43 | 2015-08-24T19:40:43 | 64084.0 | 21885.0 | 42984.5 | 85969.0 | 2     | 0.0      | 2015-08-24T19:40:00 | 2015-08-24T19:40:00 |
| 60     | 2015-08-24T19:49:43 | 2015-08-24T19:50:43 | 64084.0 | 21885.0 | 42984.5 | 85969.0 | 2     | 0.0      | 2015-08-24T19:50:00 | 2015-08-24T19:50:00 |
| 60     | 2015-08-24T19:59:43 | 2015-08-24T20:00:43 | 64084.0 | 21885.0 | 42984.5 | 85969.0 | 2     | 0.0      | 2015-08-24T20:00:00 | 2015-08-24T20:00:00 |
| 60     | 2015-08-24T20:09:43 ...
(more)
2014-10-13 08:48:01 -0600 received badge  Famous Question (source)
2014-10-04 21:03:54 -0600 received badge  Student (source)
2014-09-20 02:12:43 -0600 received badge  Notable Question (source)
2014-09-20 02:12:43 -0600 received badge  Famous Question (source)
2014-09-20 02:12:43 -0600 received badge  Popular Question (source)
2014-08-07 00:19:53 -0600 received badge  Famous Question (source)
2014-07-17 13:16:41 -0600 received badge  Notable Question (source)
2014-07-15 22:23:35 -0600 received badge  Famous Question (source)
2014-06-26 09:14:36 -0600 received badge  Popular Question (source)
2014-06-25 19:13:43 -0600 received badge  Famous Question (source)
2014-06-25 17:36:42 -0600 received badge  Notable Question (source)
2014-06-17 10:16:42 -0600 received badge  Notable Question (source)
2014-06-17 10:16:42 -0600 received badge  Popular Question (source)
2014-06-12 05:46:08 -0600 received badge  Supporter (source)
2014-06-12 05:46:03 -0600 received badge  Scholar (source)
2014-06-11 12:50:52 -0600 commented answer Caught error: (ProgrammingError) (1146, "Table 'cinder.volumes' doesn't exist")

Yes, it is a fresh install.

cat /etc/cinder/cinder.conf | grep -i "^sql_connection" returns nothing

Table exists:

osadmin@controller:/tmp$ mysql -u cinder -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 167
Server version: 5.5.37-0ubuntu0.14.04.1 (Ubuntu)

Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> use cinder;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show tables;
+--------------------------+
| Tables_in_cinder         |
+--------------------------+
| backups                  |
| encryption               |
| iscsi_targets            |
| migrate_version          |
| quality_of_service_specs |
| quota_classes            |
| quota_usages             |
| quotas                   |
| reservations             |
| services                 |
| snapshot_metadata        |
| snapshots                |
| transfers                |
| volume_admin_metadata    |
| volume_glance_metadata   |
| volume_metadata          |
| volume_type_extra_specs  |
| volume_types             |
| volumes ...
(more)
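
(For what it's worth, on the Icehouse-era guides the database URL lives under a [database] section rather than sql_connection, so a quick sanity check would be something like the following; the password placeholder is assumed:)

    grep -A2 "^\[database\]" /etc/cinder/cinder.conf
    # expect something along the lines of:
    #   connection = mysql://cinder:CINDER_DBPASS@controller/cinder
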
2014-06-11 09:18:08 -0600 received badge  Notable Question (source)
2014-06-11 09:14:43 -0600 asked a question Caught error: (ProgrammingError) (1146, "Table 'cinder.volumes' doesn't exist")

So I am having issues getting Cinder volumes to create. I run the command and the volume sits in 'creating' indefinitely. I see this error in cinder-api.log:

 ERROR cinder.api.middleware.fault [req-57d935ef-1268-4749-adcb-ab409ecdd72c 1a7896d255cf403a8a6ef4f06834d990 0d788298788f4128bbb1a6f3a1b39034 - - -] Caught error: (ProgrammingError) (1146, "Table 'cinder.volumes' doesn't exist") 'SELECT anon_1.volumes_created_at AS anon_1_volumes_created_at, anon_1.volumes_updated_at AS anon_1_volumes_updated_at, anon_1.volumes_deleted_at AS anon_1_volumes_deleted_at, anon_1.volumes_id AS anon_1_volumes_id, anon_1.volumes__name_id AS anon_1_volumes__name_id, anon_1.volumes_ec2_id AS anon_1_volumes_ec2_id, anon_1.volumes_user_id AS anon_1_volumes_user_id, anon_1.volumes_project_id AS anon_1_volumes_project_id, anon_1.volumes_snapshot_id AS anon_1_volumes_snapshot_id, anon_1.volumes_host AS anon_1_volumes_host, anon_1.volumes_size AS anon_1_volumes_size, anon_1.volumes_availability_zone AS anon_1_volumes_availability_zone, anon_1.volumes_instance_uuid AS anon_1_volumes_instance_uuid, anon_1.volumes_attached_host AS anon_1_volumes_attached_host, anon_1.volumes_mountpoint AS anon_1_volumes_mountpoint, anon_1.volumes_attach_time AS anon_1_volumes_attach_time, anon_1.volumes_status AS anon_1_volumes_status, anon_1.volumes_attach_status AS anon_1_volumes_attach_status, anon_1.volumes_migration_status AS anon_1_volumes_migration_status, anon_1.volumes_scheduled_at AS anon_1_volumes_scheduled_at, anon_1.volumes_launched_at AS anon_1_volumes_launched_at, anon_1.volumes_terminated_at AS anon_1_volumes_terminated_at, anon_1.volumes_display_name AS anon_1_volumes_display_name, anon_1.volumes_display_description AS anon_1_volumes_display_description, anon_1.volumes_provider_location AS anon_1_volumes_provider_location, anon_1.volumes_provider_auth AS anon_1_volumes_provider_auth, anon_1.volumes_provider_geometry AS anon_1_volumes_provider_geometry, anon_1.volumes_volume_type_id AS anon_1_volumes_volume_type_id, anon_1.volumes_source_volid AS anon_1_volumes_source_volid, anon_1.volumes_encryption_key_id AS anon_1_volumes_encryption_key_id, anon_1.volumes_deleted AS anon_1_volumes_deleted, anon_1.volumes_bootable AS anon_1_volumes_bootable, volume_admin_metadata_1.created_at AS volume_admin_metadata_1_created_at, volume_admin_metadata_1.updated_at AS volume_admin_metadata_1_updated_at, volume_admin_metadata_1.deleted_at AS volume_admin_metadata_1_deleted_at, volume_admin_metadata_1.deleted AS volume_admin_metadata_1_deleted, volume_admin_metadata_1.id AS volume_admin_metadata_1_id, volume_admin_metadata_1.`key` AS volume_admin_metadata_1_key, volume_admin_metadata_1.value AS volume_admin_metadata_1_value, volume_admin_metadata_1.volume_id AS volume_admin_metadata_1_volume_id, volume_types_1.created_at AS volume_types_1_created_at, volume_types_1.updated_at AS volume_types_1_updated_at, volume_types_1.deleted_at AS volume_types_1_deleted_at, volume_types_1.deleted AS volume_types_1_deleted, volume_types_1.id AS volume_types_1_id, volume_types_1.name AS volume_types_1_name, volume_types_1.qos_specs_id AS volume_types_1_qos_specs_id, volume_metadata_1.created_at AS volume_metadata_1_created_at, volume_metadata_1.updated_at AS volume_metadata_1_updated_at, volume_metadata_1.deleted_at AS volume_metadata_1_deleted_at, volume_metadata_1.deleted AS volume_metadata_1_deleted, volume_metadata_1.id AS volume_metadata_1_id, volume_metadata_1.`key` AS volume_metadata_1_key, volume_metadata_1.value AS volume_metadata_1_value, volume_metadata_1.volume_id AS volume_metadata_1_volume_id \nFROM (SELECT volumes.created_at AS volumes_created_at, volumes.updated_at AS volumes_updated_at, 
volumes.deleted_at AS volumes_deleted_at, volumes.id AS volumes_id, volumes._name_id AS volumes__name_id, volumes.ec2_id AS volumes_ec2_id, volumes.user_id AS volumes_user_id, volumes.project_id AS volumes_project_id, volumes.snapshot_id AS volumes_snapshot_id, volumes.host AS volumes_host, volumes.size AS volumes_size, volumes.availability_zone AS volumes_availability_zone, volumes.instance_uuid AS volumes_instance_uuid, volumes.attached_host AS volumes_attached_host, volumes.mountpoint AS volumes_mountpoint, volumes.attach_time AS volumes_attach_time, volumes.status AS volumes_status, volumes.attach_status AS volumes_attach_status, volumes.migration_status AS volumes_migration_status, volumes.scheduled_at AS volumes_scheduled_at, volumes.launched_at AS volumes_launched_at, volumes.terminated_at AS volumes_terminated_at, volumes.display_name AS volumes_display_name, volumes.display_description AS volumes_display_description, volumes.provider_location AS volumes_provider_location, volumes.provider_auth AS volumes_provider_auth, volumes.provider_geometry AS volumes_provider_geometry, volumes.volume_type_id AS volumes_volume_type_id, volumes.source_volid AS volumes_source_volid, volumes.encryption_key_id AS volumes_encryption_key_id, volumes.deleted AS volumes_deleted, volumes.bootable AS volumes_bootable \nFROM volumes \nWHERE volumes.deleted = false AND volumes.id = %s \n LIMIT %s) AS anon_1 LEFT OUTER JOIN volume_admin_metadata AS volume_admin_metadata_1 ON volume_admin_metadata_1.volume_id = anon_1.volumes_id AND volume_admin_metadata_1.deleted = false LEFT OUTER JOIN volume_types AS volume_types_1 ON anon_1.volumes_volume_type_id = volume_types_1.id AND volume_types_1.deleted = false LEFT OUTER JOIN volume_metadata AS volume_metadata_1 ON volume_metadata_1.volume_id = anon_1.volumes_id AND volume_metadata_1.deleted = false' ('6184acce-8841-44da-acca-0225b5695d9a', 1)
2014-06-11 09:49:16.744 28831 TRACE cinder.api.middleware.fault     self.errorhandler(self, exc, value)
2014-06-11 09:49:16.744 28831 TRACE cinder.api.middleware.fault   File "/usr/lib/python2.7/dist-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler ...
(more)
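
In case it helps anyone searching later: a missing cinder.volumes table generally means the schema was never populated. A minimal sketch of the fix, per the install guide (run on the controller node):

    # populate the cinder schema, then restart the services
    sudo su -s /bin/sh -c "cinder-manage db sync" cinder
    sudo service cinder-scheduler restart
    sudo service cinder-api restart
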
2014-06-11 06:35:33 -0600 asked a question Trove Install Fails

So I run the normal install commands and get the following output ( http://docs.openstack.org/trunk/insta... ):

osadmin@controller:~$ sudo apt-get install python-trove python-troveclient python-glanceclient trove-common trove-api trove-taskmanager -y
Reading package lists... Done
Building dependency tree
Reading state information... Done
python-glanceclient is already the newest version.
python-troveclient is already the newest version.
python-trove is already the newest version.
trove-api is already the newest version.
trove-common is already the newest version.
trove-taskmanager is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 13 not upgraded.

After creating the trove account per the instructions, I went to modify the trove*.conf files, but none of them exist in /etc/trove.

Am I looking in the right place? A little help would be appreciated.
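
One way to check where the packages actually put their configuration files (a quick sketch; package names as installed above):

    dpkg -L trove-common trove-api trove-taskmanager | grep -i conf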

2014-06-11 06:28:17 -0600 received badge  Enthusiast
2014-06-10 12:38:39 -0600 commented question ERROR: The server has either erred or is incapable of performing the requested operation. (HTTP 500)

Yes. The issue ended up being a typo in the controller's nova.conf file, which was causing instance creation to fail during the Neutron step.
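
(For anyone else hitting this: the exact line I had mistyped isn't recorded here, but these are the Icehouse-era neutron settings in the [DEFAULT] section of nova.conf worth double-checking; names per the install guide of the time, password placeholder assumed:)

    network_api_class = nova.network.neutronv2.api.API
    neutron_url = http://controller:9696
    neutron_auth_strategy = keystone
    neutron_admin_tenant_name = service
    neutron_admin_username = neutron
    neutron_admin_password = NEUTRON_PASS
    neutron_admin_auth_url = http://controller:35357/v2.0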

2014-06-09 19:24:09 -0600 received badge  Popular Question (source)
2014-06-07 14:58:06 -0600 asked a question ERROR: The server has either erred or is incapable of performing the requested operation. (HTTP 500)

I am getting an error when trying to create an instance through both Horizon and the command line, which I think I have boiled down to a Neutron issue:

nova --debug boot --flavor m1.tiny --image cirros-0.3.0 --nic net-id=86974ee9-16ff-4c25-8ebf-8255aa3fa694 --security-group default --key-name cloudkey demo-instance1

DEBUG (shell:777) The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-f4e78da5-006b-46d1-8a2f-e6242b07bcb7)
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/novaclient/shell.py", line 774, in main
    OpenStackComputeShell().main(map(strutils.safe_decode, sys.argv[1:]))
  File "/usr/lib/python2.7/dist-packages/novaclient/shell.py", line 710, in main
    args.func(self.cs, args)
  File "/usr/lib/python2.7/dist-packages/novaclient/v1_1/shell.py", line 433, in do_boot
    server = cs.servers.create(*boot_args, **boot_kwargs)
  File "/usr/lib/python2.7/dist-packages/novaclient/v1_1/servers.py", line 871, in create
    **boot_kwargs)
  File "/usr/lib/python2.7/dist-packages/novaclient/v1_1/servers.py", line 534, in _boot
    return_raw=return_raw, **kwargs)
  File "/usr/lib/python2.7/dist-packages/novaclient/base.py", line 152, in _create
    _resp, body = self.api.client.post(url, body=body)
  File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 286, in post
    return self._cs_request(url, 'POST', **kwargs)
  File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 260, in _cs_request
    **kwargs)
  File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 242, in _time_request
    resp, body = self.request(url, method, **kwargs)
  File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 236, in request
    raise exceptions.from_response(resp, body, url, method)
ClientException: The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-f4e78da5-006b-46d1-8a2f-e6242b07bcb7)
ERROR: The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-f4e78da5-006b-46d1-8a2f-e6242b07bcb7)

When I run the command without the --nic option, it works as expected:

nova boot --flavor m1.tiny --image cirros-0.3.0 --security-group default --key-name cloudkey demo-instance
+--------------------------------------+-----------------------------------------------------+
| Property                             | Value                                               |
+--------------------------------------+-----------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                              |
| OS-EXT-AZ:availability_zone          | nova                                                |
| OS-EXT-SRV-ATTR:host                 | -                                                   |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                   |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000007                                   |
| OS-EXT-STS:power_state               | 0                                                   |
| OS-EXT-STS:task_state                | scheduling                                          |
| OS-EXT-STS:vm_state                  | building                                            |
| OS-SRV-USG:launched_at               | -                                                   |
| OS-SRV-USG:terminated_at             | -                                                   |
| accessIPv4                           |                                                     |
| accessIPv6                           |                                                     |
| adminPass                            | 6rCpeTZY7i5n                                        |
| config_drive                         |                                                     |
| created                              | 2014-06-06T18:48:15Z                                |
| flavor                               | m1.tiny (1)                                         |
| hostId                               |                                                     |
| id                                   | e0019d22-0eab-4d7f-970b-3914c32408e4                |
| image                                | cirros-0.3.0 (11681c6e-337e-462f-b051-64613c798837) |
| key_name                             | cloudkey                                            |
| metadata                             | {}                                                  |
| name                                 | demo-instance                                       |
| os-extended-volumes:volumes_attached | []                                                  |
| progress                             | 0                                                   |
| security_groups                      | default                                             |
| status                               | BUILD                                               |
| tenant_id                            | 0d788298788f4128bbb1a6f3a1b39034                    |
| updated                              | 2014-06-06T18:48:16Z                                |
| user_id                              | 1a7896d255cf403a8a6ef4f06834d990                    |
+--------------------------------------+-----------------------------------------------------+

Here is the tail from /var/log/neutron/server.log:

tail /var/log/neutron/server.log
2014-06-06 13:00:37.980 8994 TRACE neutron.api.v2.resource     obj = obj_updater(request.context, id, **kwargs)
2014-06-06 13:00:37.980 8994 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/plugin.py", line 671, in update_port
2014-06-06 13:00:37.980 8994 TRACE neutron.api.v2.resource     port)
2014-06-06 13:00:37.980 8994 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/db/db_base_plugin_v2.py", line 1412, in update_port
2014-06-06 13:00:37.980 8994 TRACE neutron.api.v2.resource     and (changed_device_id or changed_device_owner)):
2014-06-06 13 ...
(more)
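
Since the failure only shows up when a port has to be created on that network, a rough way to narrow it down (standard neutron CLI; the net-id is the one from the boot command above):

    # confirm neutron itself can create a port on the network
    neutron port-create 86974ee9-16ff-4c25-8ebf-8255aa3fa694
    # and check the neutron credentials nova uses for that step
    grep "^neutron_" /etc/nova/nova.conf
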
2014-06-07 09:10:35 -0600 received badge  Popular Question (source)
2014-06-07 08:46:48 -0600 answered a question How to Start/Stop OpenStack services manually

sudo service xxxx stop

sudo service xxxx start

where xxxx is the name of a running service.

http://docs.openstack.org/trunk/insta... lists most of the services you will need to stop and start, in the sections covering the installation of each component.
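
For example, with the Ubuntu packages that guide installs (service names taken from the guide):

    sudo service nova-api stop
    sudo service nova-api start
    sudo service neutron-server restart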

2014-06-07 08:29:44 -0600 commented question Error: The server has either erred or is incapable of performing the requested operation. (HTTP 500)

I did search the site. Just because I am new to the boards, don't assume I didn't do my due diligence; that's extremely unprofessional.

from https://ask.openstack.org/en/question...

This setting already exists

https://ask.openstack.org/en/question...

Not using nova network

https://ask.openstack.org/en/question...

Not using havana or packstack

https://ask.openstack.org/en/question...

Not having SQL failure issues

https://ask.openstack.org/en/question...

rabbit host is correct on all compute nodes

https://ask.openstack.org/en/question...

While using neutron (no longer quantum), my settings are in the default section and everything has been rebooted.

2014-06-06 13:58:26 -0600 asked a question Error: The server has either erred or is incapable of performing the requested operation. (HTTP 500)

I am getting an error when trying to create an instance through both Horizon and the command line, which I think I have boiled down to a Neutron issue:

nova --debug boot --flavor m1.tiny --image cirros-0.3.0 --nic net-id=86974ee9-16ff-4c25-8ebf-8255aa3fa694 --security-group default --key-name cloudkey demo-instance1

DEBUG (shell:777) The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-f4e78da5-006b-46d1-8a2f-e6242b07bcb7)
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/novaclient/shell.py", line 774, in main
    OpenStackComputeShell().main(map(strutils.safe_decode, sys.argv[1:]))
  File "/usr/lib/python2.7/dist-packages/novaclient/shell.py", line 710, in main
    args.func(self.cs, args)
  File "/usr/lib/python2.7/dist-packages/novaclient/v1_1/shell.py", line 433, in do_boot
    server = cs.servers.create(*boot_args, **boot_kwargs)
  File "/usr/lib/python2.7/dist-packages/novaclient/v1_1/servers.py", line 871, in create
    **boot_kwargs)
  File "/usr/lib/python2.7/dist-packages/novaclient/v1_1/servers.py", line 534, in _boot
    return_raw=return_raw, **kwargs)
  File "/usr/lib/python2.7/dist-packages/novaclient/base.py", line 152, in _create
    _resp, body = self.api.client.post(url, body=body)
  File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 286, in post
    return self._cs_request(url, 'POST', **kwargs)
  File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 260, in _cs_request
    **kwargs)
  File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 242, in _time_request
    resp, body = self.request(url, method, **kwargs)
  File "/usr/lib/python2.7/dist-packages/novaclient/client.py", line 236, in request
    raise exceptions.from_response(resp, body, url, method)
ClientException: The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-f4e78da5-006b-46d1-8a2f-e6242b07bcb7)
ERROR: The server has either erred or is incapable of performing the requested operation. (HTTP 500) (Request-ID: req-f4e78da5-006b-46d1-8a2f-e6242b07bcb7)

When I run the command without the --nic option, it works as expected:

nova boot --flavor m1.tiny --image cirros-0.3.0 --security-group default --key-name cloudkey demo-instance
+--------------------------------------+-----------------------------------------------------+
| Property                             | Value                                               |
+--------------------------------------+-----------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                              |
| OS-EXT-AZ:availability_zone          | nova                                                |
| OS-EXT-SRV-ATTR:host                 | -                                                   |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                   |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000007                                   |
| OS-EXT-STS:power_state               | 0                                                   |
| OS-EXT-STS:task_state                | scheduling                                          |
| OS-EXT-STS:vm_state                  | building                                            |
| OS-SRV-USG:launched_at               | -                                                   |
| OS-SRV-USG:terminated_at             | -                                                   |
| accessIPv4                           |                                                     |
| accessIPv6                           |                                                     |
| adminPass                            | 6rCpeTZY7i5n                                        |
| config_drive                         |                                                     |
| created                              | 2014-06-06T18:48:15Z                                |
| flavor                               | m1.tiny (1)                                         |
| hostId                               |                                                     |
| id                                   | e0019d22-0eab-4d7f-970b-3914c32408e4                |
| image                                | cirros-0.3.0 (11681c6e-337e-462f-b051-64613c798837) |
| key_name                             | cloudkey                                            |
| metadata                             | {}                                                  |
| name                                 | demo-instance                                       |
| os-extended-volumes:volumes_attached | []                                                  |
| progress                             | 0                                                   |
| security_groups                      | default                                             |
| status                               | BUILD                                               |
| tenant_id                            | 0d788298788f4128bbb1a6f3a1b39034                    |
| updated                              | 2014-06-06T18:48:16Z                                |
| user_id                              | 1a7896d255cf403a8a6ef4f06834d990                    |
+--------------------------------------+-----------------------------------------------------+

Here is the tail from /var/log/neutron/server.log:

tail /var/log/neutron/server.log
2014-06-06 13:00:37.980 8994 TRACE neutron.api.v2.resource     obj = obj_updater(request.context, id, **kwargs)
2014-06-06 13:00:37.980 8994 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/plugins/ml2/plugin.py", line 671, in update_port
2014-06-06 13:00:37.980 8994 TRACE neutron.api.v2.resource     port)
2014-06-06 13:00:37.980 8994 TRACE neutron.api.v2.resource   File "/usr/lib/python2.7/dist-packages/neutron/db/db_base_plugin_v2.py", line 1412, in update_port
2014-06-06 13:00:37.980 8994 TRACE neutron.api.v2.resource     and (changed_device_id or changed_device_owner)):
2014-06-06 13 ...
(more)
2014-06-04 08:13:15 -0600 received badge  Self-Learner (source)
2014-06-04 08:13:15 -0600 received badge  Teacher (source)
2014-06-04 07:59:57 -0600 answered a question Issues with Swift Proxy Service

Resolved. The issue was the missing folder /home/swift, as well as permissions on that folder. I am able to start the service now.
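
For anyone hitting the same thing, the fix boiled down to something like this (ownership matching the user = swift setting in proxy-server.conf):

    sudo mkdir -p /home/swift/keystone-signing
    sudo chown -R swift:swift /home/swift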

2014-06-04 06:32:53 -0600 asked a question Issues with Swift Proxy Service

I recently did a fresh install of an OpenStack environment using the config guide for Ubuntu 14.04 LTS. I am having an issue with the swift proxy server service failing to start. Any help would be appreciated. Below is my proxy-server.conf file:

[DEFAULT]
bind_port = 8080
user = swift

[pipeline:main]
pipeline = healthcheck cache authtoken keystoneauth ceilometer proxy-server

[app:proxy-server]
use = egg:swift#proxy
allow_account_management = true
account_autocreate = true

[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = Member,admin,swiftoperator

[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory

# Delaying the auth decision is required to support token-less
# usage for anonymous referrers ('.r:*').
delay_auth_decision = true

# cache directory for signing certificate
signing_dir = /home/swift/keystone-signing

# auth_* settings refer to the Keystone server
auth_protocol = http
auth_host = controller
auth_port = 35357

# the service tenant and swift username and password created in Keystone
admin_tenant_name = service
admin_user = swift
admin_password = myPassword

[filter:cache]
use = egg:swift#memcache

[filter:catch_errors]
use = egg:swift#catch_errors

[filter:healthcheck]
use = egg:swift#healthcheck

[filter:ceilometer]
use = egg:ceilometer#swift

:~$ sudo service swift-proxy restart
stop: Unknown instance:
start: Job failed to start
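
The upstart message alone is not very informative; running the proxy in the foreground, or reading the upstart log, usually surfaces the real error (a sketch, assuming the default /etc/swift/proxy-server.conf path):

    # run the proxy directly to surface config or import errors
    sudo -u swift swift-proxy-server /etc/swift/proxy-server.conf
    # or check what upstart captured
    sudo cat /var/log/upstart/swift-proxy.log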