
M. Piscaer's profile - activity

2020-01-21 03:40:16 -0500 received badge  Famous Question (source)
2019-07-09 06:17:27 -0500 received badge  Notable Question (source)
2019-07-07 23:47:55 -0500 received badge  Popular Question (source)
2019-07-05 06:03:08 -0500 asked a question Add service to the keystone service catalog

Hi,

I am trying to install Mistral via OpenStack-Ansible 18.1.8.

We have set horizon_keystone_multidomain_support to True.

To install Mistral, I run the following command: /usr/local/bin/openstack-ansible /opt/openstack-ansible/playbooks/os-mistral-install.yml

At TASK [os_mistral : Add service to the keystone service catalog] the playbook fails with:

openstack.exceptions.ConfigException: Cloud default was not found.
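For reference (my own assumption, not something taken from the playbook output): openstacksdk raises ConfigException: Cloud default was not found when its clouds.yaml contains no cloud entry named "default". A minimal sketch of such an entry, where the path and every value are placeholders, would be:

```yaml
# ~/.config/openstack/clouds.yaml -- path and all values below are assumptions
clouds:
  default:
    auth:
      auth_url: https://keystone.example.com:5000/v3
      username: admin
      password: SECRET
      project_name: admin
      project_domain_name: Default
      user_domain_name: Default
    region_name: RegionOne
    identity_api_version: "3"
```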

When I do openstack domain list, I get the following:

root@controller01-utility-container-3f4d1949:~# openstack domain list
+----------------------------------+---------+---------+--------------------+
| ID                               | Name    | Enabled | Description        |
+----------------------------------+---------+---------+--------------------+
| 9b365a6e9ad8483f8dd50de1b39b2afa | heat    | True    |                    |
| d142e7b3f436472ab5db39a1bf2bc15e | cgm.ag  | True    |                    |
| default                          | Default | True    | The default domain |
+----------------------------------+---------+---------+--------------------+
2019-05-20 03:01:49 -0500 answered a question Issue with creating a volume from image via cinder

On the cinder-volume and cinder-backup nodes we changed the following variable in cinder.conf: host = ceph-backend.
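The change sketched as a cinder.conf fragment (the section placement is an assumption; the `host` option lives in `[DEFAULT]`):

```ini
# /etc/cinder/cinder.conf on the cinder-volume and cinder-backup nodes
[DEFAULT]
# Shared host name so all nodes present the same cinder service identity
host = ceph-backend
```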

After that we restarted the cinder services and removed the old cinder-volume and cinder-backup service entries:

/openstack/venvs/cinder-18.1.4/bin/cinder-manage service remove cinder-backup controller1

This results in:

# openstack volume service list
+------------------+-------------------------------------------+------+---------+-------+----------------------------+
| Binary           | Host                                      | Zone | Status  | State | Updated At                 |
+------------------+-------------------------------------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller1-cinder-api-container-729fc731 | nova | enabled | up    | 2019-05-20T06:50:13.000000 |
| cinder-scheduler | controller3-cinder-api-container-d4b77c12 | nova | enabled | up    | 2019-05-20T06:50:13.000000 |
| cinder-scheduler | controller2-cinder-api-container-42579f3d | nova | enabled | up    | 2019-05-20T06:50:07.000000 |
| cinder-backup    | ceph-backend                              | nova | enabled | up    | 2019-05-20T06:50:12.000000 |
| cinder-volume    | ceph-backend@RBD                          | nova | enabled | up    | 2019-05-20T06:50:12.000000 |
+------------------+-------------------------------------------+------+---------+-------+----------------------------+
2019-05-08 04:16:24 -0500 received badge  Enthusiast
2019-05-06 01:28:58 -0500 received badge  Famous Question (source)
2019-05-03 04:30:35 -0500 commented question Issue with creating a volume from image via cinder

I used OpenStack-Ansible to deploy the cluster.

The VIP is active on HAProxy, which balances the requests across the different cinder services.

This way, if one of the cinder-volume services stops working, another one takes over.

2019-05-02 19:53:21 -0500 commented question Issue with creating a volume from image via cinder

I added the output of the openstack volume show command to the question.

2019-05-02 11:23:41 -0500 received badge  Notable Question (source)
2019-05-02 07:28:23 -0500 commented question Issue with creating a volume from image via cinder

I just updated the question

2019-05-02 06:32:05 -0500 received badge  Editor (source)
2019-05-02 05:21:50 -0500 received badge  Popular Question (source)
2019-05-01 09:14:36 -0500 asked a question Issue with creating a volume from image via cinder

Hi,

I have an issue with creating volumes from an image. Glance and Cinder both use Ceph as their backend.

I have a raw image that I uploaded via Horizon; Horizon stores this image on the Ceph cluster.

From this image I create a volume, and after that I create an image from that volume.

If this is all done on the same cinder-volume host, there is no problem.

The OpenStack cluster is HA via HAProxy and load-balanced across three controllers; the installed OpenStack version is Rocky 3.16.2.

root@controller3-utility-container-db19e661:~# openstack volume service list
+------------------+-------------------------------------------+------+---------+-------+----------------------------+
| Binary           | Host                                      | Zone | Status  | State | Updated At                 |
+------------------+-------------------------------------------+------+---------+-------+----------------------------+
| cinder-volume    | controller3@RBD                           | nova | enabled | up    | 2019-05-02T08:45:08.000000 |
| cinder-backup    | controller1                               | nova | enabled | up    | 2019-05-02T08:44:59.000000 |
| cinder-backup    | controller2                               | nova | enabled | up    | 2019-05-02T08:45:04.000000 |
| cinder-backup    | controller3                               | nova | enabled | up    | 2019-05-02T08:45:00.000000 |
| cinder-scheduler | controller1-cinder-api-container-729fc731 | nova | enabled | up    | 2019-05-02T08:44:59.000000 |
| cinder-volume    | controller1@RBD                           | nova | enabled | up    | 2019-05-02T08:44:59.000000 |
| cinder-volume    | controller2@RBD                           | nova | enabled | up    | 2019-05-02T08:44:58.000000 |
| cinder-scheduler | controller3-cinder-api-container-d4b77c12 | nova | enabled | up    | 2019-05-02T08:45:05.000000 |
| cinder-scheduler | controller2-cinder-api-container-42579f3d | nova | enabled | up    | 2019-05-02T08:45:02.000000 |
+------------------+-------------------------------------------+------+---------+-------+----------------------------+

To reproduce the issue I do the following steps:

I have uploaded an image, image-bionic, with the following ID and direct_url:

id: ceea8588-a95c-40b2-a7e2-310413725b8d
direct_url: rbd://956c320d-b32f-4851-a5e7-2e7728df33b1/images/ceea8588-a95c-40b2-a7e2-310413725b8d/snap
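As a side note on the direct_url format: it packs the Ceph cluster fsid, pool, image name, and snapshot into one URL. A small sketch (a hypothetical helper, not part of any OpenStack tool) that splits it:

```python
from urllib.parse import urlparse

def parse_rbd_url(url):
    """Split an rbd:// direct_url into (fsid, pool, image, snapshot)."""
    parsed = urlparse(url)
    # netloc holds the cluster fsid; the path holds pool/image/snapshot
    parts = [parsed.netloc] + parsed.path.strip("/").split("/")
    fsid, pool, image, snap = parts
    return fsid, pool, image, snap

fsid, pool, image, snap = parse_rbd_url(
    "rbd://956c320d-b32f-4851-a5e7-2e7728df33b1/images/"
    "ceea8588-a95c-40b2-a7e2-310413725b8d/snap"
)
```

It is this location that the cinder RBD driver compares against its own Ceph cluster when deciding whether a copy-on-write clone is possible.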

1- Stop cinder-volume on controller1 and controller3.

2- Create a volume from image-bionic and name it bionic-controller2:

# cinder create --image ceea8588-a95c-40b2-a7e2-310413725b8d --name bionic-controller2 30

This creates volume ID: 7fe4a203-13db-4369-bdfe-fd9f5481b8a9

3- Make an image from that volume and name it bionic-controller2-image:

# cinder upload-to-image --force True --container-format bare --disk-format raw 7fe4a203-13db-4369-bdfe-fd9f5481b8a9 bionic-controller2-image

This creates image ID: 2f71e414-cba5-475f-a7bb-f2c10755878c

4- Create a volume from bionic-controller2-image:

# cinder create --image 2f71e414-cba5-475f-a7bb-f2c10755878c --name bionic-controller2-image--bionic-controller2 30

5- Stop cinder-volume on controller2 and start cinder-volume on controller3.

6- Create an image from bionic-controller2; this image is stuck in status "uploading" until controller2 is back up:

# cinder upload-to-image --force True --container-format bare --disk-format raw 7fe4a203-13db-4369-bdfe-fd9f5481b8a9 bionic-controller3-image

7- Create a volume from bionic-controller2-image; it ends up in an error state:

# cinder create --image 2f71e414-cba5-475f-a7bb-f2c10755878c --name bionic-controller2-image--bionic-controller3 30

The systemctl status of cinder-volume on controller3 shows that it attempts an rbd import, but I expected it to clone the disk.

8- If I create a volume from image-bionic and then a new image from that volume, that succeeds.

In /etc/cinder/cinder.conf I have the following setting:

image_upload_use_cinder_backend = True
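One thing that may be related (an assumption on my part, not something I have verified in this cluster): for the cinder RBD driver to do a copy-on-write clone instead of an rbd import, glance also has to expose image locations, e.g.:

```ini
# /etc/glance/glance-api.conf -- assumed relevant; needed so cinder can see
# the rbd:// location and clone from it
[DEFAULT]
show_image_direct_url = True
```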

Is this related or do we have a different issue?

Edit 2:

I deleted the old machine, so I first needed to create a new volume:

root@controller3-utility-container-db19e661:~# cinder create --image ceea8588-a95c-40b2-a7e2-310413725b8d --name bionic-controller2 30
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2019-05-02T14:14:21.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | cc5692ec-fb7f-4848-b45f-943131a08254 |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | bionic-controller2                   |
| os-vol-host-attr:host          | None                                 |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | 19b2a69ef0504e858e4cccd84201a364     |
| replication_status             | None                                 |
| size                           | 30                                   |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | creating                             |
| updated_at                     | None                                 |
| user_id                        | c11d00fd2bcc40019b744d1cef81268b     |
| volume_type                    | RBD                                  |
+--------------------------------+--------------------------------------+

root@controller3-utility-container-db19e661:~# openstack volume show cc5692ec-fb7f-4848-b45f-943131a08254
+--------------------------------+--------------------------------------+
| Field                          | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | true                                 |
| consistencygroup_id            | None                                 |
| created_at                     | 2019-05-02T14:14 ...                 |
+--------------------------------+--------------------------------------+
(output truncated)