
eblock's profile - activity

2019-12-13 08:07:25 -0600 commented question ceph how work

That makes it slower for one client, but because of the scalability it's a lot faster for many parallel clients. Each PG has a primary OSD which replicates the data to the other OSDs on write; the "ack" happens once all replicas have been written. Reads are much faster because only the primary OSD is read.

2019-12-13 08:03:45 -0600 commented question ceph how work

Read the docs for more detailed information. Basically, one client writes to multiple placement groups (PGs) at the same time, and each PG and its replicas lie on multiple OSDs (disks). So the client writes to multiple OSDs simultaneously.

2019-12-12 01:47:04 -0600 commented question How to Delete Orphaned Router Port

Just update your question to add more output. Also please look into neutron logs and paste related error messages.

2019-12-11 01:34:03 -0600 commented question How to Delete Orphaned Router Port

But the ... router show output shows multiple interfaces (interfaces_info) and one external gateway. These ports need to be removed before you can remove the router.
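
If it comes to that, a possible cleanup order could look like the sketch below (router and subnet IDs are taken from the router show output in the question; verify them first, repeat the remove subnet step for every entry in interfaces_info, and note that openstack router unset needs a reasonably recent client):

openstack router remove subnet 394798a4-7d41-443e-b2ba-551157a9115e 67d639c2-8a35-4d69-af84-cd77acab6314
openstack router unset --external-gateway 394798a4-7d41-443e-b2ba-551157a9115e
openstack router delete 394798a4-7d41-443e-b2ba-551157a9115e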

2019-12-11 01:29:56 -0600 commented question create_instance.py Delete Volume on Instance Delete

I tried a couple of changes yesterday in some of these places to change the setting as you need it, but to no avail. Maybe you should ask on the openstack mailing list.

2019-12-11 01:28:11 -0600 commented question create_instance.py Delete Volume on Instance Delete

I had to deal with some of this multiple times (e.g. bug); create_instance.py is not used anymore (as far as I could debug). They changed the create_instance workflow to launch-instance-model.service.js (and similar files) underneath /srv/www/openstack..

2019-12-10 08:57:47 -0600 commented question [ceph] Does replication in osds and pools add up?

Well, you could use rados -p <pool> get <object> <local-file> to store all those orphaned objects on a local filesystem (and rados put ... to import them back into ceph), but I don't know if ceph will be able to recover objects replayed like that; I haven't tried it. Just make sure those are really orphans.
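
For example, a minimal sketch (the pool name volumes and the list file orphans.txt are assumptions, not taken from the question):

mkdir -p ./orphan-backup
while read -r obj; do
  rados -p volumes get "$obj" "./orphan-backup/$obj"    # export the object to a local file
  # rados -p volumes put "$obj" "./orphan-backup/$obj"  # would import it back later
done < orphans.txt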

2019-12-10 08:28:20 -0600 commented question [ceph] Does replication in osds and pools add up?

Alright, sounds like there are quite a lot of orphans, right? But before you purge those objects, make sure that they really, really don't belong to any existing rbd object and aren't related to any client etc. And then double and triple check before deleting anything. ;-)

2019-12-10 05:16:09 -0600 commented question [ceph] Does replication in osds and pools add up?

Adjust the for loop to print those IDs that are not found so you get the orphans.
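
One possible adjustment, just as a sketch (it reuses the remainder file and the volumes pool from the loop in the comment below):

for i in $(cat remainder); do
  found=0
  for j in $(rbd -p volumes ls); do
    if [ $(rbd info volumes/$j | grep -c $i) -gt 0 ]; then found=1; break; fi
  done
  [ $found -eq 0 ] && echo "Orphan prefix: $i"   # no rbd image references this prefix
done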

2019-12-10 05:13:53 -0600 commented question [ceph] Does replication in osds and pools add up?

middle part:

[rbd_data.]d18c7866334873[.00...7203]

Remove all duplicates of that list and then check if an actual object exists:

for i in `cat remainder`; do for j in `rbd -p volumes ls`; do if [ $(rbd info volumes/$j | grep -c $i) -gt 0 ]; then echo "Image: $j"; break; fi; done; done

2019-12-10 05:10:11 -0600 commented question [ceph] Does replication in osds and pools add up?

No, not for rbd objects. I believe there's only an orphan-scan for rados-gateway, and something similar for cephfs. But you can check if you find an rbd object for every rbd_data prefix you have in the pool. So you create a list of all objects in the pool rados -p volumes ls and only keep the...
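
A sketch of that first step, assuming the object names look like the example in the comment above (rbd_data.<image prefix>.<offset>):

rados -p volumes ls | grep '^rbd_data\.' | cut -d. -f2 | sort -u > remainder   # unique image prefixes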

2019-12-10 04:38:16 -0600 commented question How to Delete Orphaned Router Port

Can you show the neutron logs from your attempt to delete the port via CLI? Maybe turn on debug logs first. Have you made sure that no instance is connected to that port? Clearly, the router uses that port as a gateway, so maybe you should delete all other interfaces first and then try again.
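
To turn on debug logging, a sketch (config path and service name may differ on your installation):

# set debug = true in the [DEFAULT] section of /etc/neutron/neutron.conf, then:
systemctl restart neutron-server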

2019-12-10 03:41:29 -0600 commented question [ceph] Does replication in osds and pools add up?

Of course it is possible, I wrote that a couple of comments ago ;-) It's possible to delete instances etc., but those rbd images can still exist in ceph if they had snapshots before. So you should not just check whether your openstack images etc. exist in ceph but also the other way around.

2019-12-09 04:51:43 -0600 commented question How to Delete Orphaned Router Port

It sounds like the neutron components don't communicate properly with each other. Can you paste openstack router show 394798a4-7d41-443e-b2ba-551157a9115e? Please add the output to the question (you can edit it). I took the liberty of editing your question and using code format to make it readable.

2019-12-09 04:47:05 -0600 edited question How to Delete Orphaned Router Port

Installed Stein with packstack. I am trying to delete the 'public' net / default router port and I see the following in the logs. I would like to delete the router as well, but first things first.

The error states the router does not have this interface, but I am selecting it in the GUI for deletion.

Any suggestions? I tried the CLI:

# tail -f /var/log/horizon/horizon.log

2019-12-07 02:26:07,555 54212 INFO openstack_dashboard.dashboards.project.routers.ports.tables Failed to delete interface 9983a6ab-38f0-433e-83aa-b10c7c65aeef: Router 394798a4-7d41-443e-b2ba-551157a9115e does not have an interface with id 9983a6ab-38f0-433e-83aa-b10c7c65aeef
Neutron server returns request_ids: ['req-b7e26c1c-0353-4a76-8c9c-37d83c622833']
2019-12-07 02:26:07,555 54212 WARNING horizon.tables.actions Action (u'deleted interface', u'(9983a6ab-38f0)') Failed for Router 394798a4-7d41-443e-b2ba-551157a9115e does not have an interface with id 9983a6ab-38f0-433e-83aa-b10c7c65aeef

NOW THE CLI DELETE ATTEMPT:

(neutron) port-show 9983a6ab-38f0-433e-83aa-b10c7c65aeef
+-----------------------+------------------------------------------------------------------------------------+
| Field                 | Value                                                                              |
+-----------------------+------------------------------------------------------------------------------------+
| admin_state_up        | False                                                                              |
| allowed_address_pairs |                                                                                    |
| binding:host_id       | openstack-master-burl                                                              |
| binding:profile       | {}                                                                                 |
| binding:vif_details   | {"port_filter": true}                                                              |
| binding:vif_type      | ovs                                                                                |
| binding:vnic_type     | normal                                                                             |
| created_at            | 2019-10-11T17:25:04Z                                                               |
| description           |                                                                                    |
| device_id             | 394798a4-7d41-443e-b2ba-551157a9115e                                               |
| device_owner          | compute:None                                                                       |
| extra_dhcp_opts       |                                                                                    |
| fixed_ips             | {"subnet_id": "b2bf847f-6999-4d9e-937f-3ab3a0cde7db", "ip_address": "172.24.4.20"} |
| id                    | 9983a6ab-38f0-433e-83aa-b10c7c65aeef                                               |
| mac_address           | fa:16:3e:4a:08:46                                                                  |
| name                  |                                                                                    |
| network_id            | 0ef79f7e-4d10-43a9-9aea-84651da1e082                                               |
| port_security_enabled | False                                                                              |
| project_id            |                                                                                    |
| qos_policy_id         |                                                                                    |
| resource_request      |                                                                                    |
| revision_number       | 63                                                                                 |
| security_groups       |                                                                                    |
| status                | DOWN                                                                               |
| tags                  |                                                                                    |
| tenant_id             |                                                                                    |
| updated_at            | 2019-11-26T23:25:29Z                                                               |
+-----------------------+------------------------------------------------------------------------------------+


(neutron) port-delete 9983a6ab-38f0-433e-83aa-b10c7c65aeef
Request Failed: internal server error while processing your request.
Neutron server returns request_ids: ['req-44377cf4-f8ae-4885-9a23-514661c1e56d']

EDIT: updated the original question to add the "openstack router show" output

~(keystone_admin)]# openstack router show 394798a4-7d41-443e-b2ba-551157a9115e
+-------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field                   | Value                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          |
+-------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| admin_state_up          | UP                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             |
| availability_zone_hints |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                |
| availability_zones      | nova                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           |
| created_at              | 2019-10-11T17:25:03Z                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           |
| description             |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                |
| distributed             | False                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          |
| external_gateway_info   | {"network_id": "0ef79f7e-4d10-43a9-9aea-84651da1e082", "enable_snat": true, "external_fixed_ips": [{"subnet_id": "b2bf847f-6999-4d9e-937f-3ab3a0cde7db", "ip_address": "172.24.4.20"}]}                                                                                                                                                                                                                                                                                                                                                                        |
| flavor_id               | None                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           |
| ha                      | False                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          |
| id                      | 394798a4-7d41-443e-b2ba-551157a9115e                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           |
| interfaces_info         | [{"subnet_id": "67d639c2-8a35-4d69-af84-cd77acab6314", "ip_address": "192.168.2.1", "port_id": "82f0774d-9297-418b-911f-2b1743c1a349"}, {"subnet_id": "4b10dbce-231a-4af6-a7b9-eb130849cf33", "ip_address": "192.168.122.1", "port_id": "8d09f6e8-c657-45db-9211-93f1cd36523e"}, {"subnet_id": "b2bf847f-6999-4d9e-937f-3ab3a0cde7db", "ip_address": "172.24.4.20", "port_id": "9983a6ab-38f0-433e-83aa-b10c7c65aeef"}, {"subnet_id": "0428a633-f88d-46f4-b8c8-9fdd9487c552", "ip_address": "192.168.1.1", "port_id": "9c08d89f-6b8c-415f-b5de-4a46984826a3"}] |
| location                | Munch({'project': Munch({'domain_name': None, 'domain_id': None, 'name': None, 'id': u'696f67de21604135ad2603e1ac1eeb77'}), 'cloud': '', 'region_name': 'RegionOne', 'zone': None})                                                                                                                                                                                                                                                                                                                                                                            |
| name                    | MyRouter                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       |
| project_id              | 696f67de21604135ad2603e1ac1eeb77                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               |
| revision_number         | 13                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             |
| routes                  |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                |
| status                  | ACTIVE                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         |
| tags                    |                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                |
| updated_at              | 2019-11-14T22:05:33Z                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           |
+-------------------------+-----------------------------------------------------------------------------------------------------------------------------------------

Thank you for your help. Paul

2019-12-06 03:13:29 -0600 commented question [ceph] Does replication in osds and pools add up?

...Since they're copy-on-write, their usage increases as soon as the data changes, and that leads to more than the assigned 1.8 TB. Of course some overhead has to be taken into account, too. Unfortunately, I didn't take the time to watch the usage in our cluster in the beginning; now it's too large.

2019-12-06 03:10:40 -0600 commented question [ceph] Does replication in osds and pools add up?

I don't have a definite answer for you, but the statistics in ceph (usage, availability etc.) in particular are still not really clear and differ between versions. My assumption is that although your volumes only use 1.8 TB, there are obviously snapshots present...

2019-12-05 09:08:43 -0600 commented answer nova rebuild fails when the instance has ceph snapshot

Although we're already running Ceph Nautilus (and this also happens there), our cloud still runs OpenStack Ocata, so it's hard to say if this still applies. We plan to upgrade OpenStack to a current release; I'm curious whether this has been fixed, since Ceph has become one of the most used storage backends for OpenStack.

2019-12-05 01:56:46 -0600 commented question [ceph] Does replication in osds and pools add up?

The overhead is not that big, so that's not it. But I see that your nearfull_ratio is at 67%, which is quite low; the default is 85%. Is there a reason for such a low value?

2019-12-04 08:54:48 -0600 commented question issue launching an instance from Dashboard

Please use code tags to make your question and the relevant information readable.

2019-12-04 05:45:38 -0600 commented question [ceph] Does replication in osds and pools add up?

The command rbd -p <pool> ls --long shows all relationships (glance images and their clones). Check for orphaned snapshots that are not needed anymore. If there isn't anything to clean up, make sure to expand your ceph cluster. Also check if your PG placement is equally balanced (ceph osd df).

2019-12-04 05:43:09 -0600 commented question [ceph] Does replication in osds and pools add up?

Yep, Luminous shows the net value in rados df. So your volumes pool consumes around 3 TB net at the moment, 9 TB in total; the numbers make sense. To clean up, I would suggest listing all rbd images in your pool(s) with rbd -p <pool> ls and checking whether you can map all of them to existing instances/volumes.
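
One way to do that comparison, as a sketch only (it assumes the default volume_name_template, i.e. images named volume-<UUID>, and a pool called volumes):

rbd -p volumes ls | sed -n 's/^volume-//p' | sort > rbd_ids
openstack volume list --all-projects -f value -c ID | sort > cinder_ids
comm -23 rbd_ids cinder_ids   # rbd images without a matching cinder volume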

2019-12-04 04:02:52 -0600 commented question Resizing instances is done without resizing the disk

I don't think it's a bug. There are several options to resize disks during instance creation, e.g. using cloud-init.

2019-12-04 03:58:54 -0600 commented question [ceph] Does replication in osds and pools add up?

If your cluster is nearfull (don't let it reach the full state!) try to find some clones and/or images you can remove. Using ceph with Openstack can create orphan objects within your pools, which you have to clean up manually; check that too.

2019-12-04 03:56:47 -0600 commented question [ceph] Does replication in osds and pools add up?

Which ceph version are you running? Depending on the version, the rados df output shows either the net or the gross value. My current understanding is: your volumes pool has allocated 3 TB of data (net), actually uses 1.9 TB at the moment because of CoW, and because of replication size 3 the pool requires 9 TB.

2019-12-04 02:05:26 -0600 commented question [ceph] Does replication in osds and pools add up?

The rbd du command shows you the assigned space vs. the actual usage of a pool/image. Ceph rbd objects are usually stored as copy-on-write images. So the rados df command shows the assigned space in total, which is around 3 TB (net), but currently you're actually using around 1 TB.

2019-12-03 09:18:54 -0600 commented question [ceph] Does replication in osds and pools add up?

How do you count 9 replicas? First, all your pools show replicated size 3. Then you can check the number of objects for each pool (from your rados df output): the "OBJECTS" column reflects the net number of objects, while the "CLONES" column shows that value tripled.

2019-12-02 05:42:21 -0600 commented answer Issue while creating an instance in nova (Error out with port binding failure )

Port binding errors usually occur when you create a port as the admin user, maybe even in the admin project, and then try to attach it to an instance in a different project. Try to re-create the port with openstack port create ... --project <PROJECT_ID>.

2019-11-22 02:48:15 -0600 answered a question nova rebuild fails when the instance has ceph snapshot

Alright, I got it. So if you rebuild an instance, the underlying rbd image has to be deleted to be able to reuse the same ID. You can see this for a really short time if you have watch -n 0.2 rbd info pool/image_disk running while the instance is rebuilding (an instance without a snapshot). But since it's not possible to delete an instance whose rbd image still has snapshots (via horizon or CLI), the rebuild fails and the instance is reverted to a working state.
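
A short sketch illustrating that restriction (pool and image names are just examples):

rbd snap ls volumes/instance-disk     # list the existing snapshots of the image
rbd rm volumes/instance-disk          # refuses to remove the image while snapshots exist
rbd snap purge volumes/instance-disk  # removing the snapshots first would allow the delete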

2019-11-22 02:32:51 -0600 commented question nova rebuild fails when the instance has ceph snapshot

Your description is accurate, I was able to reproduce this. I'll try to find out more.

2019-11-22 02:07:22 -0600 commented question nova rebuild fails when the instance has ceph snapshot

I haven't done much rebuilding, so my experience is limited here. But have you turned on debug logs for nova? I'd expect to see the commands it's trying to execute; maybe there's a hint at what could have gone wrong. If I have the time, I'll try to reproduce that.

2019-11-22 02:00:59 -0600 commented question Can I remove a project from a user if the project no longer exists?

I had the same issue a while ago, I had to change the respective entries in the keystone database. I'd be happy to hear if there are easier ways. But as always: take a backup before changing the database and be really careful!

2019-10-30 03:47:13 -0600 commented question Resizing instances is done without resizing the disk

and only after that you can run pvresize and lvresize. If this doesn't apply to your case and the underlying disk really stays the same you'll have to provide more details and turn on debug logs for nova.

2019-10-30 03:45:05 -0600 commented question Resizing instances is done without resizing the disk

Does the new flavor offer a larger disk? Usually, you'll have to resize the physical volume inside the instance before you can use the additional space. So after the resize the underlying disk (e.g. /dev/vda) should be larger (check with lsblk); then you need to extend the partitions (fdisk) ...
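
A rough sketch of that sequence, combining this comment and its continuation above (device, VG and LV names are examples; growpart is shown instead of manual fdisk):

lsblk                                   # confirm /dev/vda grew after the resize
growpart /dev/vda 2                     # extend partition 2 (cloud-utils)
pvresize /dev/vda2                      # let LVM see the larger physical volume
lvresize -r -l +100%FREE /dev/vg0/root  # grow the LV and the filesystem on it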

2019-10-24 05:06:15 -0600 commented question Openstack Component not working

I'm totally unfamiliar with juju, but it sounds like the ceph cluster isn't built correctly. If the message is correct and the MONs have been deployed successfully, can you run ceph -s on one of the MONs and share the output? There are probably no (or not enough) OSDs yet.

2019-10-23 10:42:08 -0600 commented question OpenStack vm not getting internet

This is probably the most asked question on this forum, have you used the search field? There are lots of things to try. Have you checked the security-group settings?

2019-10-17 09:42:13 -0600 commented question Glance timeout writing chunk

It's not necessarily a timeout, have you checked the ceph cluster for logs/hints? Someone else reported the same error for filestore, but no solution yet. Do other images work? Is this image larger than others?

2019-10-11 00:55:04 -0600 commented question instances in provider network cannot reach metadata agent

For provider networks I usually use config drive because it's an external network and it isn't handled by neutron.

2019-10-02 09:05:49 -0600 commented question Changing Openstack Keystone admin endpoint

Without having tested it: have you tried openstack endpoint set --interface admin --url <URL> --service keystone <ENDPOINT_ID>?

2019-09-25 05:34:34 -0600 commented question Can't Resolve nova-api Error

You can always edit your question to change the format ;-) What do you mean by "apache2 was moving"? Has your issue been resolved? If so, could you close your question and mark it as "solved"?

2019-09-24 01:57:59 -0600 commented question Can't Resolve nova-api Error

Please don't use quotes but code tags to format your question. There's something wrong with your authentication, as the logs say: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. Double-check your URLs; is apache running?

2019-09-20 06:13:05 -0600 commented question can I temporarily stop openstack from rewriting the libvirt XML?

...but it didn't work for the named reason. Depending on the changes you need to make, this can be done using the instance's metadata or image metadata. Maybe someone else has more information on how to do that.

2019-09-20 06:10:18 -0600 commented question can I temporarily stop openstack from rewriting the libvirt XML?

If it's just for testing purposes you could test xml changes without the openstack layer. I do that sometimes by creating my own xml file (or copying an existing vm's xml), then virsh define <xml file> and virsh start <vm name> to see if my changes were successful. I also tried it within openstack
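
A sketch of that manual test flow (the domain name is only an example):

virsh dumpxml instance-00000042 > /tmp/test.xml   # start from an existing definition
# edit /tmp/test.xml as needed
virsh define /tmp/test.xml
virsh start instance-00000042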

2019-09-04 04:09:42 -0600 commented question openstack group list returns sometimes empty list with LDAP

To me it sounds like a network issue, but have you tried running openstack --debug group list and comparing the output of a successful command vs. an empty response?

2019-09-03 08:50:43 -0600 commented question unable to attach volume to instance

Have you tried to manually login to the target?
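
For example, a sketch (the portal IP is taken from the comment below, the target name is a placeholder):

iscsiadm -m discovery -t sendtargets -p 192.168.33.235
iscsiadm -m node -T iqn.2010-10.org.openstack:volume-<your_id> -p 192.168.33.235 --login
iscsiadm -m session   # the volume's target should now show up here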

2019-09-02 05:08:48 -0600 commented question unable to attach volume to instance

You can update your question with the cinder debug output instead of pasting it in a comment; a little more output could help (with regard to which commands possibly fail etc.).

2019-09-02 05:07:49 -0600 commented question unable to attach volume to instance

Are the iscsi targets on a different host than your controller (iscsi_ip_address=192.168.33.235)? Can you see the desired volume connected? compute:~ # iscsiadm -m session should show something like iqn.2010-10.org.openstack:volume-<your_id> if the volume is connected.

2019-08-29 02:07:01 -0600 commented question unable to attach volume to instance

Could it be that your volume-group is full or something like that? Can you show the output of vgs | grep cinder?