
plobo's profile - activity

2019-11-11 09:25:30 -0500 received badge  Famous Question (source)
2019-09-19 05:23:54 -0500 received badge  Popular Question (source)
2019-06-14 09:39:58 -0500 asked a question [kolla-ansible] - Connect to previous ceph deploy

Hey all,

Doing some testing and would like to ask for some pointers to further my tests. I've successfully set up a cluster using kolla-ansible with Ceph and numerous other services. My tests have gone well (barring a few initial issues), and migration from oVirt also tested well. I'm finally at the point where I'm testing and looking into DR and backups. I've read the documentation on MySQL backup and restore, and that is clear enough. I've also checked out backing up and restoring Ceph in case of a major outage. What I can't quite figure out is the following scenario:

Suppose that, for some reason, I need to redeploy my cluster, but the Ceph nodes are still fine and didn't lose any data. I'm sure the restore will point everything to the right place and I'll be able to pick up where I left off. What I still need help figuring out is how to connect my previous Ceph nodes to the new deployment.

I've done a redeploy and everything seems fine; however, none of my OSDs show up. It looks like I need to bootstrap my Ceph nodes again for kolla-ansible to configure things properly. I have no doubt it's something simple I'm missing, so I'd appreciate a pointer in the right direction.
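For anyone wanting to reproduce the check: here is roughly where I'd start looking (a sketch only, assuming the legacy kolla-ansible Ceph deployment, which identifies disks by GPT partition label; verify the labels against your release's documentation):

```shell
# Confirm whether the existing OSDs are registered with the monitors at all
docker exec ceph_mon ceph osd tree

# Kolla-ansible's Ceph role discovers disks via GPT partition labels:
# KOLLA_CEPH_OSD_BOOTSTRAP marks a disk to be (re)initialised, while
# disks already holding OSD data carry KOLLA_CEPH_DATA_* labels.
# Inspect what the redeployed nodes actually see:
lsblk -o NAME,SIZE,PARTLABEL
```

If the data partitions still carry their `KOLLA_CEPH_DATA_*` labels but the OSD containers never start, that would suggest the new deploy isn't matching them, rather than a data problem.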


2019-04-21 07:24:00 -0500 received badge  Notable Question (source)
2019-03-18 07:45:58 -0500 commented question OVS ignoring new floating IPs

However, this time I don't have the urgency I did last time, so I'll be able to dedicate more time to troubleshooting this and would therefore appreciate any pointers and guidance.

@Eduardo: Please let me know what more information I can provide to help pinpoint the issue and find a solution.

2019-03-18 07:44:24 -0500 commented question OVS ignoring new floating IPs

Hi all, it seems the issue has managed to creep back in. The neutron_openvswitch_agent container was restarted 24h ago for some reason, and now newly created floating IPs are once again being ignored. I won't be able to troubleshoot this since I'm away for another week and a half.

2019-03-05 04:00:14 -0500 commented question OVS ignoring new floating IPs

Thanks Eduardo. I'll keep you posted if this happens again and I manage to get some more info from it.

2019-03-05 03:59:34 -0500 received badge  Popular Question (source)
2019-03-01 09:24:02 -0500 answered a question OVS ignoring new floating IPs

Restarting the network and the kolla containers did nothing; however, rebooting the server solved the issue. I can only assume that, for some odd reason, ovsdb wasn't picking up the changes made…?

I really hate not knowing what caused this (especially since I fear it may happen again), so while the situation is "solved" (I had to try a reboot since some users needed access urgently), I'd still love some pointers on what might have been happening and how best to troubleshoot and resolve this in future scenarios.

Now, with a little less "pressure", I can look into this more calmly, so I'd appreciate any pointers and guidance.

Thanks, P.

2019-03-01 08:13:24 -0500 commented question OVS ignoring new floating IPs

Yeah, I'm thinking that too. Any existing floating IP works just fine, so it has to be something specific to new floating IPs and OVS.

2019-03-01 04:39:45 -0500 commented question OVS ignoring new floating IPs

I don't think that's the issue, as we're using a /24 and only have 10 IPs in use. Like I said in the original post, existing IPs work fine; I can even move them from one instance to another and they keep working. New floating IPs don't, however, so it seems something is not getting configured along the way.

2019-02-28 07:50:40 -0500 asked a question OVS ignoring new floating IPs

Hi all,

Hoping you'll be able to help shed some light or point me in the right direction.

I'm currently faced with a situation very similar to what is described in a previous question, with a minor difference: whereas in that question the OP couldn't connect to any floating IP, I am able to connect to floating IPs which already existed.

A little background

  • Cluster deployed with Kolla Ansible using Rocky.
  • Cluster comprised of:
    • 2 Compute nodes;
    • 3 Control/Network nodes;
    • 4 Storage nodes;
    • 4 VLANs configured for neutron_external, tenant_networks and 2x ceph networks

Cluster was working fine a couple of weeks ago. Instances were created without issues and floating IPs assigned and working well. Yesterday, when creating a new instance and assigning a floating IP, all seemed to work normally except I can't access the instance via floating IP. Internal networking is ok (I can ping and ssh between instances) and I can access the outside network from the instance. Just can't access via floating IP.

When troubleshooting, I noticed that the neutron_openvswitch_agent container had been restarted a week ago and seems to coincide with the time this issue may have started. Also, on the compute/control nodes, the vxlan_sys_4789 interface appears as disconnected/down.

I'd appreciate some pointers in the right direction as to how I can further troubleshoot this and correct this issue.

Like I said, existing floating IPs are working just fine. I even removed a floating IP from an existing instance and assigned it to the new instance and it works (I can access new instance using old floating IP), so this seems to be isolated to newly created floating IPs.
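To show exactly what I've been checking so far, here are the commands I've been running (illustrative only; `<router-id>` and `<fip>` are placeholders for the actual router UUID and floating IP):

```shell
# On the network node hosting the router, confirm the new floating IP
# was actually programmed as a NAT rule inside the router namespace
ip netns list | grep qrouter
ip netns exec qrouter-<router-id> iptables -t nat -S | grep <fip>

# Check agent health as Neutron sees it (the agent restart may matter)
openstack network agent list

# Since vxlan_sys_4789 shows down, inspect OVS state and tunnel flows
docker exec openvswitch_vswitchd ovs-vsctl show
docker exec openvswitch_vswitchd ovs-ofctl dump-flows br-tun | head
```

If the NAT rule for the new floating IP is missing from the router namespace while rules for the old ones are present, that would point at the L3 agent rather than OVS itself.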

Thanks in advance,

2019-01-25 10:29:18 -0500 received badge  Supporter (source)
2019-01-25 10:28:54 -0500 received badge  Notable Question (source)
2019-01-10 20:18:21 -0500 received badge  Popular Question (source)
2019-01-10 03:21:31 -0500 commented answer [kolla-ansible] Dimensions in globals.yml not applied

Thanks Eduardo! Still fresh with ansible so totally missed that.

2019-01-08 08:51:24 -0500 received badge  Organizer (source)
2019-01-08 08:33:57 -0500 asked a question [kolla-ansible] Dimensions in globals.yml not applied

Hey all. I'm currently testing a deploy using kolla-ansible. All went fine until Horizon started acting up and taking 10-15 seconds to display. After a little research I quickly concluded that it could be related to memory consumption on my control nodes, and in fact the neutron-openvswitch-agent container was using up almost all the available memory. I therefore attempted to add some restrictions using dimensions and reconfigured the services with kolla-ansible -i multinode reconfigure, but when inspecting the container I can see that no constraints were applied.

    mem_limit: "3g"
    memswap_limit: "4g"
    mem_reservation: "2g"

_globals.yml excerpt_

After changing globals.yml, I tried reconfigure, upgrade and even deploy but the containers don't seem to apply the constraints:

    # docker container inspect neutron_openvswitch_agent --format='{{json .HostConfig}}' | jq
    "Memory": 0,
    "MemoryReservation": 0,
    "MemorySwap": 0,

Most likely I'm doing something incorrectly here and may have misread how the constraints are supposed to be applied, so any pointer in the right direction will be greatly appreciated.
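In case it helps, this is what I suspect the excerpt should have looked like (a sketch only; please verify the exact variable name against your kolla-ansible release's documentation): dimensions appear to be set per service, as a dict under a `<service>_dimensions` variable, rather than as bare top-level keys in globals.yml.

```yaml
# globals.yml -- the limits go under the service-specific
# *_dimensions variable; bare top-level mem_limit keys are ignored.
neutron_openvswitch_agent_dimensions:
  mem_limit: "3g"
  memswap_limit: "4g"
  mem_reservation: "2g"
```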


2016-03-09 05:54:21 -0500 commented question Image won't boot. "...trying to set up timer as Virtual Wire IRQ..."

Were you able to get this working? I'm having a similar issue with a CentOS image.

Testing OpenStack Liberty deployed with Fuel on Virtualbox. Cirros image is just fine, but so far no luck with any CentOS image.