
Is it possible to skip releases when upgrading with Kolla?

asked 2019-09-12 07:09:52 -0500

bjoernh

Hi all,

We have a production environment deployed with Kolla-Ansible using the Queens release. We would like to upgrade to Stein, as Queens reaches the end of its maintenance period at the end of October.

I have created a testing environment in the cloud that resembles the essential parts and configuration of our production environment. I can easily deploy Queens, Rocky, or Stein using Kolla-Ansible. However, when trying to upgrade from Queens to Rocky, the results are unstable at best.

The upgrade from Rocky to Stein worked quite well, and I even managed a two-step upgrade from Queens via Rocky to Stein, although the intermediate upgrade to Rocky was only about three-quarters successful. However, I am uncertain about the completeness and quality of this upgrade. The Glance registry was removed; that's a no-brainer. Other than that, everything seems to work at first glance. Heat caused some trouble, which is why I removed it from the upgrade and then enabled and deployed it again afterwards. This would be a viable solution for the production environment, as Heat is currently not used.
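For reference, the per-release upgrade loop I used in the test environment looks roughly like this (a sketch only; the inventory path and the mapping of package versions to releases are assumptions, the official operating guide is authoritative):

pip install --upgrade kolla-ansible==7.0.0        # Rocky-series package; repeat with the 8.x series for Stein
# bump openstack_release in /etc/kolla/globals.yml to the target release
kolla-ansible -i /etc/kolla/multinode pull        # pre-fetch the new container images
kolla-ansible -i /etc/kolla/multinode upgrade     # run the rolling upgrade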

Does anyone have any experience with such upgrades? Has anyone tried to skip a release and do a direct upgrade?

Thanks and best regards, Björn


2 answers


answered 2020-03-26 07:11:24 -0500

bjoernh

Hi all,

We finally performed the upgrade from Queens to Train and actually skipped Rocky and Stein in between. However, there were a number of issues to deal with, so the short answer to this question is "No, try to avoid it".

The most difficult service in this upgrade was Cinder. Its migrations repository in the Train release starts at version 123, whereas the Queens release was at migration 117; this is done upstream to reduce the size of the migrate_repo. Because of this gap, the Cinder upgrade did not complete and had to be helped along manually by adding the missing migrations from the source code. Watch out: migration 123 originally introduces a field in the transfers table, so you cannot replace the db_init from Train with a simple no-op migration. In the end I had to add the field to that table by hand, because I hadn't noticed this change. Presumably because of this migration problem, the cinder (v1) service remained registered in the Keystone catalog, which kept Horizon from displaying instances and volumes properly. Apparently, Horizon prefers the v1 endpoint, which is no longer available in Train.
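For anyone hitting the same issue, this is roughly how the stale v1 registration can be inspected and removed with the OpenStack client (a sketch; the exact service name and the IDs in your catalog are assumptions and should be verified first):

openstack service list                                # find the leftover v1 volume service
openstack endpoint list --service <v1-service-id>    # list its endpoints
openstack endpoint delete <endpoint-id>              # remove each stale endpoint
openstack service delete <v1-service-id>             # finally remove the v1 service itself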

The other part that I haven't quite solved yet is federated authentication, which seems to be rarely used in OpenStack but is crucial in our academic environment. I will have to tackle that now.

I came across a few other issues during the upgrade, but all of them were solvable and the answers could easily be found on the web:

  1. Upgrade stopped at "Waiting for virtual IP to appear": the solution was found online.
  2. Upgrade stopped at "MariaDB update": a simple mariadb_recovery solved the problem (see the command sketch after this list).
  3. Upgrade stopped at "service-ks-register : placement | Creating services": it is not quite clear how I solved this problem in the end. I saw errors in Keystone that led me to believe SELinux may have caused the problem. In our staging environment, I also deleted and redeployed the Keystone services; in the production environment, this did not help. It was solved in the end, but left an uneasy feeling.
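For the MariaDB issue, the recovery is a built-in Kolla-Ansible action (a sketch; the inventory path is an assumption):

kolla-ansible -i /etc/kolla/multinode mariadb_recovery   # recovers a stopped or broken MariaDB/Galera cluster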

These three problems occurred consistently in the staging and production environments. Our staging environment was incomplete and did not have Cinder deployed, so the Cinder problems went unnoticed during the test run.

I hope this short report is valuable to someone. My general recommendation remains: do not skip releases.

Best regards, Björn


answered 2020-03-27 23:38:35 -0500

Devendra_Singh_Balihar

While there may be cases where it is possible to upgrade by skipping this step (i.e. by bumping only the openstack_release version), for a more comprehensive upgrade the kolla-ansible package itself should generally be upgraded first. This includes reviewing some of the configuration and inventory files. On the operator/master node, a backup of the /etc/kolla directory is advisable.

If upgrading from 5.0.0 to 6.0.0, upgrade the kolla-ansible package:

pip install --upgrade kolla-ansible==6.0.0
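The surrounding steps from the operating guide look roughly like this (a sketch; the file paths are assumptions, and the password merge is only needed when the target release introduces new passwords):

cp -r /etc/kolla /etc/kolla.backup                 # back up the existing configuration
kolla-genpwd -p /etc/kolla/passwords.yml.new       # generate passwords for the new release (hypothetical path)
kolla-mergepwd --old /etc/kolla.backup/passwords.yml --new /etc/kolla/passwords.yml.new --final /etc/kolla/passwords.yml
kolla-ansible -i /etc/kolla/multinode pull         # pre-fetch the new images
kolla-ansible -i /etc/kolla/multinode upgrade      # run the rolling upgrade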

Ref. link: https://docs.openstack.org/kolla-ansible/latest/user/operating-kolla.html


