Why do some of my existing instances receive the wrong Ceph monitor address to connect to after adding/removing a controller?

asked 2016-10-25 05:18:31 -0500

I deployed an OpenStack environment using Fuel that originally had 1 controller and 3 compute nodes backed by Ceph. After some time we realized the controller did not have enough resources, so we added 2 new controllers and removed the original one. Now several of my existing instances still receive the original controller's storage IP as the Ceph connection address for specific volumes:

  <source protocol='rbd' name='volumes/volume-cfdf3d28-92f9-4766-af6b-f4eba5911fca'>
    <host name='172.16.1.2' port='6789'/>

I've checked on disk and in certain portions of the database but can't seem to find where this data is stored. "172.16.1.2" is the IP address of the old controller. This libvirt XML is dynamically generated, so I would need to correct the source of the issue, whatever that is.
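For what it's worth, the monitor address in that XML is typically rendered from the volume's cached `connection_info`, which Nova stores as a JSON blob in its database (in the `block_device_mapping` table). A minimal sketch of what that blob looks like for an rbd volume and how the stale monitor list could be rewritten, assuming that layout (the `replace_monitors` helper and the replacement IPs `172.16.1.5`/`172.16.1.6` are hypothetical, for illustration only):

```python
import json

# Hypothetical cached connection_info blob, in the shape the rbd volume
# driver produces (assumed layout; field names from the rbd driver).
stale = json.dumps({
    "driver_volume_type": "rbd",
    "data": {
        "name": "volumes/volume-cfdf3d28-92f9-4766-af6b-f4eba5911fca",
        "hosts": ["172.16.1.2"],  # the old controller's storage IP
        "ports": ["6789"],
    },
})

def replace_monitors(connection_info, new_hosts, new_ports):
    """Return connection_info with the Ceph monitor list swapped out."""
    info = json.loads(connection_info)
    if info.get("driver_volume_type") == "rbd":
        info["data"]["hosts"] = list(new_hosts)
        info["data"]["ports"] = list(new_ports)
    return json.dumps(info)

# Example: point the volume at the new controllers' monitors instead.
fixed = json.loads(
    replace_monitors(stale, ["172.16.1.5", "172.16.1.6"], ["6789", "6789"])
)
print(fixed["data"]["hosts"])   # → ['172.16.1.5', '172.16.1.6']
print(fixed["data"]["name"])    # volume name is left untouched
```

Editing the database by hand is risky, of course; detaching and re-attaching the volume would normally refresh the cached `connection_info` through the regular Cinder/Nova path.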
