Why is remove_export called in terminate_connection of cinder/volume/manager.py in the stable/icehouse branch?
During live migration, terminate_connection is called by the cinder-volume service. This method unexports the iSCSI target, which causes problems after live migration of a guest VM: by the time the destination node tries to log in to the iSCSI target, remove_export has already removed the exported target. The destination's iSCSI login therefore gets no response from the target, which in turn results in I/O errors on the destination node.
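The race can be illustrated with a toy simulation (all names here are made up for illustration; this is not Cinder code). The "source" side unexports the target while the "destination" side is still trying to log in:

```python
import threading
import time

class ToyTarget(object):
    """Stand-in for an exported iSCSI target."""

    def __init__(self):
        self.exported = True
        self.lock = threading.Lock()

    def remove_export(self):
        # What remove_export effectively does: the target disappears.
        with self.lock:
            self.exported = False

    def login(self):
        # A real initiator would simply get no response; here the
        # login just fails once the export is gone.
        with self.lock:
            return self.exported

def migrate(unexport_delay):
    """Return True if the destination login wins the race."""
    target = ToyTarget()
    result = {}

    def destination_login():
        time.sleep(0.05)           # destination logs in slightly later
        result['ok'] = target.login()

    t = threading.Thread(target=destination_login)
    t.start()
    time.sleep(unexport_delay)     # source side waits, then unexports
    target.remove_export()
    t.join()
    return result['ok']

print(migrate(0.0))   # export removed first -> login fails (False)
print(migrate(0.2))   # destination logs in first -> login succeeds (True)
```

The ordering alone decides the outcome, which is why the failure only shows up when the destination node is slow to log in relative to the source-side cleanup.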
After commenting out the lines shown below, I could perform a successful live migration without I/O errors.
Below is the terminate_connection code from /cinder/cinder/volume/manager.py.
Before:
def terminate_connection(self, context, volume_id, connector, force=False):
    """Cleanup connection from host represented by connector.

    The format of connector is the same as for initialize_connection.
    """
    # NOTE(flaper87): Verify the driver is enabled
    # before going forward. The exception will be caught
    # and the volume status updated.
    utils.require_driver_initialized(self.driver)

    volume_ref = self.db.volume_get(context, volume_id)
    try:
        conn_info = self.driver.terminate_connection(volume_ref,
                                                     connector,
                                                     force=force)
        print "chandra cinder.volume.managerpy terminate_connection self.driver.terminate_connection check conn_info", conn_info
        print "chandra cinder.volume.managerpy terminate_connection self.driver.terminate_connection is not implemented yet"
        # NOTE(skolathur): If volume_type is fibre_channel, invoke
        # FCZoneManager to remove access control via FC zoning.
        if conn_info:
            vol_type = conn_info.get('driver_volume_type', None)
            mode = self.configuration.zoning_mode
            LOG.debug(_("Zoning Mode: %s"), mode)
            if vol_type == 'fibre_channel' and self.zonemanager:
                self._add_or_delete_fc_connection(conn_info, 0)
    except Exception as err:
        err_msg = (_('Unable to terminate volume connection: %(err)s')
                   % {'err': err})
        LOG.error(err_msg)
        raise exception.VolumeBackendAPIException(data=err_msg)

    try:
        LOG.debug(_("volume %s: removing export"), volume_id)
        self.driver.remove_export(context.elevated(), volume_ref)
        print "chandra cinder.volume.managerpy terminate_connection self.driver.remove_export"
        print "chandra cinder.volume.managerpy terminate_connection self.driver.remove_export is not implemented yet"
    except Exception as ex:
        LOG.exception(_("Error detaching volume %(volume)s, "
                        "due to remove export failure."),
                      {"volume": volume_id})
        raise exception.RemoveExportException(volume=volume_id, reason=ex)
After:
def terminate_connection(self, context, volume_id, connector, force=False):
    """Cleanup connection from host represented by connector.

    The format of connector is the same as for initialize_connection.
    """
    # NOTE(flaper87): Verify the driver is enabled
    # before going forward. The exception will be caught
    # and the volume status updated.
    utils.require_driver_initialized(self.driver)

    volume_ref = self.db.volume_get(context, volume_id)
    try:
        conn_info = self.driver.terminate_connection(volume_ref,
                                                     connector,
                                                     force=force)
        print "chandra cinder.volume.managerpy terminate_connection self.driver.terminate_connection check conn_info", conn_info
        print "chandra cinder.volume.managerpy terminate_connection self.driver.terminate_connection is not implemented yet"
        # NOTE(skolathur): If volume_type is fibre_channel, invoke
        # FCZoneManager to remove access control via FC zoning.
        if conn_info:
            vol_type = conn_info.get('driver_volume_type', None)
            mode = self.configuration.zoning_mode
            LOG.debug(_("Zoning Mode: %s"), mode)
            if vol_type == 'fibre_channel' and self.zonemanager:
                self._add_or_delete_fc_connection(conn_info, 0)
    except Exception as err:
        err_msg = (_('Unable to terminate volume connection: %(err)s')
                   % {'err': err})
        LOG.error(err_msg)
        raise exception.VolumeBackendAPIException(data=err_msg)

    #try:
    #    LOG.debug(_("volume %s: removing export"), volume_id)
    #    self.driver.remove_export(context.elevated(), volume_ref)
    #    print "chandra cinder.volume.managerpy terminate_connection self.driver.remove_export"
    #    print "chandra ...
I faced the same issue, and I don't understand why the target is removed. After commenting out this line the problem is gone, and I can do live migrations without problems. Thanks for the trick.