
During a recent update of the servers (including firmware and BIOS updates), SELinux was changed to enforcing and OpenStack was restarted. All processes were found to be active, so the servers were rebooted. Following the reboot, all OpenStack processes were once again found to be active (and stable). However, when attempting to start an instance using block storage (LVM/iSCSI using lioadm), it failed to find the block storage path under /dev/disk/by-path. Further investigation showed that PCI devices were listed under that path, but not iSCSI targets. It is possible to list the iSCSI objects using:

targetcli ls
iscsiadm -m node ...
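
As a sanity check on the iSCSI side (a sketch only; the actual target names and portals would come from the targetcli/iscsiadm output above), the following should confirm whether the sessions are actually logged in and ask the initiator to rescan them:

iscsiadm -m session -P 3        # show logged-in sessions and the sd devices attached to them
iscsiadm -m session --rescan    # rescan all logged-in sessions for new or changed LUNs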

It is also possible to see the block storage targets using ls, LVM, and device-mapper commands:

ls -l /dev/mapper
ls -l /dev/vg_cinder/*
lvs
dmsetup ls


etc.
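
Since Cinder's LVM driver names each logical volume volume-&lt;UUID&gt;, the LVs shown above can be tied back to individual Cinder volumes by name, for example (assuming vg_cinder is the volume group used by Cinder, as above):

lvs -o lv_name,lv_path,lv_size vg_cinder    # LV names correspond to the Cinder volume UUIDs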

So I should be able to recreate the entries under /dev/disk/by-path manually. However, these aren't persistent across reboots and are not being recreated by:

udevadm trigger
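
To narrow down whether the by-path rules are firing at all, it should be possible to ask udev what symlinks it would create for one of the iSCSI-backed disks and to replay the block-device rules (sdX below is a placeholder for one of the devices reported by iscsiadm -m session -P 3):

udevadm info --query=symlink --name=/dev/sdX      # symlinks udev thinks this device should have
udevadm test /sys/block/sdX 2>&1 | grep by-path   # dry-run the rules, looking for by-path link creation
udevadm trigger --action=add --subsystem-match=block
udevadm settle

If SELinux is blocking the udev rules, the denials should show up in the audit log (ausearch -m avc -ts recent).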


I suspect that the Cinder/Nova processes are somehow responsible for managing these links, and that by putting SELinux into enforcing mode I've managed to drop those links and have that change committed to a DB table, so they're no longer associated with an instance. Could someone explain how these links are managed by OpenStack, and how I might recreate them so that they persist across reboots?
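
For reference, the by-path symlinks that Nova's volume attach code looks for follow udev's naming for iSCSI devices; with the Cinder LVM/iSCSI backend they typically look like the line below (the portal address here is a placeholder, and iqn.2010-10.org.openstack is only Cinder's default target prefix):

/dev/disk/by-path/ip-192.168.0.10:3260-iscsi-iqn.2010-10.org.openstack:volume-<uuid>-lun-0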

Update: This appears to be an issue with filtering guest volume groups out of the host machine's LVM scan when the guest VG is created on an additional (Cinder) volume.

e.g.
Create a volume in OpenStack.
Attach it to an instance -> it is presented as /dev/vdb.
On the instance:
pvcreate /dev/vdb
vgcreate vg_name /dev/vdb


Then, if/when the LVM metadata gets rescanned on the host, it will find the VG UUID but not the PV UUID, so you get the sort of behaviour that is normally handled by filtering the host's LVM scan, e.g.:

filter = [ "a|^/dev/sda|", "a|^/dev/sdb|", "r|.*|"]
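
A fuller sketch of what that could look like on this host (the accepted device names are assumptions; they should be the host's own disks and the Cinder PV, never the Cinder LVs themselves) is a devices section in /etc/lvm/lvm.conf along these lines:

devices {
    # Only scan the host's real disks; reject everything else, including the
    # /dev/vg_cinder/volume-* LVs that carry guest volume groups.
    global_filter = [ "a|^/dev/sda|", "a|^/dev/sdb|", "r|.*|" ]
}

global_filter (rather than filter alone) is the setting honoured by lvmetad, so after changing it, pvscan --cache followed by vgs should show the guest VG dropping out of the host's view.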