Are /dev/disk/by-id symlinks unreliable?

I am provisioning a Nova server and Cinder volumes using a Heat template.

In order to identify the Cinder volumes attached to my OpenStack server I would like to use the /dev/disk/by-id/... symlinks provided by udev. Unfortunately, these seem to be unreliable. I have the following devices attached to the system:

# find -H /sys/class/block/* -name serial | while read x; do dev=${x%/*}; echo "$dev: $(cat $x)"; done
/sys/class/block/vda: 
/sys/class/block/vdb: f78447c1-8da2-4a68-b
/sys/class/block/vdc: d85a0192-37cf-46f3-8

So there should be symlinks in /dev/disk/by-id for both vdb and vdc. However:

# ls -l /dev/disk/by-id
total 0
lrwxrwxrwx. 1 root root  9 Oct 11 16:44 virtio-d85a0192-37cf-46f3-8 -> ../../vdc

If I run udevadm trigger, the missing symlink appears:

# ls -l /dev/disk/by-id
total 0
lrwxrwxrwx. 1 root root  9 Oct 11 17:55 virtio-d85a0192-37cf-46f3-8 -> ../../vdc
lrwxrwxrwx. 1 root root  9 Oct 11 17:55 virtio-f78447c1-8da2-4a68-b -> ../../vdb

But I would like the symlinks to be present reliably at boot. Has anyone seen this behavior before?
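Until the root cause is clear, one possible workaround is to re-trigger udev and poll for the expected symlink early in boot (for example from a systemd unit or rc.local). This is only a sketch under those assumptions; the serial in the usage comment is the one from the listing above, and the retry count is arbitrary:

```shell
#!/bin/sh
# Sketch of a boot-time workaround (assumption: run as root during boot).

# wait_for_symlink PATH [TRIES]: poll until PATH exists, re-triggering
# udev for block devices between attempts.
wait_for_symlink() {
    path=$1
    tries=${2:-10}
    while [ "$tries" -gt 0 ]; do
        if [ -e "$path" ]; then
            return 0
        fi
        # Ask udev to re-process block devices, then wait for its queue to drain.
        udevadm trigger --subsystem-match=block --action=add 2>/dev/null || true
        udevadm settle 2>/dev/null || true
        sleep 1
        tries=$((tries - 1))
    done
    return 1
}

# Usage (serial taken from the listing above):
# wait_for_symlink /dev/disk/by-id/virtio-f78447c1-8da2-4a68-b 30
```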

Update

In response to SamYaple's questions:

Please provide distro and version of udev

This is on Fedora 20, which means udev comes from systemd-208.

Do you see the appropriate symlinks in /dev/disk/by-path or /dev/disk/by-uuid before running udevadm trigger? (UUID obviously only if you have an FS on it)

There is no filesystem on these volumes (they're fresh Cinder volumes), so no UUID. My system does not have a /dev/disk/by-path directory. That said, there are appropriate /dev/vd* device nodes for the attached volumes:

# ls -l /dev/vd*
brw-rw----. 1 root disk 252,  0 Oct 12 23:23 /dev/vda
brw-rw----. 1 root disk 252,  1 Oct 12 23:23 /dev/vda1
brw-rw----. 1 root disk 252, 16 Oct 12 23:22 /dev/vdb
brw-rw----. 1 root disk 252, 32 Oct 12 23:22 /dev/vdc

Even when some /dev/disk/by-id symlinks are missing:

# ls -l /dev/disk/by-id/*
lrwxrwxrwx. 1 root root 9 Oct 12 23:22 /dev/disk/by-id/virtio-0eb06d6e-4d4c-445b-b -> ../../vdc

Can you duplicate this? If so, can you provide instructions?

This appears fairly easy to replicate, although some sort of timing issue seems to be involved. Create an OS::Nova::Server resource in a Heat template, more than one OS::Cinder::Volume resource, and the same number of OS::Cinder::VolumeAttachment resources.

Boot things a few times.
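The replication setup can be sketched as a Heat template along these lines (the resource names, image, and flavor are placeholders, not the ones from my actual templates):

```yaml
heat_template_version: 2013-05-23

resources:
  my_server:
    type: "OS::Nova::Server"
    properties:
      image: fedora-20        # placeholder image name
      flavor: m1.small        # placeholder flavor

  my_volume_0:
    type: "OS::Cinder::Volume"
    properties:
      size: 1

  my_volume_1:
    type: "OS::Cinder::Volume"
    properties:
      size: 1

  my_attach_0:
    type: "OS::Cinder::VolumeAttachment"
    properties:
      instance_uuid: {get_resource: my_server}
      volume_id: {get_resource: my_volume_0}

  my_attach_1:
    type: "OS::Cinder::VolumeAttachment"
    properties:
      instance_uuid: {get_resource: my_server}
      volume_id: {get_resource: my_volume_1}
```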

You are welcome to see the actual templates I am using (the problem happens in instances created from the kubenode.yaml template in that repository).

Are you booting these instances with disks attached, or attaching after booting?

The attachment is handled by Heat. I have not checked the code, but based on the dependencies suggested by the OS::Cinder::VolumeAttachment resource, the attachment probably happens after the Nova server has booted. The OS::Cinder::VolumeAttachment resource looks something like this:

  my_volume_attach:
    type: "OS::Cinder::VolumeAttachment"
    properties:
      instance_uuid: {get_resource: my_server}
      volume_id: {get_resource: my_volume}

The get_resource calls create implicit dependencies on the named resource, which means the named resource will be created first.
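If the ordering ever needs to be stated explicitly rather than inferred, Heat also accepts a depends_on key on a resource; a sketch extending the snippet above (same hypothetical resource names):

```yaml
  my_volume_attach:
    type: "OS::Cinder::VolumeAttachment"
    depends_on: my_server    # explicit ordering, alongside the implicit get_resource dependency
    properties:
      instance_uuid: {get_resource: my_server}
      volume_id: {get_resource: my_volume}
```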