
cinder libgfapi access not working for Windows 2012r2 hosts

asked 2014-11-09 23:25:43 -0500

OpenStackGossin

Hi, I'm using Icehouse on Ubuntu 14.04 with a Gluster 3.5.2 storage backend. Cinder is set up to serve volumes from Gluster, and Nova instances should access them via the qemu gluster driver (libgfapi). This works for Ubuntu instances but not for Windows 2012r2 instances (I'm using the 2012r2 image from cloudbase.it). The OpenStack error message is DeviceIsBusy, so I dug a little deeper by trying this manually on the compute node where the instances are running. qemu recognizes the volume via libgfapi access:

root@compute08:~# qemu-img info gluster://10.30.30.104:24007/CinderSAN/volume-1bd9d089-714a-4981-be9b-1dc30c65903a
image: gluster://10.30.30.104:24007/CinderSAN/volume-1bd9d089-714a-4981-be9b-1dc30c65903a
file format: raw
virtual size: 1.0G (1073741824 bytes)
disk size: 0
[2014-11-10 05:06:13.054978] E [afr-common.c:4168:afr_notify] 0-CinderSAN-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.

So I created an XML definition of this volume:

root@compute08:~# cat gluster.xml 
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source protocol='gluster' name='CinderSAN/volume-1bd9d089-714a-4981-be9b-1dc30c65903a'>
    <host name='10.30.30.104' port='24007'/>
  </source>
  <target dev='vdd' bus='virtio'/>
</disk>

which attaches fine to the Ubuntu instance:

root@compute08:~# virsh attach-device fd2ffd38-4cc6-4f8e-a0d0-4d44130dc627 gluster.xml 
Device attached successfully
root@compute08:~# virsh domblklist fd2ffd38-4cc6-4f8e-a0d0-4d44130dc627
Target     Source
------------------------------------------------
vda        /var/lib/nova/instances/fd2ffd38-4cc6-4f8e-a0d0-4d44130dc627/disk
vdd        CinderSAN/volume-1bd9d089-714a-4981-be9b-1dc30c65903a
root@compute08:~# virsh detach-device fd2ffd38-4cc6-4f8e-a0d0-4d44130dc627 gluster.xml 
Device detached successfully

while on the Windows 2012r2 instance it does not work:

root@compute08:~# virsh attach-device 884ec98e-1d83-440e-ab53-87457879564b gluster.xml 
error: Failed to attach device from gluster.xml
error: operation failed: open disk image file failed

However, a simple file-backed volume attaches fine to the Windows 2012r2 instance:

root@compute08:~# cat file.xml 
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/tmp/test.img'/>
      <target dev='vde' bus='virtio'/>
    </disk>
root@compute08:~# virsh attach-device 884ec98e-1d83-440e-ab53-87457879564b file.xml 
Device attached successfully
root@compute08:~# virsh domblklist 884ec98e-1d83-440e-ab53-87457879564b
Target     Source
------------------------------------------------
vda        /var/lib/nova/instances/884ec98e-1d83-440e-ab53-87457879564b/disk
vde        /tmp/test.img
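
(For completeness: /tmp/test.img here is just a plain raw image file; something along these lines creates an equivalent one. The 1G size is an assumption.)

    # assumption: any small raw file works for this attach test
    qemu-img create -f raw /tmp/test.img 1G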

So the problem appears to be limited to Gluster libgfapi access for Windows 2012r2 instances only. For reference, the relevant configuration is sketched below; any help is greatly appreciated.
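
The setup described above boils down to roughly the following Cinder and Nova settings (a sketch rather than my exact files; the shares-file path and mount base are the usual defaults and are assumptions here):

    # /etc/cinder/cinder.conf on the cinder-volume node (paths are assumptions)
    volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
    glusterfs_shares_config = /etc/cinder/glusterfs_shares
    glusterfs_mount_point_base = /var/lib/cinder/volumes

    # /etc/cinder/glusterfs_shares
    10.30.30.104:/CinderSAN

    # /etc/nova/nova.conf on the compute nodes, so qemu is handed libgfapi URLs
    qemu_allowed_storage_drivers = gluster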


Comments

I believe you created the Gluster volume "gluster://10.30.30.104:24007/CinderSAN/volume-1bd9d089-714a-4981-be9b-1dc30c65903a" via cinder create --image-id Windows-Glance-image-id --display_name Windows-whatever 25 (size in GB). If I am correct, you have been creating a bootable Cinder volume.

dbaxps ( 2014-11-10 02:30:27 -0500 )

When you attach this Cinder volume to an Ubuntu instance, the fact that the volume is bootable is not important. Attaching the same volume to a Windows 2012r2 instance will result in both vda and vdd being bootable.

dbaxps ( 2014-11-10 02:44:17 -0500 )

My question to you: could you boot a Nova instance from the Cinder volume Windows-whatever (not from the Glance image)? What kind of networking setup are you using: ML2 & OVS with VXLAN, VLAN, or GRE?

dbaxps ( 2014-11-10 02:55:09 -0500 )

If I create a Cinder volume with Windows and boot from it, it does not work; it gives me "Instance failed block device setup" in the log. I'm using ML2 and VLAN networking. I can always get this to work by switching back to FUSE-based Gluster access, but I would like the improved performance of libgfapi.

OpenStackGossin ( 2014-11-10 06:36:08 -0500 )

I believe that it works; please see https://ask.openstack.org/en/question...
and the testing I just did on a two-node CentOS 7 RDO Juno Gluster 3.5.2 cluster.
Also see http://blog.nixpanic.net/2014_07_01_a...

dbaxps ( 2014-11-10 09:55:21 -0500 )

2 answers


answered 2014-11-11 07:54:47 -0500

OpenStackGossin

updated 2014-11-15 03:12:09 -0500

Thank you so much dbaxps. Your answer (and the answer in https://ask.openstack.org/en/question... ) is not entirely correct, since we do not have to boot from a Cinder volume to use native Gluster access, but you helped me (and inspired me) to find the solution.

I created an instance booted from a Cinder volume, as you suggested:

$ nova boot --flavor Win2012R2Server8GB --block-device source=image,id=0735e20f-6c0d-4983-b0f9-943006bdd455,dest=volume,size=32,shutdown=preserve,bootindex=0 WinCinderVolumeBoot

This failed because my Gluster SAN and network are slow:

$ grep waited nova-compute.log
2014-11-11 08:35:57.699 3485 TRACE nova.compute.manager [instance: 017dc018-310e-4267-888c-a07e41b527d6] VolumeNotCreated: Volume e1f6fc20-9ca6-408f-a80d-5997e9acf51d did not finish being created even after we waited 231 seconds or 180 attempts.

So I had to replace max_tries=180 with max_tries=1800 in nova/compute/manager.py (an ugly hack for now, since I cannot upgrade OpenStack yet, but this is fixed in Juno and backported to Icehouse according to https://bugs.launchpad.net/nova/+bug/... ).
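
The hack itself is a one-liner; a sketch for an Ubuntu 14.04 package install (the dist-packages path is an assumption, adjust it if nova lives elsewhere):

    # assumption: nova installed from Ubuntu packages under dist-packages
    sudo sed -i 's/max_tries=180/max_tries=1800/' \
        /usr/lib/python2.7/dist-packages/nova/compute/manager.py
    sudo service nova-compute restart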

The instance then ended up in task state "Spawning" forever, with the error

 $ tail -n 1 /var/log/libvirt/qemu/instance-00000566.log 
[2014-11-11 07:42:59.626291] E [glfs.c:180:create_master] 0-glfs: master xlator for CinderSAN initialization failed

and at the same time I found

$ grep DENIED /var/log/kern.log
Nov 11 08:42:59 compute08 kernel: [3850813.491645] type=1400 audit(1415691779.622:232): apparmor="DENIED" operation="file_mmap" profile="libvirt-76162a24-3e07-4f96-91ab-c01d6a2096b8" name="/usr/lib/x86_64-linux-gnu/glusterfs/3.5.2/xlator/mount/api.so" pid=77630 comm="qemu-system-x86" requested_mask="m" denied_mask="m" fsuid=151 ouid=0

And this was the solution that fixed the Windows issue as well; in other words, it was an AppArmor problem. It turned out to be an RTFM case: to enable Gluster support in qemu on Ubuntu 14.04 you have to upgrade with e.g. https://launchpad.net/~monotek/+archi... , and the instructions there specifically state (this is the step I hadn't done):

8.) edit /etc/apparmor.d/abstractions/libvirt-qemu and add:
# for glusterfs
/proc/sys/net/ipv4/ip_local_reserved_ports r,
/usr/lib/@{multiarch}/glusterfs/**.so mr,
/tmp/** rw,
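
Note that the edited abstraction only takes effect once the AppArmor profiles that include it are reloaded. Something like the following should do it (a sketch; depending on how the per-instance libvirt profiles were generated, restarting libvirt-bin or the instance may also be needed):

    # reload all loaded AppArmor profiles, or just the libvirtd profile
    sudo service apparmor reload
    sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.libvirtd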

So this now works for a Windows 2012r2 instance booted from a libgfapi gluster cinder volume:

$ virsh dumpxml 02eb8f20-5a45-4fb9-9281-cb46fa2eec27 | grep -E 'source (file|protocol)'
      <source protocol='gluster' name='CinderSAN/volume-fec70f84-2ac6-450f-b85e-04c5eb63a737'>
$ virsh attach-device 02eb8f20-5a45-4fb9-9281-cb46fa2eec27 /root/gluster.xml 
Device attached successfully
$ virsh dumpxml 02eb8f20-5a45-4fb9-9281-cb46fa2eec27 | grep -E 'source (file|protocol)'
      <source protocol='gluster' name='CinderSAN/volume-fec70f84-2ac6-450f-b85e-04c5eb63a737'>
      <source protocol='gluster' name='CinderSAN/volume-1bd9d089-714a-4981-be9b-1dc30c65903a'>
$ virsh detach-device 02eb8f20-5a45-4fb9-9281-cb46fa2eec27 /root/gluster.xml 
Device detached successfully

But it also works for a Windows 2012r2 instance booted from a file-backed disk on a fuse-based gluster mount:

$ virsh dumpxml 884ec98e-1d83-440e-ab53-87457879564b | grep -E 'source (file|protocol)'
      <source file='/var/lib/nova/instances/884ec98e-1d83-440e-ab53-87457879564b/disk'/>
$ virsh attach-device 884ec98e-1d83-440e-ab53-87457879564b /root/gluster.xml 
Device attached successfully
$ virsh dumpxml 884ec98e-1d83-440e-ab53-87457879564b | grep -E 'source (file|protocol)'
      <source file='/var/lib/nova/instances/884ec98e-1d83-440e-ab53-87457879564b/disk'/>
      <source protocol='gluster' name='CinderSAN/volume-1bd9d089-714a-4981-be9b-1dc30c65903a'>
$ virsh detach-device 884ec98e-1d83-440e-ab53-87457879564b /root/gluster.xml 
Device detached successfully

Comments

I am very glad you succeeded in sorting out AppArmor. I almost forgot about it, working with Fedora 20 (21) and CentOS 7. Congratulations! BTW, the final draft (inspired by your question) is here.

dbaxps ( 2014-11-11 08:07:25 -0500 )

answered 2014-11-10 09:44:21 -0500

dbaxps

updated 2014-11-10 14:26:43 -0500

A two-node CentOS 7 RDO Juno Gluster 3.5.2 cluster was used for this test. The file /etc/nova/nova.conf on the compute node was updated with the entry

qemu_allowed_storage_drivers = gluster

and the openstack-nova-compute service was restarted.
The replicated Gluster volume "cinder-volumes57" was updated per http://blog.nixpanic.net/2014_07_01_a... : its "Known Issues" section contains the series of gluster volume set commands needed for qemu/libgfapi integration to work seamlessly (sketched below).
A Cinder volume was then created from the Win2012r2 standard evaluation Glance image, and a Windows 2012r2 instance was deployed on the compute node.
Then, following https://ask.openstack.org/en/question... , I verified on the compute node:

[root@juno2 qemu]# cat /var/log/libvirt/qemu/instance-00000048.log | grep gluster

 /usr/libexec/qemu-kvm -name instance-00000048 -S -machine pc-i440fx-rhel7.0.0,accel=kvm,usb=off -cpu Penryn,+osxsave,+xsave,+pdcm,+xtpr,+tm2,+est,+smx,+vmx,+ds_cpl,+monitor,+dtes64,+pbe,+tm,+ht,+ss,+acpi,+ds,+vme -m 2048 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 248f8069-ddca-463c-be60-d1c299129779 -smbios type=1,manufacturer=Fedora Project,product=OpenStack Nova,version=2014.2-2.el7.centos,serial=5dff0de4-c27d-453d-85b4-b2d9af514fcd,uuid=248f8069-ddca-463c-be60-d1c299129779 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-00000048.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -no-kvm-pit-reinjection -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=gluster://192.168.1.127:24007/cinder-volumes57/volume-5b44802b-7c3c-46e7-a005-caefec984200,if=none,id=drive-virtio-disk0,format=raw,serial=5b44802b-7c3c-46e7-a005-caefec984200,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=28 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:04:0a:a3,bus=pci.0,addr=0x3 -chardev file,id=charserial0,path=/var/lib/nova/instances/248f8069-ddca-463c-be60-d1c299129779/console.log -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -device usb-tablet,id=input0 -vnc 0.0.0.0:1 -k en-us -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5

Now run:

[root@juno1 ~(keystone_boris)]# cinder list | grep 5b44802b-7c3c-46e7-a005
| 5b44802b-7c3c-46e7-a005-caefec984200 | in-use |  Win2012GLSV  |  20  |   gluster   |   true   | 248f8069-ddca-463c-be60-d1c299129779 |

[root@juno1 ~(keystone_boris)]# nova list
+--------------------------------------+--------------+-----------+------------+-------------+----------------------------------------+
| ID                                   | Name         | Status    | Task State | Power State | Networks                               |
+--------------------------------------+--------------+-----------+------------+-------------+----------------------------------------+
| da16017d-d0ec-4ad4-865f-d8cf228348f8 | UbuntuGLX001 | ACTIVE    | -          | Running     | private_boris=50.0.0.23                |
| 51a20959-0a0c-4ef6-81ec-2edeab6e3588 | VF20RX520711 | SUSPENDED | -          | Shutdown    | private_boris=50.0.0.12, 192.168.1.175 |
| 248f8069-ddca-463c-be60-d1c299129779 | Win2012SRV01 | ACTIVE    | -          | Running     | private_boris=50.0.0.24                |
+--------------------------------------+--------------+-----------+------------+-------------+----------------------------------------+

Now check:

[root@juno1 ~(keystone_boris)]# nova show 248f8069-ddca-463c-be60-d1c299129779 | grep 5b44802b-7c3c-46e7-a005-caefec984200
| os-extended-volumes:volumes_attached | [{"id": "5b44802b-7c3c-46e7-a005-caefec984200"}]         |


