Question about Cinder (NFS driver) on a compute node
Hello everyone!
I'm installing Icehouse in a local lab and I have a question about Cinder with the NFS driver.
On my controller server I was able to install Cinder successfully with the NFS driver, and I see a new mount point like this:
172.16.100.254:/export/cinder/instances 200G 44M 200G 1% /var/lib/cinder/instances/e472b7e691faa518e075b47af8028395
[root@controller01-opslab ~]# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 06d86c5e-ce78-4fce-b028-8488f50cecf6 | available |   volume01   |  1   |     None    |   true   |             |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
[root@controller01-opslab ~]# ls -lh /var/lib/cinder/instances/e472b7e691faa518e075b47af8028395/
total 18M
-rw-rw-rw- 1 root root 1.0G Apr 21 16:00 volume-06d86c5e-ce78-4fce-b028-8488f50cecf6
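For context, the NFS backend on the controller is configured with settings along these lines in /etc/cinder/cinder.conf (a minimal sketch; the shares file path /etc/cinder/nfs_shares is just an example, not necessarily what you use):

[DEFAULT]
# NFS backend for cinder-volume
volume_driver = cinder.volume.drivers.nfs.NfsDriver
# file listing the NFS exports, one per line, e.g. 172.16.100.254:/export/cinder/instances
nfs_shares_config = /etc/cinder/nfs_shares
# base directory for the mounts; each share is mounted in a
# subdirectory named after a hash of the share string
nfs_mount_point_base = /var/lib/cinder/instances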
But when I add my first compute node, I expected to see the same mount point there as on the controller, and I don't.
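For reference, this is the kind of check I run on the compute node (just a sketch of the commands, plain mount/df checks, nothing OpenStack-specific):

# on the compute node: list any NFS mounts / look for the export
mount -t nfs
df -h | grep 172.16.100.254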
Is this the correct behavior?
Thanks