RDO, multi node, storage clarification questions

Hi all, I have been testing out RDO recently with pretty good success, but I have run into some confusion and a few questions I'm hoping someone can comment on.

  • I've got 4 systems, each with a 4 TB local RAID disk. My goal is to have all 4 work as a single OpenStack cluster, with one host carrying most of the management roles (plus compute) and the other 3 acting as additional compute resources.
  • I have had some success with the setup, using Neutron networking, but I'm a bit confused about storage.
  • My thought was that I might simply get Cinder installed on all nodes and then configure a large local LVM volume group, "cinder-volumes", on each host.
  • I tried to achieve this via the following stanza in my packstack answer file:

    # The IP address of the server on which to install Cinder
    CONFIG_CINDER_HOST=192.168.1.21
    CONFIG_CINDER_HOST=192.168.1.22
    CONFIG_CINDER_HOST=192.168.1.23
    CONFIG_CINDER_HOST=192.168.1.24

(However, I am not certain whether I am permitted to set CONFIG_CINDER_HOST for multiple hosts like this, or whether the RDO installer assumes a single value. It does not appear to break the installer, and if I add the nodes one at a time, they do each appear to get Cinder installed.)
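
For what it's worth: if packstack parses the answer file like a standard INI file (an assumption on my part, I have not checked its parser), then duplicate keys would collapse to a single value rather than accumulate into a list, typically with the last occurrence winning. A minimal sketch of that behaviour using Python's configparser:

    # Minimal sketch: how an INI-style parser treats duplicate keys.
    # Assumption: packstack's answer-file handling resembles configparser
    # in non-strict mode; this is unverified.
    import configparser

    answers = """
    [general]
    CONFIG_CINDER_HOST=192.168.1.21
    CONFIG_CINDER_HOST=192.168.1.22
    CONFIG_CINDER_HOST=192.168.1.23
    CONFIG_CINDER_HOST=192.168.1.24
    """

    parser = configparser.ConfigParser(strict=False)  # strict=True would raise on duplicates
    parser.optionxform = str                          # keep key case as written
    parser.read_string(answers)

    # Only one value survives: the last occurrence.
    print(parser["general"]["CONFIG_CINDER_HOST"])    # -> 192.168.1.24

So even though the installer does not complain, it may be that only the last host listed is actually being used as CONFIG_CINDER_HOST, which would explain why adding the nodes one at a time behaves differently.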

  • My baseline plan was ~100 GB for the root filesystem, with the rest going to the cinder-volumes LVM volume group.
  • But on my first pass with this config, I can't see any space beyond the ~100 GB root filesystem from the web GUI (see the check sketched just after this list).
  • I tried a second config, a single large root filesystem with no cinder-volumes LVM volume group, and that did not work well either.
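
One check I plan to do on each node is to confirm that the cinder-volumes volume group actually exists and that Cinder's LVM driver is pointed at it. A rough per-node sketch (assuming the stock /etc/cinder/cinder.conf path, the standard LVM vgs tool, and that the option is still called volume_group in this release; it may need to run as root):

    # Rough per-node check: does the VG exist, and does cinder.conf reference it?
    import configparser
    import subprocess

    CINDER_CONF = "/etc/cinder/cinder.conf"

    # interpolation=None avoids tripping over '%' characters in other options.
    conf = configparser.ConfigParser(strict=False, interpolation=None)
    conf.read(CINDER_CONF)
    # The LVM driver's volume group; Cinder defaults to "cinder-volumes" if unset.
    vg_name = conf.get("DEFAULT", "volume_group", fallback="cinder-volumes")

    # Ask LVM for the VG's size and free space.
    result = subprocess.run(
        ["vgs", "--noheadings", "--units", "g",
         "-o", "vg_name,vg_size,vg_free", vg_name],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        print(f"Volume group '{vg_name}' not found: {result.stderr.strip()}")
    else:
        print(f"cinder.conf volume_group = {vg_name}")
        print(result.stdout.strip())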

I'm wondering what people might advise: is there a 'good' way to use the storage in this hardware in a fairly balanced manner? Clearly I'm not worried about live migration or shared-storage requirements for the VMs. I just want to be able to provision VMs easily via the OpenStack API and have each node use its own storage resources.
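
For reference, the provisioning flow I'm aiming for looks roughly like the sketch below, written with the openstacksdk Python library. The cloud name, image, flavor, and network names are placeholders for whatever the environment actually provides, not values from my setup:

    # Rough sketch of the desired flow: create a volume, boot a VM, attach.
    # Assumes an "rdo" entry in clouds.yaml; names below are placeholders.
    import openstack

    conn = openstack.connect(cloud="rdo")

    # Create a 20 GB Cinder volume; the Cinder scheduler decides which
    # node's cinder-volumes VG ends up backing it.
    volume = conn.create_volume(size=20, name="demo-vol", wait=True)

    # Boot a small VM on whichever compute node has capacity.
    server = conn.create_server(
        name="demo-vm",
        image="cirros",       # placeholder image name
        flavor="m1.small",    # placeholder flavor name
        network="private",    # placeholder tenant network
        wait=True,
    )

    # Attach the volume to the new VM.
    conn.attach_volume(server, volume, wait=True)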

Any comments or pointers are greatly appreciated.

Many thanks,

Tim