This is a bit of a new frontier, so things are still experimental. At Rackspace we're currently running account, container, and object servers on every "storage" node, with separate nodes for our proxy servers (about a 1:12 proxy:storage node ratio). But we're now moving toward separating the account/container servers from the object servers, as the performance characteristics needed for account/container servers differ from those of object servers (account/container really need lots of IOPS).

So, sorry to be vague, but it's not an exact science yet, and it would really depend on your available hardware and usage patterns. The current thinking on ring size is devices_at_max_cluster_size * 100 partitions. So, if you knew you wanted to make a separate cluster (due to network bandwidth saturation, physical datacenter space, whatever) once you reached 10,000 devices, you'd want about 1,000,000 partitions, which rounds up to a "partition power" of 20 (2^20 = 1,048,576). If you knew that for the same cluster you'd want only 1,000 account/container devices, you'd want about 100,000 partitions for those rings, or a "partition power" of 17 (2^17 = 131,072).
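The sizing rule above can be sketched as a small helper. This is just an illustration of the arithmetic, not code from Swift itself; the function name and the default of 100 partitions per device are assumptions based on the rule of thumb described here.

```python
import math

def partition_power(max_devices, partitions_per_device=100):
    """Return the smallest "partition power" p such that 2**p covers
    max_devices * partitions_per_device partitions."""
    return math.ceil(math.log2(max_devices * partitions_per_device))

# 10,000 devices -> ~1,000,000 partitions -> partition power 20
print(partition_power(10000))
# 1,000 account/container devices -> ~100,000 partitions -> partition power 17
print(partition_power(1000))
```

Note that the ring always uses a power of two, so the target count gets rounded up: 2^20 is actually 1,048,576 partitions, not exactly 1,000,000.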

Ring partitions simply define the granularity of the data chunks that can be moved to balance space/load throughout the system. 100 partitions per device means you can "up" or "down" a device's load by roughly 1%. Fewer partitions mean less granularity but more efficient replication and inode usage. The partition count also determines the absolute maximum number of devices the ring can manage. For instance, if you only had 128 partitions, adding a 129th device would mean taking a partition from another device, leaving that device with none.
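The granularity argument can be made concrete with a quick back-of-the-envelope calculation. Again, this is a hypothetical sketch of the math in the paragraph above, not Swift's actual ring code:

```python
def load_step(partition_power, device_count):
    """Fraction of one device's load that moving a single partition
    represents: the granularity of rebalancing."""
    partitions_per_device = 2 ** partition_power / device_count
    return 1 / partitions_per_device

# 2**20 partitions spread over 10,000 devices is ~105 partitions each,
# so moving one partition shifts roughly 1% of a device's load:
print(round(load_step(20, 10000) * 100, 2), "%")

# The partition count also caps device count: with partition power 7
# (128 partitions), a 129th device would leave some device with zero.
max_devices = 2 ** 7
print(max_devices)
```

With only 100 partitions per device, each rebalance step moves about 1% of a device's data, which is the granularity trade-off described above.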