" I have read this, and it would imply that when a single drive fails on a node; that node no longer accepts writes; this would imply that a drive failure on a single node, which is designated as a single zone renders the entire zone non writable - whether than zone contains 3 drives or 100. - So, can the 'handoff node' actually potentially be the same node but a different drive ?"

You're describing Hadoop HDFS here, not Swift. In Swift, one drive failure means exactly one drive failure. Swift works with partitions: partitions are distributed across devices, and a node is just a bunch of devices. If one device fails, the single replica of each partition that was on that device is pushed to another device, and that's it. The node will not fail, the zone will not fail, and writes continue as normal.
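
To make the partition/device relationship concrete, here is a minimal Python sketch of the idea. This is not Swift's ring code: `PART_POWER`, `primaries_for()` and `nodes_for()` are made-up names, and the placement logic is a deliberately naive stand-in for the real consistent-hashing ring. It only illustrates that when one device dies, just the replicas that lived on that device get reassigned, possibly even to another drive on the same node.

```python
# Toy model: partitions map to devices; a failed device's replicas move to a handoff.
from hashlib import md5

PART_POWER = 4          # 2**4 = 16 partitions, tiny on purpose
REPLICAS = 3

# A "node" is just a bunch of devices; three nodes with two drives each.
devices = [
    {"id": 0, "node": "node1", "dev": "sdb"},
    {"id": 1, "node": "node1", "dev": "sdc"},
    {"id": 2, "node": "node2", "dev": "sdb"},
    {"id": 3, "node": "node2", "dev": "sdc"},
    {"id": 4, "node": "node3", "dev": "sdb"},
    {"id": 5, "node": "node3", "dev": "sdc"},
]

def partition_for(obj_name):
    """Hash an object name down to one of 2**PART_POWER partitions."""
    digest = int(md5(obj_name.encode()).hexdigest(), 16)
    return digest >> (128 - PART_POWER)

def primaries_for(part):
    """Deliberately naive placement: walk the device list from the partition."""
    return [devices[(part + i) % len(devices)] for i in range(REPLICAS)]

def nodes_for(part, failed_ids=()):
    """Keep healthy primaries in place; give failed ones to a handoff device."""
    primaries = primaries_for(part)
    used = {d["id"] for d in primaries if d["id"] not in failed_ids}
    placement = []
    for primary in primaries:
        if primary["id"] not in failed_ids:
            placement.append(primary)
        else:
            # First alive, unused device stands in as the handoff; it may well
            # be a different drive on the same node.
            handoff = next(d for d in devices
                           if d["id"] not in failed_ids and d["id"] not in used)
            used.add(handoff["id"])
            placement.append(handoff)
    return placement

# Fail one drive (device id 0) and count how many partition replicas move.
moved = sum(
    before["id"] != after["id"]
    for part in range(2 ** PART_POWER)
    for before, after in zip(nodes_for(part), nodes_for(part, failed_ids={0}))
)
total = REPLICAS * 2 ** PART_POWER
print(f"{moved} of {total} partition replicas reassigned after losing 1 drive")
```

Running the toy model reassigns only the handful of replicas that lived on the failed drive; every other replica, including those on the other drives of the same node, stays exactly where it was.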

"If a drive drive is failed , Swift does not work to replicate the data from that drive to another drive "

Incorrect. The object replicator will replicate the data from that drive to another drive.
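
As a rough illustration of what a replication pass does: walk the partitions held locally, compare a content hash against the copy on the destination device, and push only the partitions that differ. This is a hedged sketch, not the real replicator (which compares per-suffix hashes and hands the transfer to rsync/ssync); `hash_partition()`, `replicate_device()` and the paths in the commented example are assumptions for illustration only.

```python
import hashlib
import os
import shutil

def hash_partition(part_dir):
    """Hash the file names and sizes under one partition directory."""
    h = hashlib.md5()
    for root, _dirs, files in sorted(os.walk(part_dir)):
        for name in sorted(files):
            h.update(name.encode())
            h.update(str(os.path.getsize(os.path.join(root, name))).encode())
    return h.hexdigest()

def replicate_device(local_dev, peer_dev):
    """Push every partition whose contents differ from the peer's copy."""
    for part in sorted(os.listdir(local_dev)):
        local_part = os.path.join(local_dev, part)
        peer_part = os.path.join(peer_dev, part)
        if not os.path.isdir(local_part):
            continue
        if (not os.path.isdir(peer_part)
                or hash_partition(local_part) != hash_partition(peer_part)):
            # The real replicator hands this step to rsync/ssync; a plain
            # recursive copy stands in for it here (needs Python 3.8+).
            shutil.copytree(local_part, peer_part, dirs_exist_ok=True)
            print(f"pushed partition {part} -> {peer_dev}")

# Hypothetical example: after the disk behind /srv/node2/sdb is replaced with
# an empty one, a pass from a device that still holds the data refills it.
# replicate_device("/srv/node1/sdb/objects", "/srv/node2/sdb/objects")
```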

" That is, since a node is defined by it's IP address so, as long as you rebuild the swift install with the same IP - no ring updates are required and only modified / new data will be copied back to that node - os this the case ?"

Each node is just a bunch of devices. If all of its devices fail, replacing them all with new devices on the same IP will do the trick just fine: the ring still points at the same IP and device names, so no ring updates are needed, and replication will copy the data back onto the new devices.
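
One way to see why no ring changes are needed, using a toy representation of ring entries (the `ring_devs` list and `needs_rebalance()` helper below are illustrative assumptions, not Swift's API): the ring only records things like zone, IP, port, device name and weight, so a brand-new disk mounted under the same name on the same IP looks identical to the ring.

```python
# Toy model of what a ring entry records for a device.
ring_devs = [
    {"id": 0, "zone": 1, "ip": "10.0.0.11", "port": 6200, "device": "sdb1", "weight": 100},
    {"id": 1, "zone": 1, "ip": "10.0.0.11", "port": 6200, "device": "sdc1", "weight": 100},
    {"id": 2, "zone": 2, "ip": "10.0.0.12", "port": 6200, "device": "sdb1", "weight": 100},
]

def needs_rebalance(old_dev, new_dev):
    """The ring only cares about the identity it stores, not the physical disk."""
    keys = ("zone", "ip", "port", "device", "weight")
    return any(old_dev[k] != new_dev[k] for k in keys)

# Rebuilt node: brand-new disk, but mounted under the same name on the same IP.
replacement = {"id": 0, "zone": 1, "ip": "10.0.0.11",
               "port": 6200, "device": "sdb1", "weight": 100}
print(needs_rebalance(ring_devs[0], replacement))   # False: no ring update needed
```

Replication then notices the now-empty devices and copies the partitions they are responsible for back onto them.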