Single-container performance limitations with large object sizes

asked 2014-08-13 21:42:33 -0500

We are deploying a Swift private cloud in our labs and have come across many blogs mentioning the ~1M-objects-per-container performance limitation of the SQLite container databases. The overall deployment will hold petabytes of data with many millions of objects, and using multiple containers is not an option. Some of the blogs suggest putting the container databases on SSDs to avoid this problem. My questions are:

  1. For larger objects (2-4 MB), would the 1M-object performance issue still exist? In this case the object data has to be written to the storage node (likely on HDD), so wouldn't that dominate the container update cost? Basically, if you are already bottlenecked at the storage node, shouldn't the container bottleneck not matter?

  2. Would it still make sense to use SSDs for the container databases in this case?

  3. Are there any published performance numbers showing the impact of larger object sizes (2-4 MB) on SQLite container performance?

  4. If we increase the object size to, say, 128 MB, would that offset the performance issue? At that point, wouldn't the storage node definitely become the bottleneck? (I sketch my reasoning for questions 1 and 4 below.)
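To make the reasoning behind questions 1 and 4 concrete, here is a rough back-of-envelope sketch. All the constants in it (per-node HDD write bandwidth, sustained insert rate of one large SQLite container DB) are assumptions for illustration only, not measured Swift numbers; the point is simply to compare object ingest rate at a given object size against the container DB update rate.

    # Back-of-envelope: does the storage node or the container DB bottleneck first?
    # All constants below are illustrative assumptions, not measured Swift figures.

    HDD_WRITE_MBPS = 100.0          # assumed sustained HDD write bandwidth per node (MB/s)
    CONTAINER_UPDATES_PER_SEC = 30  # assumed sustained insert rate for one large SQLite container DB

    def objects_per_sec_disk_limited(object_size_mb: float) -> float:
        """Objects/sec one disk can ingest if write bandwidth were the only limit."""
        return HDD_WRITE_MBPS / object_size_mb

    def bottleneck(object_size_mb: float) -> str:
        disk_rate = objects_per_sec_disk_limited(object_size_mb)
        if disk_rate > CONTAINER_UPDATES_PER_SEC:
            return (f"{object_size_mb} MB objects: disk allows ~{disk_rate:.1f} obj/s, container DB "
                    f"caps at ~{CONTAINER_UPDATES_PER_SEC} obj/s -> container is the bottleneck")
        return (f"{object_size_mb} MB objects: disk allows ~{disk_rate:.1f} obj/s, below the "
                f"~{CONTAINER_UPDATES_PER_SEC} obj/s container limit -> storage node is the bottleneck")

    for size in (2, 4, 128):
        print(bottleneck(size))

With these assumed numbers, 2 MB objects can still arrive faster than a single container DB can absorb updates, while at 4 MB and certainly at 128 MB the disk becomes the limit first. Is that the right way to think about it?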

Thanks,
