
Uploading files to OpenStack Object Storage (Swift) in multiple parts

asked 2013-07-05 04:09:05 -0500

Nikita Lipsky

updated 2013-07-05 04:18:44 -0500

While uploading files to Swift, we plan to split them into parts. We intend to use this for synchronizing (large) files, so that only the changed parts need to be uploaded later. But we are not sure what part size to choose. If, say, we split files into 10 MB parts, how will that affect the scalability, RAM usage, and performance of Swift, assuming a node has approximately 100 TB of storage?
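For concreteness, here is a minimal sketch of the kind of fixed-size chunking and per-part hashing this synchronization scheme implies. The 10 MB part size and MD5 hash are assumptions for illustration, not Swift requirements:

```python
import hashlib

PART_SIZE = 10 * 1024 * 1024  # assumed 10 MB parts; tune per workload


def part_digests(data: bytes, part_size: int = PART_SIZE) -> list[str]:
    """Split a blob into fixed-size parts and hash each one."""
    return [
        hashlib.md5(data[off:off + part_size]).hexdigest()
        for off in range(0, len(data), part_size)
    ]


def changed_parts(old: list[str], new: list[str]) -> list[int]:
    """Indices of parts whose digest differs from last sync (or that are new)."""
    return [
        i for i in range(len(new))
        if i >= len(old) or old[i] != new[i]
    ]
```

With a local manifest of part digests, only the parts reported by `changed_parts` would need to be re-uploaded.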


1 answer


answered 2013-07-19 05:18:04 -0500

clayg

10 MB is fine; Swift can scale to billions of objects.

Read throughput will be faster with larger chunks. Each segment read pays back-end connection overhead, so many small segments are naturally slower; also, for dynamic large objects there is a default artificial rate_limit_segments_per_second, which is set fairly high: roughly 1 req/s after the first 10 segments.
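As a rough illustration of that rate limit's effect, here is a back-of-the-envelope sketch assuming the defaults `rate_limit_after_segment = 10` and `rate_limit_segments_per_second = 1` from the proxy server configuration:

```python
def dlo_rate_limit_delay(object_size, segment_size,
                         after_segment=10, segments_per_second=1):
    """Artificial delay (seconds) added while reading a dynamic large
    object, beyond the first `after_segment` unthrottled segments."""
    segments = -(-object_size // segment_size)  # ceiling division
    throttled = max(0, segments - after_segment)
    return throttled / segments_per_second


GB = 1024 ** 3
MB = 1024 ** 2

# A 5 GB object in 10 MB segments: 512 segments, ~502 s of added delay.
print(dlo_rate_limit_delay(5 * GB, 10 * MB))

# The same object in 100 MB segments: 52 segments, ~42 s of added delay.
print(dlo_rate_limit_delay(5 * GB, 100 * MB))
```

This is only the artificial throttle; actual throughput also depends on per-segment connection overhead and the cluster itself.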

Write throughput is faster with smaller chunks. Not only can more workers upload parts of the object simultaneously, you also don't have to upload as much data to make a single in-place update to one chunk.
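A sketch of the parallel-part upload pattern described above. The `upload_part` function here is a hypothetical stand-in for a real segment PUT to Swift (e.g. via python-swiftclient's `Connection.put_object`):

```python
from concurrent.futures import ThreadPoolExecutor


def upload_part(container, name, index, data):
    """Hypothetical stand-in for a Swift segment PUT, e.g.
    conn.put_object(container, f"{name}/{index:08d}", data)."""
    return index, len(data)


def upload_in_parts(container, name, data, part_size, workers=8):
    """Split `data` into fixed-size parts and upload them concurrently.

    Returns (index, size) pairs, sorted by part index.
    """
    parts = [data[off:off + part_size]
             for off in range(0, len(data), part_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(upload_part, container, name, i, p)
                   for i, p in enumerate(parts)]
        return sorted(f.result() for f in futures)
```

After the segments are in place, a dynamic large object is assembled by uploading a zero-byte manifest object whose `X-Object-Manifest` header names the segment prefix.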

Be careful with "synchronization" and "only changed parts", though: since Swift prefers availability over strong consistency, a read-then-update workflow has some caveats (a read may return a stale copy of an object).



Thank you very much for the answer. How can we estimate how much faster or slower reads and writes will be? Are there any benchmarks available on the subject that we could use to measure this?

Nikita Lipsky ( 2013-07-22 04:05:04 -0500 )

will be helpful. Also let me know if I can close this question. Cheers!

koolhead17 ( 2013-11-23 20:51:19 -0500 )



Seen: 1,704 times

Last updated: Jul 19 '13