
Are there any plans to remove 5GB object size limit in Swift?

asked 2013-10-03 09:26:24 -0600

Mark Parker

I am embarking on a development that will use Swift as its backing store. The 5GB size limit will need working around using one of the documented methods. However I will not be delivering the solution for several months so if the object size limit is scheduled to be removed I could code assuming that.


2 answers


answered 2013-10-06 22:43:51 -0600

notmyname

The Swift constraints, including max object size, are set cluster-wide in the swift.conf config file. See the sample provided ( https://github.com/openstack/swift/blob/master/etc/swift.conf-sample#L15 ) for a description of what is settable and guidance on choosing values.
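For reference, here's a minimal sketch of the relevant section of swift.conf; the value shown is Swift's default (5 GiB plus a couple of bytes), and the comments are mine, not from the sample file:

```ini
# swift.conf -- constraints are cluster-wide and must be identical
# on every node in the cluster.
[swift-constraints]
# Largest object, in bytes, that a single PUT may create.
# Default: 5368709122 (~5 GiB). Segmented "large objects" are
# assembled from pieces that each stay under this limit.
max_file_size = 5368709122
```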

I'd recommend keeping the limit at the default of 5GB and using the various large object features ( http://docs.openstack.org/developer/swift/overview_large_objects.html ) for storing larger objects. There are a couple of good reasons for keeping the max object size at 5GB.

  1. Objects are uploaded atomically, so if you raised the limit to e.g. 4TB, your clients would need to keep a single upload connection stable for the entire transfer. It would be a shame for a client to upload 3.5TB, have the connection fail, and have to start over.

  2. The overall balance of fullness of the drives in your cluster is directly related to the ratio of the max object size to the drive size. That last sentence is a little complex, so let me explain: Swift balances objects across all the storage volumes so that each storage volume gets roughly the same number of objects[1]. Since objects aren't all the same size, there will be some variation in the fullness of individual drives (i.e. a 5GB object lands on one drive while a 5-byte object lands on another). As the number of objects increases, this evens out, so that all drives end up roughly equally full. But if you allow a very large max object size (e.g. 2TB), the relative fullness of drives becomes more unbalanced. Unbalanced clusters are harder to manage and do capacity planning for.

[1] For simplicity, I've not mentioned drive weights as a way to allow for heterogeneous drive sizes. Also note that the max object size can't be larger than the smallest drive in your cluster.
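To see why reason 2 holds, here's a toy simulation — my own sketch, not Swift's actual ring code; it ignores partitions and drive weights and simply drops each object on a uniformly random drive. Storing the same total volume of data, a larger max object size means fewer, lumpier placements, so per-drive usage varies more:

```python
import random

def fullness_spread(num_drives, total_data, max_object_size, seed=1):
    """Place uniform-random-sized objects on random drives until
    total_data has been stored; return (fullest - emptiest) drive load."""
    rng = random.Random(seed)
    drives = [0.0] * num_drives
    stored = 0.0
    while stored < total_data:
        size = rng.uniform(0, max_object_size)
        drives[rng.randrange(num_drives)] += size
        stored += size
    return max(drives) - min(drives)

# 20 drives, 60 TB of data in total; only the max object size differs.
small_cap = fullness_spread(num_drives=20, total_data=60000, max_object_size=5)
big_cap = fullness_spread(num_drives=20, total_data=60000, max_object_size=2000)
print(f"spread with 5 GB cap: {small_cap:.0f} GB")
print(f"spread with 2 TB cap: {big_cap:.0f} GB")
```

With a 5 GB cap the drives land within a few hundred GB of each other; with a 2 TB cap the gap between the fullest and emptiest drive is typically an order of magnitude larger or more.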


answered 2013-10-06 01:16:29 -0600

torgomatic

Short answer: no.

Longer answer: if you're running your own Swift cluster, you can configure the maximum object size. Do keep in mind, though, that each object is stored as a single file on three separate disks, so if you allow 1 TB objects and you have 3 TB disks, you're much more likely to end up with full disks than you are with a 5 GB limit. That said, if your maximum file size is not too much bigger than 5 GB, increasing the limit may be a viable option.
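A quick back-of-envelope check of the numbers above (assuming the common 3-replica policy; sizes in GB, filesystem overhead ignored):

```python
def raw_disk_usage(object_size_gb, replicas=3):
    """Total raw disk consumed cluster-wide by one object: the full
    object file is written once per replica, on separate disks."""
    return object_size_gb * replicas

def objects_per_disk(disk_size_gb, object_size_gb):
    """Upper bound on whole object files of this size fitting on one disk."""
    return disk_size_gb // object_size_gb

# One 1 TB object eats 3 TB of raw capacity across the cluster,
# and a single 3 TB disk can hold at most three such files.
print(raw_disk_usage(1000))          # 3000
print(objects_per_disk(3000, 1000))  # 3
print(objects_per_disk(3000, 5))     # 600 objects at the 5 GB limit
```

So at a 1 TB limit, a single upload can instantly claim a third of a disk, while at the 5 GB default it claims at most a sixth of a percent — which is why the fullness math gets much more forgiving at the smaller limit.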


Comments

Thanks for the two good answers. I've ticked the longer - purely on the grounds of author effort.

Mark Parker ( 2013-10-07 02:10:54 -0600 )


Stats

Seen: 4,211 times

Last updated: Oct 06 '13