Overhead for storing each object

asked 2011-06-14 19:50:08 -0500

lardcanoe

I was curious how much space each object adds to the SQLite DBs, so I uploaded 50,000 zero-byte objects. I performed a "df -B 1" before the upload and one an hour afterwards (once it stabilized). I was surprised to see that it came to about 250 B per object. Granted, that is only about 250 GB for 1 billion objects, so it really isn't bad at all. Does this sound about right? I'm also using this to plan out the growth of the account and container services, since I want to move them to their own servers.
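The measurement above boils down to simple arithmetic. A minimal Python sketch (the before/after byte counts below are hypothetical `df -B 1` readings chosen to match the observed ~250 B figure; only the 50,000-object count and the 250 B result come from the question):

```python
def overhead_per_object(used_before, used_after, num_objects):
    """Average bytes of DB growth per uploaded object,
    from two 'df -B 1' readings taken before and after the upload."""
    return (used_after - used_before) / num_objects

# Hypothetical readings: ~12.5 MB of growth after 50,000 zero-byte objects.
per_obj = overhead_per_object(1_000_000_000, 1_012_500_000, 50_000)
print(per_obj)  # -> 250.0 bytes per object

# Extrapolate to 1 billion objects, in GB (1 GB = 1e9 bytes):
print(per_obj * 1_000_000_000 / 1e9)  # -> 250.0 GB
```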


2 answers


answered 2011-06-14 21:50:12 -0500

lardcanoe

Thanks gholt, that solved my question.


answered 2011-06-14 20:56:00 -0500

gholt

With our use case, we see about 0.12% overhead on the account and container databases compared to the actual object data. This doesn't count file system overhead of inodes, dirents, etc.

It shouldn't be too bad, but it also depends on how long your object names and content-type strings are, your average object size, etc.




Seen: 49 times

Last updated: Jun 14 '11