
Adding space to Swift

asked 2014-05-08 06:06:05 -0500 by blazesurfer

Hi all,

My Swift cluster has run out of space. What is the best way to add more? I've added more disks, but I'm not sure I've done the rebalancing correctly.

container.builder, build version 4
262144 partitions, 1.000000 replicas, 1 regions, 1 zones, 4 devices, 0.00 balance
The minimum number of hours before a partition can be reassigned is 1
Devices:    id  region  zone      ip address  port      name weight partitions balance meta
             0     1     1       127.0.0.1  6011      sdb1   1.00      65536    0.00
             1     1     1       127.0.0.1  6011      sdc1   1.00      65536    0.00
             2     1     1       127.0.0.1  6011      sdd1   1.00      65536    0.00
             3     1     1       127.0.0.1  6011      sde1   1.00      65536    0.00

All of the .builder files look the same.

Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2        28G  2.6G   25G  10% /
udev            1.9G  140K  1.9G   1% /dev
tmpfs           1.9G     0  1.9G   0% /dev/shm
/dev/sdb1       2.0T  2.0T   22G  99% /media/swift/sdb1
/dev/sdc1       2.0T  2.0T  7.7G 100% /media/swift/sdc1
/dev/sdd1       2.0T  2.6G  2.0T   1% /media/swift/sdd1
/dev/sde1       2.0T  2.9G  2.0T   1% /media/swift/sde1

I want to get sdd and sde balanced into the ring so I can keep using the cluster and still access the data on sdc.

Any advice?

My next stage after that is to build this out into a multi-node cluster.

Thanks in advance.


Comments

Since the filesystem is XFS (this is a VMware virtual machine, dev environment), can I expand by adding more disks and growing the XFS filesystem?

blazesurfer (2014-05-08 10:00:33 -0500)

1 answer


answered 2014-05-08 10:08:54 -0500 by notmyname

Yes, you've got the right idea. Make sure replication is running, and Swift should move the data around for you.
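In practice that usually looks something like the following (a rough sketch, assuming the builder files live in /etc/swift on the node where you manage the rings and that Swift is managed with swift-init; adjust for your layout):

cd /etc/swift
swift-ring-builder account.builder rebalance
swift-ring-builder container.builder rebalance
swift-ring-builder object.builder rebalance
# copy the resulting *.ring.gz files to every node running Swift services,
# then make sure the replicators are running so data migrates onto the new drives
swift-init account-replicator start
swift-init container-replicator start
swift-init object-replicator start

Once the replicators have had time to run, usage should even out across sdb1 through sde1.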

I've got a couple of suggestions for your setup, though. First, you are using a weight of "1". There's nothing wrong with this, but many people find it simpler to use the number of GB available as the weight. For example, you've got 2 TB drives, so use a weight of 2000. This makes it easy to calculate the weight for new devices when you add them to your cluster, especially when they are different sizes.
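For example, something along these lines would do it (a sketch only; the device IDs d0-d3 come from the ring output in your question, and sdf1 is just a hypothetical new drive used to show the add syntax):

swift-ring-builder container.builder set_weight d0 2000
swift-ring-builder container.builder set_weight d1 2000
swift-ring-builder container.builder set_weight d2 2000
swift-ring-builder container.builder set_weight d3 2000
# hypothetical: adding a new 2 TB drive later with a matching weight
swift-ring-builder container.builder add r1z1-127.0.0.1:6011/sdf1 2000
swift-ring-builder container.builder rebalance

Repeat the same for account.builder and object.builder so all three rings stay consistent.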

I want to be clear, the weight is a dimensionless number that only makes sense in relation to the other weights in the cluster. See https://swiftstack.com/blog/2012/04/0... for more information on this.

Second, I notice that you are using one replica. Note that Swift can't protect against hardware failures if you only have one replica. Normal practice is to use three replicas to provide high durability and high availability of your data, even during hardware failures and capacity adjustments.
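If you do move to more replicas on this ring later, newer versions of swift-ring-builder can change the replica count in place; roughly (check that your Swift release supports set_replicas, and remember that three replicas need three times the raw disk space):

swift-ring-builder object.builder set_replicas 3
swift-ring-builder object.builder rebalance
# repeat for account.builder and container.builder, then push the new .ring.gz files out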


Comments

Hi, thanks for your response. Yes, I have built a second cluster to migrate to: three nodes with the same drive arrangement, with weights set to the drive sizes. This started as a dev test that was basically turned live, and the data written to it grew a lot quicker than expected. I'm going to have to migrate the data off this Swift cluster and onto the other one I've built, or potentially bring them together.

I can change the replicas and zones on the rings, can't I? It's just the partition count I can't change, if I'm thinking correctly.

I'm also getting sync errors in syslog that I've got to work out.

Thank you again for your time and response. Now I need to work out the best way to migrate from my Grizzly cluster to the Icehouse build I've just deployed, or whether I can add the new nodes to ...(more)

blazesurfer (2014-05-08 10:23:26 -0500)
