
Adding a new datacentre (swift object store)

asked 2015-02-26 08:16:16 -0500 by manicguitarist

updated 2015-02-27 18:10:27 -0500 by smaffulli

Running Swift v2.3.0. We currently have 1 region, 3 zones, and 3 replicas, on 3 physical machines each with 26 disks.

We are adding a new datacentre and will eventually go to 2 regions with 3 zones in each region, giving us 6 replicas - with 6 physical machines each with 26 physical disks.

At the moment we have around 6 TB of data in our store (unreplicated; 18 TB in total).

My question is - what is the best way to add the new system? Is it better to increase the replica count to 6 and add the new disks in the 2nd region all in one go - or will that result in "extended unavailability" for the data (when accessing via a proxy node in the first region)? Or should I go through the pain of very slowly increasing the replica count and the weight of the disks in the second region?

The 2nd way will take months - but I can be sure that there won't be many partitions in the wrong place.

The first way has the advantage that it shouldn't need to move any of the existing data - merely copy it to the second region - and yes, if we access local proxy nodes in the 2nd region, the data won't readily be available until it has all replicated.

I guess the real question is: if I do it "all at once", will the existing partitions be moved around or not?

I copied my ring files to a safe location and did the "all in one" add and rebalanced - the system was in balance (0.01 balance) - but before I use these rings I need to be sure that my customers' data won't just go AWOL if I push them out...
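For reference, the "all in one" trial described above can be sketched with `swift-ring-builder`. This is only an illustration: the IP, port, device name, and weight below are placeholders, and the `add` line would be repeated for every one of the new region-2 disks.

```shell
# Work on copies of the builder files, never the live ones.
mkdir -p ~/ring-test
cp /etc/swift/object.builder ~/ring-test/
cd ~/ring-test

# Raise the replica count from 3 to 6.
swift-ring-builder object.builder set_replicas 6

# Add the region-2 devices (placeholder address/device; repeat per disk).
swift-ring-builder object.builder add r2z1-192.0.2.10:6000/sdb 100

# Rebalance, then inspect the resulting balance figure before
# deciding whether to push the new rings out.
swift-ring-builder object.builder rebalance
swift-ring-builder object.builder
```

Nothing here touches the running cluster; the new `object.ring.gz` only takes effect once it is distributed to the nodes.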


2 answers


answered 2015-03-02 12:00:13 -0500 by zaitcev

updated 2015-03-02 12:06:36 -0500

You don't need to overload on replicas like that, although Kota's just-landed patch for efficient replication makes it more palatable. Remember that in most clusters the number of zones far exceeds the replication factor (e.g. 15 zones and 3 replicas).

The data is not going anywhere no matter what, but if you go all-out, you're going to degrade the performance of the cluster significantly while replication sorts it out.

_I think_ the old partitions should not move if you do the +3 thing, because the new balance is very good (I am talking about the builder before the "overload"). So it's a smart call, but I expect it will do you little good because of the amount of data you have to transfer anyway to fill out those 3 new replicas. So I suspect that dragging it out would cause less degradation, even though partitions may move due to imbalance at each step.
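The point about the transfer volume being the same either way follows from simple arithmetic, using the figures from the question (~6 TB of unique data, 3 new replicas in region 2):

```python
# Back-of-envelope estimate of cross-region transfer volume when going
# from 3 replicas (region 1 only) to 6 replicas (3 per region).
unique_data_tb = 6   # unreplicated data in the cluster (from the question)
new_replicas = 3     # replicas to be created in region 2

# Whether the change is made all at once or dragged out over months,
# every new replica is a full copy of the unique data, so the total
# volume crossing the inter-datacentre link is the same either way.
wan_transfer_tb = unique_data_tb * new_replicas
print(wan_transfer_tb)  # 18
```

Only the rate of that 18 TB transfer differs between the two approaches, which is why the all-at-once route degrades the cluster harder but not longer in aggregate.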


answered 2015-03-13 05:02:59 -0500 by manicguitarist

Here is what I did, and what happened.

For info - we need the "+3 thing" (i.e. 6 replicas in total) as the datacentres are a long way apart (100+ miles) with a lower-capacity link between them, and I want each centre to be self-sufficient.

The new datacentre isn't yet online, so I created the new rings as detailed above - i.e. in one move I upped the replica count from 3 to 6 and changed the weighting of the previously zero-weighted disks in the new centre to match the ones in the initial datacentre.

I then pushed these rings out to the new datacentre only. Yes, this would result in some data getting pushed back and forth, but not very much - as the balance is 0.01 or thereabouts.
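The selective push described here is just a matter of copying the `.ring.gz` files to the right hosts; something along these lines, with placeholder hostnames:

```shell
# Distribute the new rings to the region-2 (new datacentre) nodes only;
# region 1 keeps its old 3-replica rings for now. Hostnames are placeholders.
for host in dc2-node1 dc2-node2 dc2-node3; do
    rsync -av /etc/swift/*.ring.gz "${host}:/etc/swift/"
done
```

The region-2 object servers then start pulling their copies from region 1, while region 1's proxies and replicators continue to operate on the old 3-replica view.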

The 3 servers at the new datacentre, when running a dispersion report, gave only 50% health and couldn't find half the copies - which was expected - but the original centre reported that all was OK.
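That 50% figure falls straight out of the replica accounting - the new rings expect 6 copies of each partition, but only region 1's 3 copies exist until replication catches up:

```python
# Why the new datacentre's dispersion report starts at ~50%:
# the new rings expect 6 replicas per partition, but only the
# original 3 copies (all in region 1) exist initially.
replicas_expected = 6
replicas_present = 3   # only region 1's copies exist at first

dispersion_health_pct = 100 * replicas_present / replicas_expected
print(dispersion_health_pct)  # 50.0
```

As region 2 fills its replicas, the reported health climbs towards 100%.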

2 weeks later the new datacentre had filled its replicas and was reporting a health of around 99% (things keep getting moved to different places).

This morning I replaced the rings at the original datacentre as well, and after a bit of disk churning the whole system is now reporting 6 replicas and 100% dispersion OK.

Job done.

I can now start using the new datacentre, 2 months ahead of schedule. Now if only that would mean I could put my feet up for two months...



Stats

Seen: 666 times

Last updated: Mar 13 '15