Thank you very much, clay, for your informative and explanatory answer.

I tried everything you suggested and was finally able to list the containers and data with curl, with no auth system in the pipeline. :)
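For reference, the listing commands I used look roughly like this (the proxy address, port, account name, and container name below are placeholders, not my real values):

```shell
# Proxy endpoint and account are placeholders -- adjust for your cluster.
PROXY=http://10.180.32.20:8080
ACCOUNT=AUTH_test

# List all containers in the account (with no auth middleware in the
# pipeline, no token header is needed):
curl -s --connect-timeout 5 "$PROXY/v1/$ACCOUNT" || true

# List the objects in one container (hypothetical name "mycontainer"):
curl -s --connect-timeout 5 "$PROXY/v1/$ACCOUNT/mycontainer" || true
```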

This is the output of the swift-ring-builder verification you asked for:

    $ swift-ring-builder account.builder
    account.builder, build version 2
    262144 partitions, 1 replicas, 2 zones, 2 devices, 0.00 balance
    The minimum number of hours before a partition can be reassigned is 1
    Devices:    id  zone    ip address  port  name  weight  partitions  balance  meta
                 0     1  10.180.32.20  6012  sdb1  100.00      137971     0.00
                 1     2  10.180.32.20  6022  sdb2   90.00      124173    -0.00

    $ swift-ring-builder container.builder
    container.builder, build version 2
    262144 partitions, 1 replicas, 2 zones, 2 devices, 0.00 balance
    The minimum number of hours before a partition can be reassigned is 1
    Devices:    id  zone    ip address  port  name  weight  partitions  balance  meta
                 0     1  10.180.32.20  6011  sdb1  100.00      137971     0.00
                 1     2  10.180.32.20  6021  sdb2   90.00      124173    -0.00

    $ swift-ring-builder object.builder
    object.builder, build version 2
    262144 partitions, 1 replicas, 2 zones, 2 devices, 0.00 balance
    The minimum number of hours before a partition can be reassigned is 1
    Devices:    id  zone    ip address  port  name  weight  partitions  balance  meta
                 0     1  10.180.32.20  6010  sdb1  100.00      137971     0.00
                 1     2  10.180.32.20  6020  sdb2   90.00      124173    -0.00

As you suggested in your previous answer, there could be data sitting on the new node as well, so I went looking for it. I got the following output on 10.180.32.22, which I had previously added as a new storage node:

    $ find /srv/node/ -name '*.db'
    /srv/node/sdc1/containers/209408/5e8/cc800b2a07d30c3776dedf4f3edc55e8/cc800b2a07d30c3776dedf4f3edc55e8.db
    /srv/node/sdc1/containers/231321/a01/e1e67042d7b7390aee99d743c7139a01/e1e67042d7b7390aee99d743c7139a01.db
    /srv/node/sdc1/containers/203107/716/c658c9213ce5e47e384bb643f4939716/c658c9213ce5e47e384bb643f4939716.db
    /srv/node/sdc1/containers/222609/31a/d9644d9126a48054b763b189cf03f31a/d9644d9126a48054b763b189cf03f31a.db
    /srv/node/sdc1/containers/183616/ebf/b35003a21a3fa14be615f0ce8a994ebf/b35003a21a3fa14be615f0ce8a994ebf.db
    /srv/node/sdc1/containers/223947/520/dab2ee343d5721cb564cee9b769e1520/dab2ee343d5721cb564cee9b769e1520.db
    /srv/node/sdc1/containers/235872/d22/e6583fa257ed916e3e676a002c084d22/e6583fa257ed916e3e676a002c084d22.db
    /srv/node/sdc1/containers/225846/1f6/dc8da88182acc2aa3abeee493b1171f6/dc8da88182acc2aa3abeee493b1171f6.db
    /srv/node/sdc1/containers/231948/01a/e28323de7b4938763e7455338f1e401a/e28323de7b4938763e7455338f1e401a.db
    /srv/node/sdc1/containers/231068/008/e1a7275ad16bb6553fb50089fe0ac008/e1a7275ad16bb6553fb50089fe0ac008.db
    /srv/node/sdc1/containers/188038/16d/b7a19cb6844e3c001d55076faf3d516d/b7a19cb6844e3c001d55076faf3d516d.db
    /srv/node/sdc1/containers/245841/175/f0145712b7b24186b580e26e34294175/f0145712b7b24186b580e26e34294175.db
    /srv/node/sdc1/containers/247277/6c7/f17b4f811f1ae54fe43330dbddb8f6c7/f17b4f811f1ae54fe43330dbddb8f6c7.db
    /srv/node/sdb1/containers/254796/d98/f8d33903bae5cfab5ba4fa2bc1456d98/f8d33903bae5cfab5ba4fa2bc1456d98.db
    /srv/node/sdb1/containers/198130/235/c17cab7ae29eb87da50fa9559b472235/c17cab7ae29eb87da50fa9559b472235.db
    /srv/node/sdb1/containers/209854/e82/ccefb19244f20fe741ce3c3ae0051e82/ccefb19244f20fe741ce3c3ae0051e82.db
    /srv/node/sdb1/containers/220045/205/d6e344ddb343ce017ca789a4b9bd1205/d6e344ddb343ce017ca789a4b9bd1205.db
    /srv/node/sdb1/containers/231503/7aa/e213dc13c8170cdfb2d1dea6cfc177aa/e213dc13c8170cdfb2d1dea6cfc177aa.db
    /srv/node/sdb1/containers/225629/eef/dc57716afcec9c7f828399999ba40eef/dc57716afcec9c7f828399999ba40eef.db
    /srv/node/sdb1/containers/203088/235/c65425050db47f7e3e21038c48794235/c65425050db47f7e3e21038c48794235.db
    /srv/node/sdb1/containers/236697/86b/e7264fb2cd53361525a36d3f1d65a86b/e7264fb2cd53361525a36d3f1d65a86b.db
    /srv/node/sdb1/containers/235301/709/e5c957fa7fd0e679801d92adde157709/e5c957fa7fd0e679801d92adde157709.db
    /srv/node/sdb1/containers/209933/2d7/cd0359cea5442c85f6cc90586db962d7/cd0359cea5442c85f6cc90586db962d7.db
    /srv/node/sdb1/containers/236620/f24/e7131ad2a2bf8d01a98174ad595f3f24/e7131ad2a2bf8d01a98174ad595f3f24.db
    /srv/node/sdb1/containers/205210/aab/c866add742d8b7fecc99869ea88deaab/c866add742d8b7fecc99869ea88deaab.db

So, do I need to transfer all these container DB files from 10.180.32.22 back to 10.180.32.20 (the old server)? If so, how can I do that, given that this node is no longer in the ring?
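In case it helps frame the question: the only approach I can think of is a manual rsync of the partition directories, something like the sketch below. The partition number comes from the find output above, but the swift user, SSH access, and destination device name are assumptions on my part:

```shell
# Source partition on the removed node (10.180.32.22) and the matching
# path on the old server; device names here are assumptions.
SRC=/srv/node/sdc1/containers/209408
DST=swift@10.180.32.20:/srv/node/sdb1/containers/209408

# -a preserves ownership, permissions, and timestamps on the container
# DBs; --dry-run first to see what would be transferred.
rsync -av --dry-run "$SRC/" "$DST/" || true
```

Would that be safe, or is there a better-supported way to hand these DBs back to the replicator?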

Thanks, clay, for all your help so far. I still have a few more questions before I proceed with the repair work on my cluster:

  1. I have a web application running as a front-end for the Swift cluster, with some existing users. My plan is to download each user's data with curl, do a fresh multi-node installation, recreate the existing users, and re-upload the data. Is this the right way to proceed, or can you suggest another approach?

  2. I now have 3 servers (4x3TB + 4x3TB + 1x2TB) and am planning a multi-node Swift production cluster. Can you suggest the number of zones, the partition power, and the replica count for this setup?

  3. How do I decide the partition power and replica count based on the available storage space?

  4. Can the partition power of an existing Swift cluster's ring files be increased or decreased while there is data in the cluster? If so, will replicas be created only for newly uploaded data, or for the existing data as well?

  5. What kinds of challenges might I run into in the future with this kind of multi-node production cluster, so that I can design it to scale?
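For context on question 3: the rule of thumb I have seen is to size the partition power for the largest number of disks the cluster will ever hold, targeting roughly 100 partitions per disk (please treat that target as an assumption on my part, not an official formula):

```shell
# 9 disks total (4 + 4 + 1), targeting ~100 partitions per disk:
# part_power = ceil(log2(9 * 100))
python3 -c "import math; print(math.ceil(math.log2(9 * 100)))"
```

That works out to 10, which is much smaller than the 2^18 = 262144 partitions in my current rings, so I want to make sure I am understanding this correctly.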

You might find it irritating to answer so many questions, but I need the help to get my cluster back into a healthy running state!

Regards.