# swift-container-replicator timeouts - uploaded too many objects

Hi!

I've been running performance benchmarks on a small Swift cluster (1 proxy, 3 storage nodes). In one of those benchmarks I uploaded 6.4M objects into a single container. Afterwards I learned that you're not supposed to do that (because container sharding is not yet implemented?) and should keep it to roughly 1M objects per container. I have since deleted the container and all the objects in it, but the errors in my swift-container-replicator logs won't go away, so the situation apparently isn't fully repaired. I'd like the errors to disappear before I continue benchmarking, since I'm not sure my future benchmarks would be representative otherwise.

An example of the error that keeps popping up is included below (the logging from one container replication run). There are always a few such errors on every replication run, but they're never tied to the same devices. Restarting the container-replicator processes does not help.

What kind of further 'container cleanup' can I do, besides reformatting all drives and deploying new rings? (I'd like to avoid that).
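For reference, this is a minimal sketch of what I've been using to look for leftover container DB files on the storage nodes (it assumes the common `/srv/node/<device>/containers/...` layout; `DEVICES_ROOT` is my own variable, adjust it to your mount points). Each hit can then be inspected with `swift-container-info`:

```shell
# Sketch: list leftover container SQLite DBs on a Swift storage node.
# Assumes container DBs live under <DEVICES_ROOT>/<device>/containers/;
# adjust DEVICES_ROOT to match your ring / mount points.
list_container_dbs() {
    # $1: root of the device mount points, e.g. /srv/node
    find "$1" -type f -name '*.db' -path '*/containers/*' 2>/dev/null
}

list_container_dbs "${DEVICES_ROOT:-/srv/node}"
```

I'm not sure whether simply removing the stale DBs found this way is safe, or whether the replicators are supposed to reclaim them on their own.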

And a related question: during the benchmarks I noticed that while writing the 6.4M objects (about 3.7TB in total, over about 3 hours), write performance decreased only slightly over time (possibly due to the container DB being abused), but when deleting the objects (at a slightly lower pace, using about half the threads of the write benchmark), performance degraded quickly, with huge variations in response time. Can anybody with deep knowledge of Swift's internals explain why this happened?

```
May 18 11:31:04 test-cloud01 container-replicator: Beginning replication run
May 18 11:31:35 test-cloud01 container-replicator: ERROR reading HTTP response from {'index': 0, 'replication_port': 6001, 'weight': 100.0, 'zone': 1, 'ip': '192.168.40.3', 'region ...
```