I'm suspicious of XFS:
On system failures, the XFS file system can sometimes truncate files it’s trying to write and produce zero-byte files. The object-auditor will catch these problems but in the case of a system crash it would be advisable to run an extra, less rate limited sweep to check for these specific files.
(source: https://docs.openstack.org/swift/2.7.1/admin_guide.html)
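If you want to run that extra sweep by hand, the object-auditor can be started in a one-off, zero-byte-only mode as the admin guide describes. A minimal sketch, assuming the usual config path and a rate of 1000 files per second (adjust both for your deployment):

# One-off auditor pass that only checks for zero-byte .data files;
# -z sets the zero-byte files-per-second rate (config path is an assumption).
swift-object-auditor /etc/swift/object-server.conf once -z 1000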
[root@xpto-object-storage ~]# file /srv/node/sde/quarantined/objects/86611a8eee5c285a433294f8f57086af-5619c2917eca4631b60840f148030522/1545166876.47404.data
/srv/node/sde/quarantined/objects/86611a8eee5c285a433294f8f57086af-5619c2917eca4631b60840f148030522/1545166876.47404.data: empty
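To see whether other quarantined objects are also zero-byte, a simple sweep over the quarantine directories should work. The /srv/node mount point is taken from the path above; adjust it if your devices are mounted elsewhere:

# List every quarantined .data file that is empty (0 bytes).
find /srv/node/*/quarantined/objects -name '*.data' -size 0 -print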
But sometimes the file is actually fine. In one test it was possible to compare a quarantined file with the same object downloaded via S3: the comparison was successful, the files had the same MD5 hash, and the contents of both could be viewed (a PDF).
Importantly, many of the quarantine messages, if not all, report that the hash of the file differs from the hash of its folder/location.
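For reference, that comparison can be reproduced with md5sum against the quarantined copy and a freshly downloaded copy of the same object (downloaded_copy.pdf is just a placeholder name for whatever you fetched via S3):

# Print the MD5 of both copies side by side; they should match if the
# quarantined file is intact.
md5sum /srv/node/sde/quarantined/objects/86611a8eee5c285a433294f8f57086af-5619c2917eca4631b60840f148030522/1545166876.47404.data downloaded_copy.pdf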
Similar problem: Somebody damaged their Swift config and paid for it with mass-quarantines.
The recommendation in that case was re-uploading objects rather than trying to repair the damage.
Thanks. How can I free space on my device?
It seems quarantined objects are kept under /srv/node/d1/quarantined, at least in the Kilo version, so perhaps you should start there. Or maybe this is not the right solution; I just thought it was similar enough. See the sketch below.
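A sketch of how to check how much space the quarantine areas are using and, only after verifying that valid replicas of the objects still exist elsewhere in the cluster, reclaim it (device names are examples from this thread; deleting quarantined data is irreversible):

# Show how much space each device's quarantine area consumes.
du -sh /srv/node/*/quarantined
# Only after confirming the objects are intact on other replicas:
rm -rf /srv/node/d1/quarantined/objects/*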
How can I analyze/troubleshoot this?
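As a starting point, the quarantine messages from the object-auditor and object-server end up in the storage node's syslog, so grepping for them shows which devices and objects are affected. The log paths below are assumptions and depend on your rsyslog configuration:

# Recent quarantine events in the general syslog (path is an assumption).
grep -i quarantin /var/log/messages | tail -n 20
# Or, if Swift services log to their own file (also an assumption):
grep -i quarantin /var/log/swift/object-server.log | tail -n 20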
It was solved. Host 3 was removed from the cluster, and after some time the quarantine messages were gone. So we traced the problem back to the moment this host was added; even adjusting its settings did not help. After removing it from the cluster, the problem stopped. Thanks