
tburke's profile - activity

2019-01-03 15:01:43 -0500 answered a question S3: method POST on bucket with query delete doesn't work

This might be related to -- you might want to try enabling force_swift_request_proxy_log or just grepping your object-server logs for the DeleteMultiple transaction ID (tx5e61bcccea464480b4cc7-005b184a52). If I'm right about the bug, I'd expect a bunch of HEADs that all 404, so swift3/s3api never follows through with the DELETE.
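To get a feel for what that grep would look like (the log path and line format here are assumptions -- real deployments vary, so I'm fabricating one object-server line just to make the search demonstrable end to end):

```shell
# Assumed log location; substitute wherever your object-servers actually log.
mkdir -p /tmp/swift-logs
echo 'object-server: "HEAD /sda/123/AUTH_test/bucket/obj" 404 tx5e61bcccea464480b4cc7-005b184a52' \
    > /tmp/swift-logs/object-server.log
# If the bug is what I think it is, this should turn up a run of 404 HEADs.
grep -h 'tx5e61bcccea464480b4cc7-005b184a52' /tmp/swift-logs/*.log
```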

A couple of other things worth checking: how are your object-updaters doing? Are there container updates piling up, leaving your listings out of date? How are your replicators and reconstructors doing? Have rebalances moved faster than they should have, leaving data on disk misplaced?
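swift-recon can report those numbers cluster-wide; on a single object server you can also just count the queued container updates directly (the /srv/node path assumes the default drive layout -- adjust if yours differs):

```shell
# Each async_pending file is a container update the object server couldn't
# deliver yet; a large or growing count means container listings will lag.
find /srv/node/*/async_pending* -type f 2>/dev/null | wc -l
```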

Does recursive work with del for multipart-uploaded objects?

Yes, it should. That's the purpose of the HEADs -- to figure out whether the object is a multipart upload (in which case the [Swift] DELETE request should include a ?multipart-manifest=delete query param) or a regular object.
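As a rough sketch of that decision (the X-Static-Large-Object header is real Swift behavior for SLO manifests, which is how completed multipart uploads are stored; the account/container/object names are made up):

```shell
# A HEAD on an SLO manifest comes back with X-Static-Large-Object: True;
# plain objects don't. That's what decides whether the query param is added.
slo="True"                                  # pretend this came from the HEAD
path="/v1/AUTH_test/bucket/key"
if [ "$slo" = "True" ]; then
    path="$path?multipart-manifest=delete"  # delete manifest AND segments
fi
echo "$path"
```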

2017-09-01 10:32:30 -0500 received badge  Necromancer (source)
2017-09-01 10:32:30 -0500 received badge  Teacher (source)
2017-07-05 12:26:35 -0500 answered a question Swift3/S3 API errors when authenticating with EC2 keys

Unfortunately, Ocata Keystone doesn't support the s3tokens endpoint. This broke with the removal of issue_v2_token in (openstack/keystone@dd1e705), but we didn't notice until after Ocata was cut. It was fixed in (openstack/keystone@3ec1aa4) by switching to Keystone v3 tokens, so it will be fixed in Pike. Given how rarely those modules are touched, it should be fairly easy to backport (either in your own fork or as a stable patch upstream). Note that swift3 will need to be able to make sense of the different response format -- that work was done in (openstack/swift3@807ed38); I should work with Kota to tag a release.

That all has to do with the traceback for v2 signatures -- the v4 failure likely has to do with a difference in how the canonical request (which gets signed using the secret) was constructed on the client and server. I'd be interested in seeing debugging output from the client to look for bugs in what swift3's doing, but even if we got that sorted out, we'd hit a similar traceback when keystone tries to send back a 200 OK.
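To make the v4 failure mode concrete, here's an illustrative canonical request being hashed (the host, date, and values are invented; the trailing hash is the well-known SHA-256 of an empty payload). Client and server each build this string independently, and a single byte of difference -- header ordering, percent-encoding, whitespace -- changes the digest and therefore the signature:

```shell
# Illustrative sigv4 canonical request: method, URI, query string (empty),
# canonical headers, signed-headers list, payload hash.
canonical_request='GET
/bucket

host:s3.example.com
x-amz-date:20170705T000000Z

host;x-amz-date
e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855'
# Hash it the way sigv4 does before signing; comparing the client's digest
# against the server's shows where the two constructions diverge.
printf '%s' "$canonical_request" | sha256sum | awk '{print $1}'
```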

2017-03-23 23:44:59 -0500 answered a question how to config swift to support s3 api


Signal proxy-server pid: 9339 signal: 15
No proxy-server running

Could not bind to after trying for 30 seconds

I'd guess that there's already a proxy-server running, but its pid wasn't the expected 9339. Since the proxy was never actually restarted, it didn't pick up the proxy-server.conf changes to enable swift3, which is why the request failed.
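For reference, the proxy-server.conf changes that need to be picked up look roughly like this (a minimal sketch -- your existing pipeline will have more middleware in it, and the exact placement should follow the swift3 docs):

```ini
[pipeline:main]
pipeline = catch_errors proxy-logging cache swift3 s3token authtoken keystoneauth proxy-server

[filter:swift3]
use = egg:swift3#swift3
```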

I suggest running swift-oldies -a 1 (or ps aux | grep proxy) to find the proxy-server process, killing it, then running swift-init start proxy-server to start it again.
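A safe way to script that -- the kill and restart are left commented out so you can eyeball the pids first, and the process-name pattern is an assumption about your deployment:

```shell
# Find proxy-server processes regardless of what a stale pidfile claims.
pids=$(pgrep -f 'proxy-server' || true)
echo "proxy-server pids: ${pids:-none}"
# Once you've confirmed they're the strays:
# for p in $pids; do kill "$p"; done
# swift-init start proxy-server
```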