
Connection reuse on proxy server

asked 2012-01-13 22:44:03 -0500

knawale

On my cluster I see several connections on the proxy server in the TIME_WAIT state (the connections that were used to connect to the object server). It seems that for every GET request coming in to the proxy, a new connection is created towards the object server. In less than 5 minutes the proxy server runs out of ports and an EADDRNOTAVAIL error is seen while attempting a connection towards the object server. Do I have to enable something somewhere so that the proxy server uses persistent connections, or are persistent connections not supported? I did a Wireshark capture on the proxy server, and once a 200 OK is received from the object store, the proxy server sends a FIN to the object server and the connection is closed.

Thanks -kunal
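A rough way to see the buildup on the proxy node (generic Linux commands, not Swift-specific):

# count the sockets currently stuck in TIME_WAIT
netstat -ant | grep -c TIME_WAIT

# show the ephemeral port range that outgoing connections to the object servers draw from
cat /proc/sys/net/ipv4/ip_local_port_range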


3 answers


answered 2012-01-25 20:37:40 -0500

jguermonprez

Hi Kunal,

Did you follow the tuning guide for Swift? http://swift.openstack.org/deployment_guide.html#general-system-tuning

# disable TIME_WAIT.. wait..
net.ipv4.tcp_tw_recycle=1
net.ipv4.tcp_tw_reuse=1

# disable syn cookies
net.ipv4.tcp_syncookies = 0

# double amount of allowed conntrack
net.ipv4.netfilter.ip_conntrack_max = 262144
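These go in /etc/sysctl.conf (then reload with sysctl -p), or you can set them on the fly; a minimal sketch, assuming a stock Linux sysctl layout:

# apply the same values immediately, without rebooting
sysctl -w net.ipv4.tcp_tw_recycle=1
sysctl -w net.ipv4.tcp_tw_reuse=1
sysctl -w net.ipv4.tcp_syncookies=0
sysctl -w net.ipv4.netfilter.ip_conntrack_max=262144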


answered 2012-01-25 21:00:45 -0500

knawale

Yes, setting the tw_recycle and tw_reuse flags to 1 solves the problem. However, I don't think it is a safe thing to do from a TCP protocol point of view. When a TCP port is reused immediately after it has been closed, there is a chance that the other side did not receive the last TCP packet (lost packets) and still keeps its end of the connection open. When a new TCP connection is then opened on that port, the other side might wrongly treat the arriving packets as belonging to the old connection and cause problems.

Also, the above flags are set globally for all interfaces; they cannot be set on a per-interface basis. So even though the problem only happens on the intra-cluster network interface, the public-facing interfaces will also get the above setting (and the public-facing interface is more likely to experience packet loss, etc.).
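For example (eth0 is just a placeholder interface name here), the per-interface knobs live under net.ipv4.conf.<iface>, and tw_recycle/tw_reuse are not among them, so they can only be toggled for the whole host:

# these are protocol-wide settings
sysctl net.ipv4.tcp_tw_recycle net.ipv4.tcp_tw_reuse

# no per-interface tw_* entries exist here
ls /proc/sys/net/ipv4/conf/eth0/ | grep tw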


answered 2012-01-25 21:32:36 -0500

jguermonprez

I'm not a TCP expert, but since this configuration seems to be used at Rackspace, I suppose it doesn't cause many problems in a Swift environment, even under high load. If you think it's a bad idea to use those parameters, perhaps you should convert your question into a bug report?
