* Open a TCP connection to the server (send SYN, receive SYN/ACK, send ACK = 1 round-trip)
* Send/receive the WS XML (send at least one packet, receive at least one back = at least 1 round-trip)
Let's eliminate the TCP connection time. My ping to the server is 222 ms and the server calculation time is small (less than 15 ms), so one round-trip = ~235 ms.
The goal is to reduce each WS call to one round-trip.
1. Use a high HTTP 1.1 keep-alive value:
Pro: eliminates the TCP-connection round-trip on every call after the first.
Cons: IP-based load balancing will not work as intended, because after the first connection the client will keep using the same host over and over.
- Make sure HTTP 1.1 keep-alive is turned on on both the client and the server.
Tomcat's HTTP 1.1 keep-alive is on by default (you can turn it off in server.xml, in the Connector section, by adding maxKeepAliveRequests="1"; the default is no parameter at all, which means 100 requests per connection).
Turning it off makes every call cost 2 round-trips - in my measurements [first=626 ms] then 457, 455, 458, ... - Keep the connection alive by using it more frequently than the timeout.
The default keepAliveTimeout in Tomcat 6.0.18 equals connectionTimeout, which is typically 20000 ms = 20 seconds. It means that the TCP connection will stay alive only if it is used more frequently than this. You can configure Tomcat to a higher value (120 seconds, for example) in server.xml, in the Connector section.
Note: adding the parameter keepAliveTimeout="500000" alone is not enough for Tomcat 6.0.18, because of this bug; you will also need to add disableUploadTimeout="false":
disableUploadTimeout="false" keepAliveTimeout="500000"
2. If your data is more than a few bytes, compress it to save transfer time. You need to configure both the server and the client to support compression.
- Client - GZIP options
options.setProperty(HTTPConstants.MC_ACCEPT_GZIP, true);
[Note A: if you configure HTTPConstants.MC_GZIP_REQUEST, the request itself will be compressed. This works fine against a server which supports gzip, but will fail against other servers.
If your requests are not huge, it is not worth the risk!]
[Note B: this option can cause you problems if you have a load balancer which is problematic with chunking: options.setProperty(HTTPConstants.CHUNKED, Boolean.TRUE)]
- Server: on Tomcat 6.0.18, configure server.xml, Connector section. Add compression="on", which means "try to use compression, depending on the client".
Firefox 3.6.6 will not get it compressed by default, nor will a WS client without the GZIP options, but our optimized client will use it. - Note: this configuration works with or without compression on either the client or the server side, so it is "backward compatible". Transfer time can be considerably reduced (in my case, from 500 ms for 6 KB down to 260 ms for 1 KB).
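As a sketch, the server side of this on a Tomcat 6 Connector (compressionMinSize and compressableMimeType are optional tuning attributes; the values here are illustrative):

    <Connector port="8080" protocol="HTTP/1.1"
               compression="on"
               compressionMinSize="2048"
               compressableMimeType="text/html,text/xml,text/plain" />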
- The TCP implementation increases the buffer size, sending more and more bytes per second, as long as the connection is not congested, so after a second or two you will reach the optimal size. But on high-latency, high-bandwidth connections (100 Mb/s, 50 ms latency) the OS's max TCP-buffer size limits the maximum throughput and can reduce it by a factor of 2-10.
A good max value is the bandwidth-delay product: round-trip time [ping] * bandwidth.
If this is your use case (e.g. a connection between data centers), tune the system.
Example: 100 Mb/s = 12.5 MB/s, ping 100 ms --> 12.5 MB/s * 0.1 s = 1.25 MB. The default on UNIX is 256 KB, so you can get a boost of x5. - If you are not using a persistent TCP connection and the round-trip is large, a small file will take a few round-trips to reach optimal speed (and by then the file has already finished). If this is your case, set the socket receive/send buffers to a high value immediately, as in the sketch below.
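A minimal Java sketch (host, port and the ~1.25 MB buffer size are illustrative; the buffers are set before connect(), which is required for TCP window scaling above 64 KB to be negotiated):

    import java.net.InetSocketAddress;
    import java.net.Socket;

    public class TunedSocket {
        public static Socket open(String host, int port) throws Exception {
            Socket socket = new Socket();
            // Bandwidth-delay product: 12.5 MB/s * 0.1 s ~= 1.25 MB.
            socket.setReceiveBufferSize(1250 * 1024);
            socket.setSendBufferSize(1250 * 1024);
            // Connect only after setting the buffers, so the window size
            // is negotiated accordingly during the handshake.
            socket.connect(new InetSocketAddress(host, port));
            return socket;
        }
    }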
- For low-latency, small transmissions, also use Socket.setTcpNoDelay(true), which disables Nagle's algorithm. Wikipedia quote:
Nagle's algorithm works by combining a number of small outgoing messages, and sending them all at once. Specifically, as long as there is a sent packet for which the sender has received no acknowledgment, the sender should keep buffering its output until it has a full packet's worth of output, so that output can be sent all at once.
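On a plain java.net.Socket this is a one-liner (host and port are illustrative; with Axis2 the socket belongs to the underlying HTTP client, so there it has to be tuned through that layer instead):

    Socket socket = new Socket("example.com", 8080);
    // Disable Nagle's algorithm: send small packets immediately instead of buffering them.
    socket.setTcpNoDelay(true);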
P.S.
1. The default Axis2 RPC client does not use a pool of HTTP connections. This default implementation is good only for tests and will cause problems in production; follow the instructions here and use MultiThreadedHttpConnectionManager, e.g. as sketched below.
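A sketch of the usual wiring (assuming Axis2 1.x over Commons HttpClient 3.x; the max-connections value is illustrative):

    import org.apache.axis2.client.Options;
    import org.apache.axis2.transport.http.HTTPConstants;
    import org.apache.commons.httpclient.HttpClient;
    import org.apache.commons.httpclient.MultiThreadedHttpConnectionManager;

    // Share one pooled HttpClient across all service calls.
    MultiThreadedHttpConnectionManager connectionManager =
            new MultiThreadedHttpConnectionManager();
    connectionManager.getParams().setDefaultMaxConnectionsPerHost(20);
    HttpClient httpClient = new HttpClient(connectionManager);

    // Tell the Axis2 stub/ServiceClient to reuse this client instead of
    // opening a new connection per call.
    Options options = new Options();
    options.setProperty(HTTPConstants.REUSE_HTTP_CLIENT, Boolean.TRUE);
    options.setProperty(HTTPConstants.CACHED_HTTP_CLIENT, httpClient);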
2. To see the TCP connections, run (Linux): netstat -nap | grep