[jetty-dev] Clarification on Request Timeouts


[jetty-dev] Clarification on Request Timeouts

Neha Munjal
We are using the Jetty client jars (v9.3.7.v20160115) to perform synchronous HTTP/2 communication.

The synchronous send API allows us to specify a timeout for a request.
Also, for HTTP/2 communication, 9.3.x opens a single TCP connection that processes all the queued requests.
Additionally, we have maxRequestsQueuedPerDestination, which configures the maximum number of requests that can be queued per destination before the client starts rejecting them.

ContentResponse response = jettyRequest.timeout(config.reqTimeoutMillis(), TimeUnit.MILLISECONDS).send();
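For reference, our client is created along these lines (a sketch against the Jetty 9.3 client API; the queue limit of 1024 shown here is an illustrative value, not our actual setting):

```java
import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.http2.client.HTTP2Client;
import org.eclipse.jetty.http2.client.http.HttpClientTransportOverHTTP2;

// One HttpClient over the HTTP/2 transport; in 9.3.x this uses a
// single TCP connection per destination. Queue limit is illustrative.
HttpClient httpClient = new HttpClient(new HttpClientTransportOverHTTP2(new HTTP2Client()));
httpClient.setMaxRequestsQueuedPerDestination(1024);
httpClient.start();
```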


We have a load test scenario in which a high number of requests are processed synchronously, a high number of requests are also queued up, and a timeout is specified for each request, as shown above. We would like to clarify: will queued requests time out if the server is busy processing other requests and the timeout elapses while they are still queued, or is the timeout only applied once the request/response exchange has started? Please confirm.

Thanks
Neha


_______________________________________________
jetty-dev mailing list
[hidden email]
To change your delivery options, retrieve your password, or unsubscribe from this list, visit
https://dev.eclipse.org/mailman/listinfo/jetty-dev

Re: [jetty-dev] Clarification on Request Timeouts

Simone Bordet-3
Hi,

On Wed, May 31, 2017 at 9:02 PM, Neha Munjal <[hidden email]> wrote:
> We are using Jetty client jars (v 9.3.7.v20160115) to make synchronous
> HTTP/2 communication.
>
> The synchronous send API allows to provide a timeout for a request.

A timeout can be provided not only for the blocking API, but also for
the non-blocking APIs.

> Also, for HTTP/2 communication, 9.3.X opens 1 TCP connection that processes
> all the queued requests.
> Additionally, we have maxRequestsQueuedPerDestination that configures the
> maximum number of requests that can be queued per end point, after starting
> to reject them.
>
> ContentResponse response = jettyRequest.timeout(config.reqTimeoutMillis(),
> TimeUnit.MILLISECONDS).send();
>
>
> We have a Load Test scenario where in we have a high number of requests
> being processed in a synchronous manner,

Blocking code in a load test is typically a bad idea.

> with a high number of requests also
> queued up and we have a timeout specified for the request, as mentioned
> above. Would like to clarify if the queued up requests will timeout if the
> server is busy processing other requests and the timeout value elapsed for
> some of the queued up requests or is the timeout value actually used once
> the request-response conversation has started? Please confirm.

The timeout starts when the send() API is called, so in your case
requests may well timeout while they are queued.
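To illustrate the semantics (a minimal stdlib sketch, not HttpClient's actual implementation): the deadline is armed when the request is handed to send(), before it reaches the network, so queue time counts against it.

```java
import java.time.Duration;
import java.time.Instant;

// Sketch only: the deadline is computed at send() time, so time spent
// waiting in the client's queue is charged against the timeout.
class TimedRequest {
    final Instant deadline;

    TimedRequest(Duration timeout) {
        // Armed when send() is called, not when transmission starts.
        this.deadline = Instant.now().plus(timeout);
    }

    boolean expired() {
        return Instant.now().isAfter(deadline);
    }
}
```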

Note that if you write a load test and you queue up on the client, you
are not load testing properly.

--
Simone Bordet
----
http://cometd.org
http://webtide.com
Developer advice, training, services and support
from the Jetty & CometD experts.

Re: [jetty-dev] Clarification on Request Timeouts

Neha Munjal
Thanks Simone.

I agree that a load test is not the correct scenario here. But the point I wanted to highlight is that we would have a bulk of requests being processed asynchronously by a client object, which implies that some requests will be queued up, depending upon the bandwidth, latency, and processing capacity. Additionally, we can have intermittent synchronous HTTP/2 requests, with a timeout configured, being processed by the same client object. When the server is busy processing requests and some async requests are already queued up, the sync requests will also queue up, and some of them may time out while they are queued.

It looks like we should have separate HttpClient objects for sync and async modes to avoid requests timing out, or not configure a timeout for synchronous requests; but that defeats the idea of expecting a response within a designated time, as we cannot have sync requests running forever.

Thanks
Neha


Re: [jetty-dev] Clarification on Request Timeouts

Simone Bordet-3
Hi,

On Thu, Jun 1, 2017 at 12:15 AM, Neha Munjal <[hidden email]> wrote:

> Thanks Simone.
>
> I agree that load test is not the correct scenario here. But the point I
> wanted to highlight was that we would have bulk number of requests being
> processed asynchronously by a client object, which also implies that some
> requests will be queued up, depending upon the bandwidth, latency and
> processing capacity. Additionally, we can have intermittent synchronous
> HTTP/2 requests which have a timeout configured to be processed by the same
> client object. In scenarios when server is busy processing the requests and
> we have some async requests already queued up, the sync requests would also
> queue up, and there is a possibility of those timing out while they are
> queued.

Yes. You are overloading the client.

Typically you want to apply the backpressure that the server is
applying to HttpClient back to the application that sends the
requests.
This is easily done with a Request.QueuedListener and a
Response.CompleteListener.
In the former you increment a counter, in the latter you decrement it,
and you only allow the application to send a request if the counter is
within a certain range.
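A minimal sketch of that counter (plain java.util.concurrent; the Jetty listener wiring is indicated only in comments, and the class name and limit handling are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Increment from a Request.QueuedListener, decrement from a
// Response.CompleteListener, and gate application sends on the count.
class InFlightGate {
    private final AtomicInteger count = new AtomicInteger();
    private final int limit;

    InFlightGate(int limit) {
        this.limit = limit;
    }

    // Check before calling request.send(); false means the application
    // should hold the request back instead of queuing it.
    boolean tryEnter() {
        while (true) {
            int n = count.get();
            if (n >= limit) {
                return false;
            }
            if (count.compareAndSet(n, n + 1)) {
                return true;
            }
        }
    }

    // Call from Response.CompleteListener#onComplete(Result).
    void exit() {
        count.decrementAndGet();
    }
}
```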

> It looks like that we should have separate HttpClient objects for sync and
> async modes, to avoid requests timing out,

I don't see how this would solve your issue.

You have a fast sender and a slow receiver. It does not matter if you
use different HttpClient objects, you still have a fast sender and a
slow receiver.
You need to apply backpressure to the application if you don't want
your requests to time out.

--
Simone Bordet
----
http://cometd.org
http://webtide.com
Developer advice, training, services and support
from the Jetty & CometD experts.

Re: [jetty-dev] Clarification on Request Timeouts

Neha Munjal
Hi Simone,

I agree. The way we control this is via a Semaphore on the application (client) side that has permits equal to the maxRequestsQueuedPerDestination setting.
The send(..) API is invoked only if this Semaphore has permits available; any requests exceeding this limit are blocked on the application side.
We also make use of a Response.CompleteListener for async requests, which releases the acquired permit once the request/response exchange has completed.

For sync requests we do not have any response listener; we of course make use of the Semaphore to block any requests exceeding this limit.
Still, the moment this Semaphore admits a request and we call the send() API, the request is queued. If the server is really busy processing other requests, some of these requests, which have a timeout imposed, may time out.

Our use case sends bulk requests in asynchronous mode, with which I do not see any issues, as there is no timeout associated with those requests. However, intermittent synchronous requests sent through the same client with a timeout imposed may time out if the server is really slow in processing requests.

Thanks
Neha


Re: [jetty-dev] Clarification on Request Timeouts

Simone Bordet-3
Hi,

On Thu, Jun 1, 2017 at 7:32 PM, Neha Munjal <[hidden email]> wrote:
> Hi Simone,
>
> I agree. The way we control this is via a Semaphore on the application side
> (client side) that has permits equal to the maxRequestsQueuedPerDestination
> setting.

But unfortunately this does not apply enough backpressure.
Ideally, you want to queue 0 requests and always be on the verge of
queuing 1, which is then immediately sent over the network.
Depending on the number of connections you have to the server, you
always want at least 1 outstanding request per connection.
So your Semaphore should be sized on maxConnectionsPerDestination: you
acquire a permit on send, and release it on complete.
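For the blocking case, that could look like this sketch (plain java.util.concurrent; the class name is illustrative, and for non-blocking sends the release would come from a Response.CompleteListener instead of the try/finally):

```java
import java.util.concurrent.Semaphore;

// One permit per connection to the destination: acquire before send(),
// release on completion. Acquiring blocks the caller, which is exactly
// the backpressure applied to the application.
class ConnectionBoundSender {
    private final Semaphore permits;

    ConnectionBoundSender(int maxConnectionsPerDestination) {
        this.permits = new Semaphore(maxConnectionsPerDestination);
    }

    void send(Runnable doSend) throws InterruptedException {
        permits.acquire(); // waits until a connection slot is free
        try {
            doSend.run(); // stands in for request.send()
        } finally {
            permits.release(); // for async, do this in a CompleteListener
        }
    }
}
```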

> The send(..) API is only invoked in case this Semaphore has permits
> available, and any requests exceeding this parameter are blocked on the
> application side.
> Also, we are making use of Response.CompleteListener for async requests
> which releases the acquired permit once the request response conversation
> has completed.
>
> For sync requests we do not have any response Listener. We of course make
> use of the Semaphore to block any requests exceeding this parameter.
> But, still, the moment this semaphore gives way to a request and we call the
> send() API, it implies that the request is queued up. And if the server is
> really busy processing other requests, there is a possibility that some of
> these requests, that have a timeout imposed, may timeout.
>
> Our use case sends bulk requests in an asynchronous mode, with which I do
> not see any issues as there is no timeout associated with these requests.
> Just that there might be intermittent synchronous requests sent to the same
> client with a timeout imposed, that may timeout in case the server is really
> slow in processing the requests.

The fact that you don't impose a timeout on async requests does not
mean that they are sent over the network.
They will, like the sync ones, wait in the queue until a connection is
available, possibly for a long time.
Why do you treat these 2 kinds of requests (async and sync)
differently with respect to the timeout?

--
Simone Bordet
----
http://cometd.org
http://webtide.com
Developer advice, training, services and support
from the Jetty & CometD experts.

Re: [jetty-dev] Clarification on Request Timeouts

Neha Munjal
Hi Simone,

In our case, we use Jetty 9.3.x with the HttpClientTransportOverHTTP2 transport to perform HTTP/2 communication. In this case we open just 1 TCP connection (connection pooling for HTTP/2 is only available in Jetty 9.4.x). Jetty starts rejecting requests once their number exceeds maxRequestsQueuedPerDestination. For a use case where we send bulk requests in a tight loop, we must never exceed this number, so we set the Semaphore on maxRequestsQueuedPerDestination. Obviously, if the server has high bandwidth and is processing at a high rate, we are effectively not queuing up requests but processing them as they are made. Just to point out, our use case executes around 10000 HTTP/2 requests per second.

Also, the reason why we treat sync and async differently is that for async requests we just send them and forget. We have application-specific listeners hooked into the Jetty listeners that invoke the callback functionality once the request is processed. For sync requests, the caller is blocked waiting for a response, and we cannot have the caller wait indefinitely. For this reason, we have imposed a timeout so as to process and return the response within the specified time.

Thanks
Neha
