[jetty-dev] Clarification on synchronous and asynchronous request model for Jetty


Neha Munjal
Hi,

We are using Jetty client jars (v 9.3.7.v20160115) to make both synchronous and asynchronous HTTP/2 requests.

1. It seems that the synchronous request model internally makes use of the asynchronous model, i.e. requests are queued and processed. Every request can be provided or configured with a timeout that indicates the time within which the request/response conversation must complete.
I would like to clarify the retry case: if we need to retry a synchronous request depending upon the response received from the endpoint, the retried request, once invoked, would again be placed in the queue and processed. I guess that could create ordering problems, as many other requests may have arrived between the original request and its retry, and those would be given preference.

2. For asynchronous requests, is there a way to detect connection failures early, i.e. in a synchronous fashion? Say we send 100 requests at the same time, all pointing to a bad host name or to a host that is temporarily down and not responding: is there a way to detect these connection failures synchronously?
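For reference, here is a minimal sketch of the two request styles with Jetty's HttpClient (the blocking send() versus send() with a completion listener); the URLs and timeout values are placeholders, not our actual configuration:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.client.api.ContentResponse;

public class SyncVsAsyncExample
{
    public static void main(String[] args) throws Exception
    {
        HttpClient httpClient = new HttpClient();
        httpClient.start();

        // Synchronous: blocks the calling thread until the response arrives,
        // the request fails, or the per-request timeout expires.
        ContentResponse response = httpClient.newRequest("http://localhost:8080/sync")
                .timeout(5, TimeUnit.SECONDS)
                .send();
        System.out.println("sync status: " + response.getStatus());

        // Asynchronous: send() returns immediately; the listener is invoked
        // when the request/response conversation completes or fails.
        CountDownLatch latch = new CountDownLatch(1);
        httpClient.newRequest("http://localhost:8080/async")
                .timeout(5, TimeUnit.SECONDS)
                .send(result ->
                {
                    System.out.println("async failed: " + result.isFailed());
                    latch.countDown();
                });
        latch.await(10, TimeUnit.SECONDS);

        httpClient.stop();
    }
}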

Thanks
Neha


Re: [jetty-dev] Clarification on synchronous and asynchronous request model for Jetty

Simone Bordet-3
Hi,

On Thu, May 11, 2017 at 10:41 PM, Neha Munjal <[hidden email]> wrote:
> Hi,
>
> We are using Jetty client jars (v 9.3.7.v20160115) to make both synchronous
> and asynchronous HTTP/2 requests.

Please upgrade; we are now at 9.3.19 and 9.4.5.

> 1. It seems that the synchronous request model internally makes use of the
> asynchronous model, i.e. requests are queued and processed. Every request
> can be provided or configured with a timeout that indicates the time within
> which the request/response conversation must complete.
> I would like to clarify the retry case: if we need to retry a synchronous
> request depending upon the response received from the endpoint, the retried
> request, once invoked, would again be placed in the queue and processed. I
> guess that could create ordering problems, as many other requests may have
> arrived between the original request and its retry, and those would be
> given preference.

There is no automatic retry. If your application retries a request,
then it will be no different than any other request.
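For illustration, a minimal sketch of an application-level retry around the blocking send() API; the retry condition (a 503 status) and the single retry are just assumptions of the example, not something HttpClient does for you:

import java.util.concurrent.TimeUnit;
import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.client.api.ContentResponse;

public class RetryExample
{
    // Hypothetical helper: retries once if the server answered 503.
    public static ContentResponse sendWithRetry(HttpClient httpClient, String uri) throws Exception
    {
        ContentResponse response = httpClient.newRequest(uri)
                .timeout(5, TimeUnit.SECONDS)
                .send();
        if (response.getStatus() == 503)
        {
            // The retried request is queued like any other request;
            // HttpClient gives it no special ordering guarantee.
            response = httpClient.newRequest(uri)
                    .timeout(5, TimeUnit.SECONDS)
                    .send();
        }
        return response;
    }
}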

> 2. For asynchronous requests, is there a way to detect connection failures
> early, i.e. in a synchronous fashion? Say we send 100 requests at the same
> time, all pointing to a bad host name or to a host that is temporarily down
> and not responding: is there a way to detect these connection failures
> synchronously?

Connects can be made blocking by configuring:

HttpClient.setSocketAddressResolver(new SocketAddressResolver.Sync());

and

HttpClient.setConnectBlocking(true);

Note that this is not foolproof, though.
You can have 10 requests that are able to connect, and an 11th that cannot.

I don't think it's a good idea to rely on synchronous detection of connect failures.
Your application should be prepared to handle failures anyway, and
connect failures are just one among many and as such should not be
treated specially.

A more reliable way would be to send a first request, wait for it to
come back, and then send the other 99.
These can still fail, but at least you have not given the client a
configuration that provides false confidence.
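As a concrete sketch of the configuration above on an HttpClient instance (the connect timeout value is just a placeholder):

import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.util.SocketAddressResolver;

public class BlockingConnectExample
{
    public static void main(String[] args) throws Exception
    {
        HttpClient httpClient = new HttpClient();
        // Resolve host names synchronously instead of with the async resolver.
        httpClient.setSocketAddressResolver(new SocketAddressResolver.Sync());
        // Perform TCP connects in blocking mode (see the caveat above:
        // this is not foolproof).
        httpClient.setConnectBlocking(true);
        httpClient.setConnectTimeout(5000);
        httpClient.start();
    }
}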

--
Simone Bordet
----
http://cometd.org
http://webtide.com
Developer advice, training, services and support
from the Jetty & CometD experts.

Re: [jetty-dev] Clarification on synchronous and asynchronous request model for Jetty

Neha Munjal
Hi Simone,

Thanks for the clarifications.

Regarding retries, that was the point I was making as well. If our application retries, the retried request will be treated no differently from any other request and will be queued and processed like other requests.

Thanks for clarifying the blocking connect. I have a further question in this regard for synchronous requests. Here, for every request, we can provide a timeout that governs the time for the request/response conversation to complete.
Now, the request/response processing involves making a connection with the target endpoint, sending the request, and receiving the response from the target endpoint. While working with other libraries, e.g. Apache HttpClient, we make use of two properties: connectionTimeout (the timeout for establishing connections) and socketTimeout (the timeout to read the response). Using the same terminology, how should the request timeout value be configured for synchronous Jetty requests?
Should it be connectionTimeout + socketTimeout, or at least a value greater than connectionTimeout, so that we allocate sufficient time for both connection creation and reading the response?

Thanks
Neha





Re: [jetty-dev] Clarification on synchronous and asynchronous request model for Jetty

Simone Bordet-3
Hi,

On Fri, May 12, 2017 at 12:32 AM, Neha Munjal <[hidden email]> wrote:

> Hi Simone,
>
> Thanks for the clarifications.
>
> Regarding retries, that was the point I was making as well. If our
> application retries, the retried request will be treated no differently from
> any other request and will be queued and processed like other requests.
>
> Thanks for clarifying the blocking connect. I have a further question
> in this regard for synchronous requests. Here, for every request, we can
> provide a timeout that governs the time for the request/response conversation
> to complete.
> Now, the request/response processing involves making a connection with the
> target endpoint,

That is not always true.
HttpClient pools connections, so you only need to establish a
connection if there is not one already available.

> sending the request, and receiving the response from the
> target endpoint. While working with other libraries, e.g. Apache HttpClient,
> we make use of two properties: connectionTimeout (the timeout for
> establishing connections) and socketTimeout (the timeout to read the
> response). Using the same terminology, how should the request timeout value
> be configured for synchronous Jetty requests?
> Should it be connectionTimeout + socketTimeout, or at least a value greater
> than connectionTimeout, so that we allocate sufficient time for both
> connection creation and reading the response?

HttpClient defines 4 timeouts:

* addressResolutionTimeout - to resolve the target host via DNS
* connectTimeout - to establish TCP connections
* idleTimeout - the time a connection can stay idle before being closed
* timeout - the time within which a request+response cycle must complete

The last one is set on the Request object.

I guess Jetty's idleTimeout corresponds to Apache's socketTimeout.

Request.timeout is there to cap the amount of time a request+response takes.
On a very slow client or server with a very large upload or download,
you may never hit the idleTimeout (even if you do pause due to network
congestion), but you still want to cap the total time the
request+response cycle takes.
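A minimal sketch putting these timeouts together (the values are placeholders; the first three are set on the HttpClient, the last one on the Request):

import java.util.concurrent.TimeUnit;
import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.client.api.ContentResponse;

public class TimeoutExample
{
    public static void main(String[] args) throws Exception
    {
        HttpClient httpClient = new HttpClient();
        httpClient.setAddressResolutionTimeout(5000);  // DNS resolution of the target host
        httpClient.setConnectTimeout(5000);            // establishing the TCP connection
        httpClient.setIdleTimeout(30000);              // idle time before a connection is closed
        httpClient.start();

        ContentResponse response = httpClient.newRequest("http://localhost:8080/")
                .timeout(15, TimeUnit.SECONDS)         // total request+response time
                .send();
        System.out.println(response.getStatus());

        httpClient.stop();
    }
}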

--
Simone Bordet
----
http://cometd.org
http://webtide.com
Developer advice, training, services and support
from the Jetty & CometD experts.

Re: [jetty-dev] Clarification on synchronous and asynchronous request model for Jetty

Neha Munjal
Thanks Simone. That clarifies my query.
Additionally, we have a requirement to do HTTP/2 communication with the target end point.
We are creating a high-level Jetty HttpClient that makes use of the HttpClientTransportOverHTTP2 transport as follows:

httpClient = new HttpClient(new HttpClientTransportOverHTTP2(http2Client),
                    sslContextFactory);

We are setting the following 2 properties on the httpClient object:

MaxConnectionsPerDestination : 50
MaxRequestsQueuedPerDestination: 5000

We would like to clarify: if we use the above mechanism, would we actually maintain a pool of 50 connections and leverage the underlying connection pool mechanism?
Are there any logs we can enable to see that, in case of a high load of requests, we are fully leveraging the connection pool?
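For reference, a fuller sketch of this setup (the import paths and the SslContextFactory construction are assumptions based on the Jetty 9.4.x client modules, not our exact code):

import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.http2.client.HTTP2Client;
import org.eclipse.jetty.http2.client.http.HttpClientTransportOverHTTP2;
import org.eclipse.jetty.util.ssl.SslContextFactory;

public class Http2ClientSetup
{
    public static void main(String[] args) throws Exception
    {
        HTTP2Client http2Client = new HTTP2Client();
        SslContextFactory sslContextFactory = new SslContextFactory();

        HttpClient httpClient = new HttpClient(
                new HttpClientTransportOverHTTP2(http2Client),
                sslContextFactory);

        // One destination may open up to 50 connections and queue
        // up to 5000 requests waiting for a connection/stream.
        httpClient.setMaxConnectionsPerDestination(50);
        httpClient.setMaxRequestsQueuedPerDestination(5000);

        httpClient.start();
    }
}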

Thanks
Neha 


Re: [jetty-dev] Clarification on synchronous and asynchronous request model for Jetty

Simone Bordet-3
Hi,

On Fri, May 12, 2017 at 7:51 PM, Neha Munjal <[hidden email]> wrote:

> Thanks Simone. That clarifies my query.
> Additionally, we have a requirement to do HTTP/2 communication with the
> target end point.
> We are creating a high-level Jetty HttpClient that makes use of the
> HttpClientTransportOverHTTP2 transport as follows:
>
> httpClient = new HttpClient(new HttpClientTransportOverHTTP2(http2Client),
>                     sslContextFactory);
>
> We are setting the following 2 properties on the httpClient object:
>
> MaxConnectionsPerDestination : 50
> MaxRequestsQueuedPerDestination: 5000
>
> We would like to clarify: if we use the above mechanism, would we actually
> maintain a pool of 50 connections and leverage the underlying connection
> pool mechanism?

That is correct with Jetty 9.4.x. Earlier versions did not have a
connection pool for HTTP/2.

> Are there any logs we can enable to see that, in case of a high load of
> requests, we are fully leveraging the connection pool?

There are a few log categories that you may be interested in.
Enable DEBUG logging for "org.eclipse.jetty.client", and then retain
only the ones you need.
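How you enable that depends on your logging setup. With Jetty's default StdErrLog (no SLF4J binding on the classpath), one way is a per-package level system property set before any Jetty logger is created; with SLF4J/Logback, configure the "org.eclipse.jetty.client" logger there instead. A sketch of the system property approach:

import org.eclipse.jetty.client.HttpClient;

public class ClientDebugLogging
{
    public static void main(String[] args) throws Exception
    {
        // With Jetty's default StdErrLog, per-package levels are read from
        // system properties of the form "<package>.LEVEL". This must be set
        // before the first Jetty logger for that package is created.
        System.setProperty("org.eclipse.jetty.client.LEVEL", "DEBUG");

        HttpClient httpClient = new HttpClient();
        httpClient.start();
        // ... client usage; DEBUG output goes to stderr ...
        httpClient.stop();
    }
}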

--
Simone Bordet
----
http://cometd.org
http://webtide.com
Developer advice, training, services and support
from the Jetty & CometD experts.

Re: [jetty-dev] Clarification on synchronous and asynchronous request model for Jetty

Neha Munjal
Thanks Simone. This means that we should upgrade to Jetty 9.4.x in order to leverage these features.

Additionally, another thing that I would like to clarify with 9.3.x is proxy usage.
We create a high-level HttpClient that uses the HttpClientTransportOverHTTP2 and makes use of the lower-level HTTP2Client to execute HTTP/2 requests.
Now, we set the proxy configuration on the high-level HttpClient object.

1. Is it OK to set an HTTP/1 forward proxy server to act as a tunnel for HTTP/2 requests?
2. Also, it seems that we cannot tunnel HTTPS requests through an HTTP/1 proxy server to an HTTP/2 server.
3. In my unit tests, I set up a simple HTTP/1 server as the proxy server and an HTTP/2 server as the target endpoint. Any requests sent directly to my local HTTP/2 server complete successfully. However, when I try to tunnel the same requests through the local proxy server, I end up getting the following exception:

Caused by: java.nio.channels.ClosedChannelException
at org.eclipse.jetty.http2.HTTP2Session.onShutdown(HTTP2Session.java:818)
at org.eclipse.jetty.http2.HTTP2Connection$HTTP2Producer.produce(HTTP2Connection.java:191)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:162)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.execute(ExecuteProduceConsume.java:101)
at org.eclipse.jetty.http2.HTTP2Connection.onOpen(HTTP2Connection.java:81)
at org.eclipse.jetty.http2.client.HTTP2ClientConnectionFactory$HTTP2ClientConnection.onOpen(HTTP2ClientConnectionFactory.java:148)
at org.eclipse.jetty.io.SelectorManager.connectionOpened(SelectorManager.java:307)
at org.eclipse.jetty.io.ManagedSelector.createEndPoint(ManagedSelector.java:414)
at org.eclipse.jetty.io.ManagedSelector.access$1600(ManagedSelector.java:56)
at org.eclipse.jetty.io.ManagedSelector$CreateEndPoint.run(ManagedSelector.java:587)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:213)
at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.execute(ExecuteProduceConsume.java:101)
at org.eclipse.jetty.io.ManagedSelector.run(ManagedSelector.java:136)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
at java.lang.Thread.run(Thread.java:745)

Not sure if I am missing something here, but any hints or input would help me move forward.

Thanks
Neha


Re: [jetty-dev] Clarification on synchronous and asynchronous request model for Jetty

Simone Bordet-3
Hi,

On Tue, May 16, 2017 at 12:43 AM, Neha Munjal <[hidden email]> wrote:

> Thanks Simone. This means that we should upgrade to Jetty 9.4.x in order
> to leverage these features.
>
> Additionally, another thing that I would like to clarify with 9.3.x is
> proxy usage.
> We create a high-level HttpClient that uses the HttpClientTransportOverHTTP2
> and makes use of the lower-level HTTP2Client to execute HTTP/2 requests.
> Now, we set the proxy configuration on the high-level HttpClient object.
>
> 1. Is it OK to set an HTTP/1 forward proxy server to act as a tunnel for
> HTTP/2 requests?

No.

The CONNECT implementation for HTTP/2 is particularly complex and has not
been implemented yet; see
https://github.com/eclipse/jetty.project/issues/250.

--
Simone Bordet
----
http://cometd.org
http://webtide.com
Developer advice, training, services and support
from the Jetty & CometD experts.

Re: [jetty-dev] Clarification on synchronous and asynchronous request model for Jetty

Neha Munjal
Thanks Simone. That means we need to use an HTTP/2 server as a forward proxy server for HTTP/2 communication.
I also wanted to confirm whether it would be fine to set this proxy server on the high-level HttpClient object that uses the lower-level HTTP2Client to perform HTTP/2 communication, since the HttpClient API provides methods to set the proxy configuration.

Thanks
Neha


Re: [jetty-dev] Clarification on synchronous and asynchronous request model for Jetty

Simone Bordet-3
Hi,

On Thu, May 18, 2017 at 2:03 AM, Neha Munjal <[hidden email]> wrote:
> Thanks Simone. That means we need to use an HTTP/2 server as a forward proxy
> server for HTTP/2 communication.
> I also wanted to confirm whether it would be fine to set this proxy server on
> the high-level HttpClient object that uses the lower-level HTTP2Client to
> perform HTTP/2 communication, since the HttpClient API provides methods to
> set the proxy configuration.

What I'm saying is that you cannot use HttpClient with the HTTP/2
transport, set a proxy, and then try to hit "https" URIs, because
CONNECT over HTTP/2 has not been implemented yet.

On the other hand, it is fine to use an HTTP/2 server with a client to
implement a reverse proxy.

--
Simone Bordet
----
http://cometd.org
http://webtide.com
Developer advice, training, services and support
from the Jetty & CometD experts.