Changes in Jetty socket close behavior from 9.2 -> 9.4


Tommy Becker
Hi folks,
We recently upgraded our application from Jetty 9.2 to 9.4. After doing so, we've noticed a lot of Early EOF stacktraces happening on POST requests. Of note, our application is fronted by a Varnish cache, which holds connections to the Jetty backend open indefinitely, even though we have the Jetty idle timeout configured to 50s. We're aware this is suboptimal, but it has been this way for some time and worked fine under Jetty 9.2. To be clear, we do not have any evidence, or even believe, that Jetty is doing anything wrong here per se. But I understand that a fair bit of work happened in Jetty in this area (timeout/socket close) between 9.2 and 9.4, and I'm trying to get a high-level description of what changed in an attempt to explain the behavior we're seeing now. Any info you could provide would be appreciated!


_______________________________________________
jetty-users mailing list
[hidden email]
To change your delivery options, retrieve your password, or unsubscribe from this list, visit
https://dev.eclipse.org/mailman/listinfo/jetty-users

Re: Changes in Jetty socket close behavior from 9.2 -> 9.4

Simone Bordet-3
Hi,


There have been many changes between Jetty 9.2 and 9.4, in many places.
As an example, we have made HTTP parsing stricter and more
specification-compliant, and that may be why your POSTs are not going
through.

The best thing you can do to figure out this issue is to grab a
network trace via Wireshark on the Jetty host, along with the stack
trace (which you did not report, and you don't say whether it's on the
client or the server), and possibly server DEBUG logs.

The network trace will show who is closing the connection; the stack
traces and logs will show why it was closed.

--
Simone Bordet
----
http://cometd.org
http://webtide.com
Developer advice, training, services and support
from the Jetty & CometD experts.

Re: Changes in Jetty socket close behavior from 9.2 -> 9.4

Tommy Becker
Thanks for your response. I managed to capture a tcpdump of what's going on in this scenario. From what I can see, the sequence of events is the following (recall that our Jetty server is fronted by a Varnish cache):

1) Varnish sends the headers and the initial part of the content for a large POST.
2) On the Jetty server, we use a streaming parser and begin validating the content.
3) We detect a problem with the content and throw an exception that results in a 400 Bad Request to the client (via a JAX-RS exception mapper).
4) An ACK is sent for the segment containing the 400 error.
5) The Jetty server sends a FIN.
6) An ACK is sent for the FIN.
7) Varnish sends another segment that continues the content from #1.
8) The Jetty server sends a RST.

In the server logs, we see an Early EOF from our JAX-RS resource that is parsing the content. This all seems pretty OK from the Jetty side, and it certainly seems like Varnish is misbehaving here (I'm thinking it may be this bug: https://github.com/varnishcache/varnish-cache/issues/2332). But I'm still unclear as to why this started after our upgrade from Jetty 9.2 to 9.4. Any thoughts?


Re: Changes in Jetty socket close behavior from 9.2 -> 9.4

Tommy Becker
Update: we set up an environment with the old Jetty 9.2 code, and this does not occur. 9.2 does not send the FIN in #5 above, and seems happy to receive the rest of the content despite having already sent a response.


Re: Changes in Jetty socket close behavior from 9.2 -> 9.4

Simone Bordet-3
Hi,


This is normal.
In Jetty 9.4 we are more aggressive in closing the connection, because
we don't want to be at the mercy of a possibly nasty client sending us
GiB of data when we know the application does not want to handle it.
Varnish's behavior is correct too: it sees the FIN from Jetty, but
does not know that Jetty does not want to read until it tries to send
more content and gets a RST.
At that point, it should relay the RST (or FIN) back to the client.

So you have two choices: either you catch the exception during your
validation and finish reading (and discarding) the content in the
application, or you ignore the early EOFs in the logs.
I don't think those early EOFs are logged above DEBUG level; is that
correct?
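The first option amounts to draining whatever is left of the request body before the connection goes away. A minimal sketch in plain Java (a plain InputStream stands in for the servlet request input stream; the class and method names are illustrative, not a Jetty API):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;

public class RequestDrainer {
    /**
     * Read and discard everything remaining on the stream, so the
     * peer's pending content is consumed before the response is sent
     * and the connection is closed. Returns the bytes discarded.
     */
    public static long drainRemaining(InputStream in) {
        byte[] buf = new byte[8192];
        long discarded = 0;
        try {
            int n;
            while ((n = in.read(buf)) != -1) {
                discarded += n;
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return discarded;
    }
}
```

In a JAX-RS exception mapper you would call something like this on the request's input stream before building the 400 response; an unbounded drain like this one should only be used where you trust the client not to stream data forever.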


Re: Changes in Jetty socket close behavior from 9.2 -> 9.4

Tommy Becker
Thanks so much again for your response; this is great information. What you say makes sense, but I now see I failed to mention the most critical part of this problem: the client never actually sees the 400 response we are sending from Jetty. When Varnish sees the RST, it considers the backend request failed and returns 503 Service Unavailable to the client, effectively swallowing our application’s response. We can pursue a solution to this on the Varnish side, but in the interim, I’m guessing there is no way to configure this behavior in Jetty?




Re: Changes in Jetty socket close behavior from 9.2 -> 9.4

Greg Wilkins

Thomas,

There is no configuration to avoid this behaviour. If Jetty sees an exception from the application, it will send the 400 and close the connection.

However, as Simone says, your application can be set up to avoid this situation by catching the exception and consuming any input. You can do this in a filter that catches Throwable; the filter can then check the request input stream (and/or reader) for unconsumed input and read and discard to end of file. If the response is not committed, it can then send a 400 or any other response that you like.

Just remember that this may make your application somewhat vulnerable to DOS attacks, as it would be easy to hold a thread in that filter slowly consuming data. I would suggest imposing a total-time and a total-data limit on the input consumption.

Note that for normal unconsumed input, Jetty 9.4 does make some attempt to consume it... but if reading that data would block, it gives up and closes the connection, as there is no point blocking for data that will be discarded.
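The bounded consumption suggested above can be sketched as plain Java so it stays self-contained (the Servlet API filter wiring is omitted; in a real filter you would pass the request's input stream here, and the limit values are illustrative):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;

public class BoundedDrain {
    /** Why the drain stopped. */
    public enum Outcome { EOF, DATA_LIMIT, TIME_LIMIT }

    /**
     * Read and discard input until end of stream, but give up once
     * maxBytes have been discarded or maxMillis have elapsed, so a
     * slow or hostile client cannot pin the thread indefinitely.
     */
    public static Outcome drain(InputStream in, long maxBytes, long maxMillis) {
        byte[] buf = new byte[8192];
        long discarded = 0;
        long deadline = System.currentTimeMillis() + maxMillis;
        try {
            int n;
            while ((n = in.read(buf)) != -1) {
                discarded += n;
                if (discarded >= maxBytes)
                    return Outcome.DATA_LIMIT;
                if (System.currentTimeMillis() > deadline)
                    return Outcome.TIME_LIMIT;
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return Outcome.EOF;
    }
}
```

A filter along these lines would catch Throwable around chain.doFilter(...), call the drain, and then send the 400 if the response is not yet committed; if either limit trips, giving up and letting the connection close (as Jetty itself does) is the safer choice.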

regards

Re: Changes in Jetty socket close behavior from 9.2 -> 9.4

Tommy Becker
Thanks Greg. Just so I’m clear, what does Jetty key on to know whether to close the connection? Just the 4xx/5xx response code? I’m trying to understand the difference between this case and the “normal unconsumed input” case you describe. Also, I noticed that Jetty does not set the Connection: close header when it does this; is that intentional?



Re: Changes in Jetty socket close behavior from 9.2 -> 9.4

Greg Wilkins

It is more about how the response was generated, and less about the response code itself.
If the application throws an exception to Jetty during request handling, we now always make the connection non-persistent before trying to send a response. If the request input is terminated early, or is not fully consumed and would block, then we also abort the connection.

Interesting that you say we don't set the Connection: close header. There is actually no requirement to do so, as the server can close a connection at any time, but I thought we would do so as a courtesy... checking...

cheers



On Wed, 26 Sep 2018 at 10:25, Tommy Becker <[hidden email]> wrote:
Thanks Greg. Just so I’m clear, what does Jetty key on to know whether to close the connection? Just the 4xx/5xx response code? I’m trying to understand the difference between this case and the “normal unconsumed input” case you describe. Also, I did notice that Jetty does not set the Connection: close header when it does this, is that intentional?


On Sep 25, 2018, at 6:37 PM, Greg Wilkins <[hidden email]> wrote:


Thomas,

There is no configuration to avoid this behaviour.  If jetty sees and exception in the application it will send the 400 and close the connection.

However, as Simone says, your application can be setup to avoid this situation by catching the exception and consuming any input.  You can do this in a filter that catches Throwable, it can then check the request input stream (and/or reader) for unconsumed input and read & discard to end of file.   If the response is not committed, it can then send a 400 or any other response that you like.

Just remember that this may make your application somewhat vulnerable to DOS attacks as it will be easy to hold a thread in that filter slowly consuming data.  I would suggest imposing a total time and total data limit on the input consumption.

Note that for normal unconsumed input, jetty 9.4 does make some attempt to consume it... but if the reading of that data would block, it gives up and closes the connection, as there is no point blocking for data that will be discarded.

regards










On Wed, 26 Sep 2018 at 07:35, Thomas Becker <[hidden email]> wrote:
Thanks so much again for your response, this is great information. What you say makes sense, but I now see I failed to mention the most critical part of this problem. Which is that the client never actually sees the 400 response we are sending from Jetty. When varnish sees the RST, it considers the backend request failed and returns 503 Service Unavailable to the client, effectively swallowing our application’s response. We can pursue a solution to this on the Varnish side, but in the interim I’m guessing there is no way to configure this behavior in Jetty?



On Sep 25, 2018, at 4:28 PM, Simone Bordet <[hidden email]> wrote:

Hi,

On Tue, Sep 25, 2018 at 8:34 PM Tommy Becker <[hidden email]> wrote:

Update: we setup an environment with the old Jetty 9.2 code and this does not occur. 9.2 does not send the FIN in #5 above, and seems happy to receive the rest of the content, despite having sent a response already.

On Tue, Sep 25, 2018 at 10:01 AM Tommy Becker <[hidden email]> wrote:

Thanks for your response. I managed to snag a tcp dump of what's going on in this scenario. From what I can see the sequence of events is the following. Recall that our Jetty server is fronted by a Varnish cache.

1) Varnish sends the headers and initial part of the content for a large POST.
2) On the Jetty server, we use a streaming parser and begin validating the content.
3) We detect a problem with the content and throw an exception that results in a 400 Bad Request to the client (via JAX-RS exception mapper)
4) An ACK is sent for the segment containing the 400 error.
5) The Jetty server sends a FIN.
6) An ACK is sent for the FIN
7) Varnish sends another segment that continues the content from #1.
8) The Jetty server sends a RST.

In the server logs, we see an Early EOF from our JAX-RS resource that is parsing the content. This all seems pretty ok from the Jetty side, and it certainly seems like Varnish is misbehaving here (I'm thinking it may be this bug https://github.com/varnishcache/varnish-cache/issues/2332).  But I'm still unclear as to why this started after our upgrade from Jetty 9.2 -> 9.4. Any thoughts?


This is normal.
In Jetty 9.4 we are more aggressive in closing the connection, because we don't want to be at the mercy of a possibly nasty client sending us GiBs of data when we know the application does not want to handle them.
Varnish's behavior is correct too: it sees the FIN from Jetty, but does not know that Jetty does not want to read until it tries to send more content and gets a RST.
At that point, it should relay the RST (or FIN) back to the client.

So you have two choices: catch the exception during your validation and finish reading (and discarding) the content in the application, or ignore the early EOFs in the logs.
I don't think those early EOFs are logged above DEBUG level, is that correct?
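The first option above, catching the validation exception and then draining whatever is left of the request body, can be sketched as a small helper. This is illustrative only: the class and method names are made up here, not Jetty or JAX-RS API.

```java
import java.io.IOException;
import java.io.InputStream;

// Illustrative helper (hypothetical name): read and discard whatever is
// left of the request body, so the server sees fully consumed input and
// can keep the connection persistent instead of closing it with FIN/RST.
class RequestDrain {
    static long drainQuietly(InputStream in) {
        long discarded = 0;
        byte[] buf = new byte[8192];
        try {
            int n;
            while ((n = in.read(buf)) != -1) {
                discarded += n; // content is thrown away; we only count it
            }
        } catch (IOException ignored) {
            // The peer may already have gone away; nothing more to do.
        }
        return discarded;
    }
}
```

In a JAX-RS resource this would be called on the request's input stream from the catch block around the streaming parser, before letting the ExceptionMapper produce the 400.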

-- 
Simone Bordet
----
http://cometd.org
http://webtide.com
Developer advice, training, services and support
from the Jetty & CometD experts.
_______________________________________________
jetty-users mailing list
[hidden email]
To change your delivery options, retrieve your password, or unsubscribe from this list, visit
https://dev.eclipse.org/mailman/listinfo/jetty-users

Re: Changes in Jetty socket close behavior from 9.2 -> 9.4

Greg Wilkins

Thomas,

I see a Connection: close header when the connection is closed after sending a 400 or 500.

However, nothing is sent if the connection is closed due to an abort while giving up reading unconsumed content, which can happen before/during/after a response; hence we keep that simple.

So are you sure you are seeing a 400/500 response without Connection: close?




On Wed, 26 Sep 2018 at 14:33, Greg Wilkins <[hidden email]> wrote:

It is more about how the response was generated and less about the response code itself.
If the application throws an exception to Jetty during request handling, we now always make the connection non-persistent before trying to send a response. If the request input is terminated early, or is not fully consumed and would block, then we also abort the connection.

Interesting that you say we don't set the Connection: close header. There is actually no requirement to do so, as the server can close a connection at any time, but I thought we would do so as a courtesy... checking...

cheers



On Wed, 26 Sep 2018 at 10:25, Tommy Becker <[hidden email]> wrote:
Thanks Greg. Just so I’m clear, what does Jetty key on to decide whether to close the connection? Just the 4xx/5xx response code? I’m trying to understand the difference between this case and the “normal unconsumed input” case you describe. Also, I noticed that Jetty does not set the Connection: close header when it does this; is that intentional?


On Sep 25, 2018, at 6:37 PM, Greg Wilkins <[hidden email]> wrote:


Thomas,

There is no configuration to avoid this behaviour. If Jetty sees an exception from the application, it will send the 400 and close the connection.

However, as Simone says, your application can be set up to avoid this situation by catching the exception and consuming any input. You can do this in a filter that catches Throwable; it can then check the request input stream (and/or reader) for unconsumed input and read and discard to end of file. If the response is not committed, it can then send a 400 or any other response that you like.

Just remember that this may make your application somewhat vulnerable to DoS attacks, as it will be easy to hold a thread in that filter slowly consuming data. I would suggest imposing a total time and a total data limit on the input consumption.

Note that for normal unconsumed input, Jetty 9.4 does make some attempt to consume it... but if the reading of that data would block, it gives up and closes the connection, as there is no point blocking for data that will be discarded.
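The caveat above, bounding the drain by a total time and a total data limit so a slow or oversized client cannot pin a server thread, might look like the sketch below. The class name, method signature, and limit values are illustrative assumptions; in a real filter the stream would come from the servlet request.

```java
import java.io.IOException;
import java.io.InputStream;

// Illustrative bounded drain (hypothetical helper): give up as soon as
// either limit is exceeded, so discarding unwanted input cannot become
// a denial-of-service vector against the server's thread pool.
class BoundedDrain {
    /** Returns true only if EOF was reached within both limits. */
    static boolean drain(InputStream in, long maxBytes, long maxMillis) {
        long deadline = System.nanoTime() + maxMillis * 1_000_000L;
        long total = 0;
        byte[] buf = new byte[8192];
        try {
            int n;
            while ((n = in.read(buf)) != -1) {
                total += n;
                if (total > maxBytes || System.nanoTime() > deadline) {
                    return false; // over budget: let the connection be aborted
                }
            }
            return true; // input fully consumed within the limits
        } catch (IOException e) {
            return false; // peer gone or stream broken; nothing to salvage
        }
    }
}
```

A filter could call this from its catch-Throwable block and only attempt a friendly 400 when the drain succeeds and the response is not yet committed.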

regards

Re: Changes in Jetty socket close behavior from 9.2 -> 9.4

Tommy Becker
We definitely do not see one. But I'm still a bit confused as to how Jetty is determining that it wants to close the connection. Although our JAX-RS resource throws an exception, that exception should be handled by the JAX-RS runtime and not propagated to Jetty (we have ExceptionMappers defined for everything). So our application is generating the response, not Jetty itself.


Re: Changes in Jetty socket close behavior from 9.2 -> 9.4

Joakim Erdfelt-8
Not sure if this is the case, but the "Connection: close" header cannot be added by Jetty if HttpServletResponse.isCommitted() is already true before something like sendError() is used.
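The ordering here is the whole point: once a response is committed (its headers are flushed to the wire), no further header, including Connection: close, can take effect. The toy class below is a stand-in to illustrate that rule only; it is not the servlet API or Jetty internals, and real containers may throw or silently ignore late headers rather than return a flag.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy stand-in (hypothetical class) for the committed/uncommitted
// distinction in a servlet response: after flushBuffer(), headers are
// already on the wire, so setHeader can no longer change anything.
class FakeResponse {
    private final Map<String, String> headers = new LinkedHashMap<>();
    private boolean committed;

    boolean isCommitted() { return committed; }

    void flushBuffer() { committed = true; } // commits the response

    /** Returns true if the header was recorded before commit. */
    boolean setHeader(String name, String value) {
        if (committed) return false; // too late: headers already sent
        headers.put(name, value);
        return true;
    }

    String getHeader(String name) { return headers.get(name); }
}
```

So if the 400 response was already committed by the time the close decision is made, there is no opportunity left to advertise Connection: close, which would match the missing header being observed here.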

Joakim Erdfelt / [hidden email]



Re: Changes in Jetty socket close behavior from 9.2 -> 9.4

Tommy Becker
Makes sense; I wondered if that might be the case. I'm still curious how and where Jetty decides to close the connection if it doesn't see the exception.

On Wed, Sep 26, 2018 at 7:39 AM Joakim Erdfelt <[hidden email]> wrote:
Not sure if this is the case, but the "Connection: close" header cannot be added by Jetty if the HttpServletResponse.isCommitted() == true before using something like sendError().

Joakim Erdfelt / [hidden email]


On Wed, Sep 26, 2018 at 6:29 AM Tommy Becker <[hidden email]> wrote:
We definitely do not see one. But I'm still a bit confused as to how Jetty is determining that it wants to close the connection. Although our JAX-RS resource throws an exception, that exception should be handled by the JAX-RS runtime and not propagated to Jetty (We have ExceptionMappers defined for everything). So our application is generating the response and not Jetty itself.

On Wed, Sep 26, 2018 at 12:49 AM Greg Wilkins <[hidden email]> wrote:

Thomas,

I see a connection:close header when the connection is closed after sending a 400 or 500.

However, nothing is sent if the connection is closed due to an abort while giving up reading unconsumed content, which can happen before/during/after a response hence we keep that simple.

So are you sure you are seeing a 400/500 response without connection:close ?




On Wed, 26 Sep 2018 at 14:33, Greg Wilkins <[hidden email]> wrote:

It is more about how the response was generated and less about the response code itself.
If the application throws and exception to Jetty during request handling, we now always make the connection non persistent before trying to send a response. If the request input is terminated early or is not fully consumed and would block, then we also abort the connection.

Interesting that you say we don't set the Connection: close header.  There is actually no requirement to do so as the server can close a connection at any time, but I thought we would do so as a courtesy.... checking....

cheers



On Wed, 26 Sep 2018 at 10:25, Tommy Becker <[hidden email]> wrote:
Thanks Greg. Just so I’m clear, what does Jetty key on to know whether to close the connection? Just the 4xx/5xx response code? I’m trying to understand the difference between this case and the “normal unconsumed input” case you describe. Also, I did notice that Jetty does not set the Connection: close header when it does this, is that intentional?


On Sep 25, 2018, at 6:37 PM, Greg Wilkins <[hidden email]> wrote:


Thomas,

There is no configuration to avoid this behaviour.  If jetty sees and exception in the application it will send the 400 and close the connection.

However, as Simone says, your application can be setup to avoid this situation by catching the exception and consuming any input.  You can do this in a filter that catches Throwable, it can then check the request input stream (and/or reader) for unconsumed input and read & discard to end of file.   If the response is not committed, it can then send a 400 or any other response that you like.

Just remember that this may make your application somewhat vulnerable to DOS attacks as it will be easy to hold a thread in that filter slowly consuming data.  I would suggest imposing a total time and total data limit on the input consumption.

Note that for normal unconsumed input, jetty 9.4 does make some attempt to consume it... but if the reading of that data would block, it gives up and closes the connection, as there is no point blocking for data that will be discarded.

regards










On Wed, 26 Sep 2018 at 07:35, Thomas Becker <[hidden email]> wrote:
Thanks so much again for your response, this is great information. What you say makes sense, but I now see I failed to mention the most critical part of this problem. Which is that the client never actually sees the 400 response we are sending from Jetty. When varnish sees the RST, it considers the backend request failed and returns 503 Service Unavailable to the client, effectively swallowing our application’s response. We can pursue a solution to this on the Varnish side, but in the interim I’m guessing there is no way to configure this behavior in Jetty?



On Sep 25, 2018, at 4:28 PM, Simone Bordet <[hidden email]> wrote:

Hi,

On Tue, Sep 25, 2018 at 8:34 PM Tommy Becker <[hidden email]> wrote:

Update: we setup an environment with the old Jetty 9.2 code and this does not occur. 9.2 does not send the FIN in #5 above, and seems happy to receive the rest of the content, despite having sent a response already.

On Tue, Sep 25, 2018 at 10:01 AM Tommy Becker <[hidden email]> wrote:

Thanks for your response. I managed to snag a tcp dump of what's going on in this scenario. From what I can see the sequence of events is the following. Recall that our Jetty server is fronted by a Varnish cache.

1) Varnish sends the headers and initial part of the content for a large POST.
2) On the Jetty server, we use a streaming parser and begin validating the content.
3) We detect a problem with the content and throw an exception that results in a 400 Bad Request to the client (via JAX-RS exception mapper)
4) An ACK is sent for the segment containing the 400 error.
5) The Jetty server sends a FIN.
6) An ACK is sent for the FIN
7) Varnish sends another segment that continues the content from #1.
8) The Jetty server sends a RST.

In the server logs, we see an Early EOF from our JAX-RS resource that is parsing the content. This all seems pretty ok from the Jetty side, and it certainly seems like Varnish is misbehaving here (I'm thinking it may be this bug https://github.com/varnishcache/varnish-cache/issues/2332).  But I'm still unclear as to why this started after our upgrade from Jetty 9.2 -> 9.4. Any thoughts?


This is normal.
In Jetty 9.4 we are more aggressive in closing the connection because
we don't want to be at the mercy of a possible nasty client sending us
GiB of data when we know the application does not want to handle them.
Varnish behavior is correct too: it sees the FIN from Jetty but does
not know that Jetty does not want to read until it tries to send more
content and gets a RST.
At that point, it should relay the RST (or FIN) back to the client.

So you have 2 choices: you catch the exception during your validation,
and finish to read (and discard) the content in the application; or
you ignore the early EOFs in the logs.
I don't think that those early EOFs are logged above DEBUG level, is
that correct?

-- 
Simone Bordet
----
http://cometd.org
http://webtide.com
Developer advice, training, services and support
from the Jetty & CometD experts.
_______________________________________________
jetty-users mailing list
[hidden email]
To change your delivery options, retrieve your password, or unsubscribe from this list, visit
https://dev.eclipse.org/mailman/listinfo/jetty-users


Re: Changes in Jetty socket close behavior from 9.2 -> 9.4

Simone Bordet-3
In reply to this post by Tommy Becker
Hi,

On Wed, Sep 26, 2018 at 1:29 PM Tommy Becker <[hidden email]> wrote:
>
> We definitely do not see one. But I'm still a bit confused as to how Jetty is determining that it wants to close the connection. Although our JAX-RS resource throws an exception, that exception should be handled by the JAX-RS runtime and not propagated to Jetty (We have ExceptionMappers defined for everything). So our application is generating the response and not Jetty itself.
>

After generating the response, the control goes back to Jetty.
There, Jetty finds that there is unread content left in some buffers,
and consumes it.
After consuming the content that has already arrived at the server,
Jetty figures out that not all of it has arrived.
Now Jetty cannot wait for more content to arrive, because there is
nothing to give the content to, as the application is already done and
has already responded.
And Jetty itself cannot block a thread and wait for the rest of the
content to arrive because that would be an attack vector for nasty
clients.
The only choice is for Jetty to close the connection.
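The decision Simone describes can be caricatured in plain Java. This is an illustrative sketch with invented names, not Jetty's actual `HttpConnection` logic: whatever already arrived is discarded, and if the declared body is still incomplete the connection is closed rather than blocked on:

```java
public class CloseDecision {
    enum Action { REUSE, CLOSE }

    /** After the application has responded: discard what already arrived;
     *  if the request body is still incomplete, close rather than block
     *  a thread waiting for the rest. */
    static Action afterResponse(long declaredLength, long alreadyConsumed, long buffered) {
        long seen = alreadyConsumed + buffered; // buffered bytes are discarded
        return seen >= declaredLength ? Action.REUSE : Action.CLOSE;
    }

    public static void main(String[] args) {
        // Declared 100-byte body, only 50 bytes ever arrived: close.
        System.out.println(afterResponse(100, 40, 10)); // CLOSE
        // All 100 bytes arrived (even if the app never read them): reuse.
        System.out.println(afterResponse(100, 40, 60)); // REUSE
    }
}
```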

--
Simone Bordet
----
http://cometd.org
http://webtide.com
Developer advice, training, services and support
from the Jetty & CometD experts.

Re: Changes in Jetty socket close behavior from 9.2 -> 9.4

Greg Wilkins

Thomas,

Exactly what Simone said!

Jetty is not closing the connection because an error response was sent. It is JAX-RS generating the error page, and it chooses not to add Connection: close.

But it has also not read all the content, so Jetty aborts the connection when it sees an incomplete request message that it would have to block on to read the rest of.

Again, you could add a filter that always consumes the input... or modify the JAX-RS error handler so that it sends Connection: close.
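The second suggestion, advertising Connection: close on the error response, can be shown without any Jetty or JAX-RS code at all. The sketch below uses only the JDK's built-in com.sun.net.httpserver (class and method names are invented for illustration); a JAX-RS ExceptionMapper would set the same header on its error Response:

```java
import com.sun.net.httpserver.HttpServer;

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class CloseAdviceDemo {
    /** Starts a throwaway server that rejects every request with
     *  400 + "Connection: close", performs one request against it,
     *  and returns "status connectionHeader" as seen by the client. */
    public static String probe() throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress("localhost", 0), 0);
        server.createContext("/", exchange -> {
            // Tell the peer up front that this connection will not be reused.
            exchange.getResponseHeaders().set("Connection", "close");
            byte[] body = "rejected".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(400, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        try {
            URL url = new URL("http://localhost:" + server.getAddress().getPort() + "/");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            int status = conn.getResponseCode(); // 400 is returned, not thrown
            return status + " " + conn.getHeaderField("Connection");
        } finally {
            server.stop(0);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(probe());
    }
}
```

With the header present, an intermediary like Varnish knows not to pipeline further request data on that connection, so the later RST surprise never happens.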

cheers






On Thu, 27 Sep 2018 at 07:59, Simone Bordet <[hidden email]> wrote:
Hi,

On Wed, Sep 26, 2018 at 1:29 PM Tommy Becker <[hidden email]> wrote:
>
> We definitely do not see one. But I'm still a bit confused as to how Jetty is determining that it wants to close the connection. Although our JAX-RS resource throws an exception, that exception should be handled by the JAX-RS runtime and not propagated to Jetty (We have ExceptionMappers defined for everything). So our application is generating the response and not Jetty itself.
>

After generating the response, the control goes back to Jetty.
There, Jetty finds that there is unread content left in some buffers,
and consumes it.
After consuming the content that has already arrived to the server,
Jetty figures out that not all of it has arrived.
Now Jetty cannot wait for more content to arrive, because there is
nothing to give the content to, as the application is already done and
has already responded.
And Jetty itself cannot block a thread and wait for the rest of the
content to arrive because that would be an attack vector for nasty
clients.
The only choice is for Jetty to close the connection.

--
Simone Bordet
----
http://cometd.org
http://webtide.com
Developer advice, training, services and support
from the Jetty & CometD experts.

Re: Changes in Jetty socket close behavior from 9.2 -> 9.4

Tommy Becker
Thanks again, you guys have been super helpful. I found where this happens in the Jetty code (I think), org.eclipse.jetty.server.HttpConnection#onCompleted.  Unfortunately, it appears Varnish cannot currently deal with this, and at least one guy on the list thinks this behavior violates the spec, see https://varnish-cache.org/lists/pipermail/varnish-misc/2018-September/026501.html.  From my reading of https://tools.ietf.org/html/rfc7230#section-6.6 it's a bit unclear.

-Tommy

On Wed, Sep 26, 2018 at 6:15 PM Greg Wilkins <[hidden email]> wrote:

Thomas,

Exactly what Simone said!

Jetty is not closing the connection because an error response was sent. It is JAX-RS generating the error page, and it chooses not to add connection:close.

But it also has not read all the content, so jetty aborts the connection when it sees an incomplete request message that it would have to block to read the rest of it.

Again, you could add a filter to always consume the input... or modify JAX-RS error handler so it says connection:close

cheers






On Thu, 27 Sep 2018 at 07:59, Simone Bordet <[hidden email]> wrote:
Hi,

On Wed, Sep 26, 2018 at 1:29 PM Tommy Becker <[hidden email]> wrote:
>
> We definitely do not see one. But I'm still a bit confused as to how Jetty is determining that it wants to close the connection. Although our JAX-RS resource throws an exception, that exception should be handled by the JAX-RS runtime and not propagated to Jetty (We have ExceptionMappers defined for everything). So our application is generating the response and not Jetty itself.
>

After generating the response, the control goes back to Jetty.
There, Jetty finds that there is unread content left in some buffers,
and consumes it.
After consuming the content that has already arrived to the server,
Jetty figures out that not all of it has arrived.
Now Jetty cannot wait for more content to arrive, because there is
nothing to give the content to, as the application is already done and
has already responded.
And Jetty itself cannot block a thread and wait for the rest of the
content to arrive because that would be an attack vector for nasty
clients.
The only choice is for Jetty to close the connection.

--
Simone Bordet
----
http://cometd.org
http://webtide.com
Developer advice, training, services and support
from the Jetty & CometD experts.

Re: Changes in Jetty socket close behavior from 9.2 -> 9.4

Simone Bordet-3
Hi,

On Thu, Sep 27, 2018 at 8:11 PM Tommy Becker <[hidden email]> wrote:
>
> Thanks again, you guys have been super helpful. I found where this happens in the Jetty code (I think), org.eclipse.jetty.server.HttpConnection#onCompleted.  Unfortunately, it appears Varnish cannot currently deal with this, and at least one guy on the list thinks this behavior violates the spec, see https://varnish-cache.org/lists/pipermail/varnish-misc/2018-September/026501.html.

I disagree with the Varnish person.

HTTP/1.1 is a full duplex protocol.
The fact that the request has not been fully sent is not a reason to
not read the response, or to discard it.

Imagine a client that sends some data then stops. Varnish forwards the
data to Jetty. Jetty times out after a while because not all the data
has arrived. Jetty produces a 408 response indicating the timeout.
Why should Varnish discard the 408 only because "it failed to send the
full request and won't bother reading the response" ?

--
Simone Bordet
----
http://cometd.org
http://webtide.com
Developer advice, training, services and support
from the Jetty & CometD experts.

Re: Changes in Jetty socket close behavior from 9.2 -> 9.4

Tommy Becker
I think the rub is that if a client has received but not yet read the response prior to the reset, the socket buffer is cleared and at that point it’s gone. Regardless, I have no stake on who is right or wrong here, it just seems like a pretty significant difference that was not easy to track down and will probably bite other folks too.

-Tommy


> On Sep 27, 2018, at 5:38 PM, Simone Bordet <[hidden email]> wrote:
>
> Hi,
>
> On Thu, Sep 27, 2018 at 8:11 PM Tommy Becker <[hidden email]> wrote:
>>
>> Thanks again, you guys have been super helpful. I found where this happens in the Jetty code (I think), org.eclipse.jetty.server.HttpConnection#onCompleted.  Unfortunately, it appears Varnish cannot currently deal with this, and at least one guy on the list thinks this behavior violates the spec, see https://varnish-cache.org/lists/pipermail/varnish-misc/2018-September/026501.html.
>
> I disagree with the Varnish person.
>
> HTTP/1.1 is a full duplex protocol.
> The fact that the request has not been fully sent is not a reason to
> not read the response, or to discard it.
>
> Imagine a client that sends some data then stops. Varnish forwards the
> data to Jetty. Jetty times out after a while because not all data is
> arrived. Jetty produces a 408 response indicating the timeout.
> Why should Varnish discard the 408 only because "it failed to send the
> full request and won't bother reading the response" ?
>
> --
> Simone Bordet
> ----
> http://cometd.org
> http://webtide.com
> Developer advice, training, services and support
> from the Jetty & CometD experts.

Re: Changes in Jetty socket close behavior from 9.2 -> 9.4

Greg Wilkins
Thomas,

sorry to make you the meat in the sandwich here. Can you relay the following message, and if that does not solve the issue we will engage directly with the Varnish mailing list.

As Simone says, HTTP >= 1.1 is a duplex protocol. So much so that there are even interim responses that can be sent prior to completion of request handling: see https://tools.ietf.org/html/rfc7231#section-6.2 and https://tools.ietf.org/html/rfc2518#section-10.1

100 Continue responses are explicitly sent prior to the request body being sent (let alone read).

This is not an optimisation in Jetty. The order of sending a response vs reading content is for the most part totally beyond our control. Consider a very simple application servlet that does something like:

protected void doGet(HttpServletRequest request, HttpServletResponse response)
    throws ServletException, IOException
{
    response.setStatus(200);
    response.setContentLength(12);
    response.getWriter().println("Hello World"); // 11 chars + newline = 12 bytes
    response.flushBuffer();

    // Only now look at the request content.
    InputStream in = request.getInputStream();
    while (in.read() >= 0) {}
}

So here the application explicitly generates and flushes a response prior to even looking at the content. Jetty cannot hold up sending/flushing the response until the content is read, as we would have an unlimited buffering commitment and thus be vulnerable to DoS.

Also, once a response is sent, we can't then be obligated to read content that the application itself did not read, as we would again become vulnerable to DoS attacks.

I hope this clearly shows that the behaviour is permitted within HTTP and that it is beyond the control of Jetty to prevent it.

regards
  







On Fri, 28 Sep 2018 at 08:18, Thomas Becker <[hidden email]> wrote:
I think the rub is that if a client has received but not yet read the response prior to the reset, the socket buffer is cleared and at that point it’s gone. Regardless, I have no stake on who is right or wrong here, it just seems like a pretty significant difference that was not easy to track down and will probably bite other folks too.

-Tommy


> On Sep 27, 2018, at 5:38 PM, Simone Bordet <[hidden email]> wrote:
>
> Hi,
>
> On Thu, Sep 27, 2018 at 8:11 PM Tommy Becker <[hidden email]> wrote:
>>
>> Thanks again, you guys have been super helpful. I found where this happens in the Jetty code (I think), org.eclipse.jetty.server.HttpConnection#onCompleted.  Unfortunately, it appears Varnish cannot currently deal with this, and at least one guy on the list thinks this behavior violates the spec, see https://varnish-cache.org/lists/pipermail/varnish-misc/2018-September/026501.html.
>
> I disagree with the Varnish person.
>
> HTTP/1.1 is a full duplex protocol.
> The fact that the request has not been fully sent is not a reason to
> not read the response, or to discard it.
>
> Imagine a client that sends some data then stops. Varnish forwards the
> data to Jetty. Jetty times out after a while because not all data is
> arrived. Jetty produces a 408 response indicating the timeout.
> Why should Varnish discard the 408 only because "it failed to send the
> full request and won't bother reading the response" ?
>
> --
> Simone Bordet
> ----
> http://cometd.org
> http://webtide.com
> Developer advice, training, services and support
> from the Jetty & CometD experts.