[jetty-users] Getting incorrect request start times in Jetty 8.0



David Stryker
I'm trying to understand the root cause of a problem we're seeing with Jetty 8.0 that causes Request.getStartTime() to return a value from _after_ the request was processed rather than from the start of request processing.  This is affecting all ~30 services in the Ness platform.  The services seem to be running fine, except that the elapsed times in the request logs are tiny; nearly all 0ms or 1ms.

I ran a server locally and used the debugger to trace the calls to Request.setTimeStamp().  It turns out that Request.setTimeStamp() is called twice in a single request, for reasons I don't understand.
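For context on why the double call matters: my understanding is that the request log derives elapsed time roughly as "now minus Request.getTimeStamp()", so a second, late call to setTimeStamp() would explain the tiny values.  Here is a plain-Java sketch of that arithmetic (FakeRequest and the class names are mine, not Jetty code; it just models the getter/setter pair):

```java
// Sketch of the symptom: if the start timestamp is overwritten after the
// work is done, the computed elapsed time collapses to ~0ms.
class FakeRequest {
    private long timeStamp;

    void setTimeStamp(long ts) { timeStamp = ts; }
    long getTimeStamp() { return timeStamp; }
}

public class DoubleTimestampDemo {
    public static void main(String[] args) throws InterruptedException {
        FakeRequest req = new FakeRequest();

        req.setTimeStamp(System.currentTimeMillis()); // first call (startRequest): correct start time
        Thread.sleep(50);                             // simulate 50ms of request processing
        req.setTimeStamp(System.currentTimeMillis()); // second call: overwrites the start time

        // Request-log style arithmetic: elapsed = now - startTime
        long elapsed = System.currentTimeMillis() - req.getTimeStamp();
        System.out.println("elapsed=" + elapsed + "ms"); // tiny, despite the 50ms of work
    }
}
```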

I first hit the breakpoint in a call from AsyncHttpConnection.startRequest(), which is where the timestamp _should_ be set.

Here's the full call stack:

Thread [qtp445620464-30] (Suspended (breakpoint at line 1874 in Request))
Request.setTimeStamp(long) line: 1874
AsyncHttpConnection(AbstractHttpConnection).startRequest(Buffer, Buffer, Buffer) line: 717
AbstractHttpConnection$RequestHandler.startRequest(Buffer, Buffer, Buffer) line: 926
HttpParser.parseNext() line: 482
HttpParser.parseAvailable() line: 230
AsyncHttpConnection.handle() line: 77
SelectChannelEndPoint.handle() line: 606
SelectChannelEndPoint$1.run() line: 46
QueuedThreadPool.runJob(Runnable) line: 603
QueuedThreadPool$3.run() line: 538
Thread.run() line: 680

Then, in the same request, I also hit the breakpoint in a call from SelectChannelConnector.customize().

Here is the full call stack:

Thread [qtp445620464-30] (Suspended (breakpoint at line 1874 in Request))
Request.setTimeStamp(long) line: 1874
SelectChannelConnector.customize(EndPoint, Request) line: 132
AsyncHttpConnection(AbstractHttpConnection).handleRequest() line: 453
AsyncHttpConnection(AbstractHttpConnection).headerComplete() line: 890
AbstractHttpConnection$RequestHandler.headerComplete() line: 944
HttpParser.parseNext() line: 630
HttpParser.parseAvailable() line: 230
AsyncHttpConnection.handle() line: 77
SelectChannelEndPoint.handle() line: 606
SelectChannelEndPoint$1.run() line: 46
QueuedThreadPool.runJob(Runnable) line: 603
QueuedThreadPool$3.run() line: 538
Thread.run() line: 680

The call to headerComplete() at HttpParser.parseNext() line 630 is:

    case HttpTokens.NO_CONTENT:
        _handler.headerComplete();
        _state=_persistent||(_responseStatus>=100&&_responseStatus<200)?STATE_END:STATE_SEEKING_EOF;
        _handler.messageComplete(_contentPosition);
        return 1;

By this time, the request is over, so this looks like the bug. There is a comment right above the
code in SelectChannelConnector.customize() indicating that the code is used only by "redispatches":

            // Loop here to handle async request redispatches.
            // The loop is controlled by the call to async.unhandle in the
            // finally block below. If call is from a non-blocking connector,
            // then the unhandle will return false only if an async dispatch has
            // already happened when unhandle is called. For a blocking connector,
            // the wait for the asynchronous dispatch or timeout actually happens
            // within the call to unhandle().
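If the second customize() call turns out to be expected for redispatch handling, one defensive shape that occurred to me (purely my own sketch, not Jetty's API or an actual fix) would be a first-write-wins timestamp, so a later pass through customize() cannot clobber the original start time:

```java
// Sketch of a first-write-wins timestamp guard (my own code, not Jetty's):
// later calls to setTimeStamp() are ignored once a start time is recorded,
// so a redispatch cannot overwrite the original value.
class GuardedRequest {
    private long timeStamp; // 0 == not yet set

    void setTimeStamp(long ts) {
        if (timeStamp == 0) {
            timeStamp = ts; // first write wins; subsequent writes are ignored
        }
    }

    long getTimeStamp() { return timeStamp; }

    // Pooled request objects would need an explicit reset between uses.
    void recycle() { timeStamp = 0; }
}
```

The recycle() step matters because Jetty reuses Request objects across connections, so the guard has to be cleared between requests.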

Can anyone tell me what's going wrong here?  The most obvious answer is that the Jetty code is fine, and we at Ness are doing something wrong when setting up request filters, or something like that, but I haven't been able to figure it out.

I would appreciate any pointers.

Thanks,

David Stryker
Ness Computing

_______________________________________________
jetty-users mailing list
[hidden email]
https://dev.eclipse.org/mailman/listinfo/jetty-users