[Jetty-support] Setting buffer size from jetty.xml

[Jetty-support] Setting buffer size from jetty.xml

Krystian Szczesny

Hi,

 

I’m trying to set the header and response buffer sizes from within the jetty.xml file.

Unfortunately it doesn’t work.

Here’s my config regarding the connector:

    <Call name="addConnector">

      <Arg>

          <New class="org.mortbay.jetty.nio.SelectChannelConnector">

            <Set name="port"><SystemProperty name="jetty.port" default="8080"/></Set>

            <Set name="maxIdleTime">30000</Set>

            <Set name="Acceptors">2</Set>

            <Set name="statsOn">false</Set>

            <Set name="confidentialPort">8443</Set>

        <Set name="lowResourcesConnections">5000</Set>

        <Set name="lowResourcesMaxIdleTime">5000</Set>

        <Set name="responseBufferSize">65536</Set>

        <Set name="headerBufferSize">32768</Set>

          </New>

      </Arg>

    </Call>

 

Unfortunately, when I’m testing the responses I’m getting two fragments:

First part:

Length: 4096

Total: 4096

Second part:

Length: 2242

Total: 6338

 

How can I check if my configuration is actually working?

This Jetty is embedded, so I can’t simply put log.debug(connector.getHeaderBufferSize()) in there.

 

If you could point me in the right direction I would be grateful.

 

Best regards,

Krystian

 

--

Krystian Szczesny



Re: [Jetty-support] Setting buffer size from jetty.xml

David Yu-2
Hi,

First of all, jetty's mailing list has been moved.
[hidden email]
[hidden email]
[hidden email]

Pretty much anything can be done (programmatically) from your jetty.xml.
You can call log info/debug from there:

    <Call name="addConnector">

      <Arg>

          <New id="connector" class="org.mortbay.jetty.nio.SelectChannelConnector">

            <Set name="port"><SystemProperty name="jetty.port" default="8080"/></Set>

            <Set name="maxIdleTime">30000</Set>

            <Set name="Acceptors">2</Set>

            <Set name="statsOn">false</Set>

            <Set name="confidentialPort">8443</Set>

        <Set name="lowResourcesConnections">5000</Set>

        <Set name="lowResourcesMaxIdleTime">5000</Set>

        <Set name="responseBufferSize">65536</Set>

        <Set name="headerBufferSize">32768</Set>

          </New>

      </Arg>

    </Call>

<Ref id="connector">
  <Get id="bufferSize" name="responseBufferSize"/>
</Ref>   

<Call class="org.mortbay.log.Log" name="info"><Arg>Response Buffer size:</Arg></Call>   
<Call class="org.mortbay.log.Log" name="info">
  <Arg>
    <Call class="java.lang.String" name="valueOf"><Arg><Ref id="bufferSize"/></Arg></Call>
  </Arg>
</Call>
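
Since your Jetty is embedded, you can also verify the values straight from Java before the server starts. A rough, untested sketch against the Jetty 6 org.mortbay API (double-check the getter names against your version):

import org.mortbay.jetty.Connector;
import org.mortbay.jetty.Server;
import org.mortbay.jetty.nio.SelectChannelConnector;

public class BufferSizeCheck
{
    public static void main(String[] args) throws Exception
    {
        Server server = new Server();

        SelectChannelConnector connector = new SelectChannelConnector();
        connector.setPort(8080);
        connector.setHeaderBufferSize(32768);
        connector.setResponseBufferSize(65536);
        server.addConnector(connector);

        // Print what the connectors actually report before starting the server
        Connector[] connectors = server.getConnectors();
        for (int i = 0; i < connectors.length; i++)
        {
            System.out.println("headerBufferSize=" + connectors[i].getHeaderBufferSize()
                    + " responseBufferSize=" + connectors[i].getResponseBufferSize());
        }

        server.start();
        server.join();
    }
}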

Cheers


RE: Re: [Jetty-support] Setting buffer size from jetty.xml

Krystian Szczesny

Hi again,

 

The header and response buffer sizes are set to what I want; unfortunately the settings don’t reflect what happens after startup, or are not responsible for it.

I’m still getting a fragmented response 4K+2K instead of one 6K :/

The time between receiving the first part and the second part is over 60ms, which is way too much.

 

Could you please help me a little and point out what I should change to receive the response in one part, not split?

 

Best regards,

Krystian

 


Re[2]: Re: [Jetty-support] Setting buffer size from jetty.xml

Chris Haynes
Just a thought on the delay aspect of this. Could it be the Nagle Algorithm striking again? (Google for 'Nagle TCP').

Is this Jetty instance running on Windows (where I've seen Nagle-induced delays of 30ms and more)?

There is a Jetty option on the sockets called something like noTcpDelay. Try finding and setting it.

Beyond that there is usually an Operating System option changing the Nagle delay - try changing that setting and watch to see if the delay you observe changes.

On the fragmentation, are there any spurious flush() calls on the output stream? That might cause fragmentation in your configuration.

Sorry I can't remember exact API details.

Chris




RE: Re[2]: Re: [Jetty-support] Setting buffer size from jetty.xml

Krystian Szczesny
I'm running it on Linux and HP-UX at the moment.
I will try the noTcpDelay but I think disabling the chunking would be
the best way.

I am using Jetty to host Axis2 web services and all I am going to send
are SOAP responses. I can more or less predict how big they are going to
be [less than 30K].

I've looked through the Jetty docs and searched Google for how to disable
chunking, but was not able to find anything useful.

Anyone?

Best regards,
Krystian


Re[4]: Re: [Jetty-support] Setting buffer size from jetty.xml

Chris Haynes
If I've understood your system configuration correctly, Jetty is acting as a kind of 'reverse proxy' - receiving client requests, forwarding them to your application on a different server. Your server then produces a response which is returned to Jetty, which passes it back to the client.

Can you tell from logs, instrumentation etc. exactly where the delay is happening? Is it before Jetty thinks it has received and passed on the response, or is it after Jetty thinks it has sent everything to the client?

I'm getting a little out-of-date on Jetty, since for the past four years my production system has simply run faultlessly; I just lurk here in case anything interesting happens - so apologies if I get this wrong, but my understanding is this...

Chunking is used if Jetty does not know how much output is to be sent. It's all that it can do if it is not told what the content length is. It can't be 'turned off' unless an explicit content length can be supplied.

So that's why knowing exactly where the delay is occurring is so important. If it is while Jetty is returning the response to the client, then buffering that entire response, measuring its length and setting an explicit 'Content-Length' should improve things by avoiding the use of chunking and letting TCP do its stuff.
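
Something along these lines might do it on the servlet side - a rough, untested sketch with the plain Servlet API (the class name is only illustrative, and it assumes the response is written through getOutputStream(), as Axis2 normally does):

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletOutputStream;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpServletResponseWrapper;

// Buffers the whole response body so an explicit Content-Length can be set,
// which lets the container avoid chunked transfer-encoding.
public class ContentLengthFilter implements Filter
{
    public void init(FilterConfig config) throws ServletException {}

    public void destroy() {}

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException
    {
        final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        final HttpServletResponse httpRes = (HttpServletResponse) res;

        // Capture everything the application writes instead of letting it commit
        HttpServletResponseWrapper wrapper = new HttpServletResponseWrapper(httpRes)
        {
            public ServletOutputStream getOutputStream()
            {
                return new ServletOutputStream()
                {
                    public void write(int b) { buffer.write(b); }
                };
            }
        };

        chain.doFilter(req, wrapper);

        // Everything is buffered now: declare the length and send it in one go
        byte[] body = buffer.toByteArray();
        httpRes.setContentLength(body.length);
        httpRes.getOutputStream().write(body);
        httpRes.flushBuffer();
    }
}

Mapped in web.xml in front of the Axis2 servlet, each response would then go out with a Content-Length header instead of Transfer-Encoding: chunked.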

If, however, the delay is between your application sending the data and Jetty receiving it all, then a different approach will be needed.

HTH,

Chris



RE: Re[4]: Re: [Jetty-support] Setting buffer size from jetty.xml

Krystian Szczesny
Hi Chris,
Thanks for the answer.

I've got Jetty with Axis2 sitting on it, with some web services that
basically take the client's request, convert it to a Map and send it to
my system. My system processes the request and returns a map, which is
converted to an XML message and sent back to the client by Axis2.

I do know how long it takes my system to process the request and
generate the response.

For example:

Processing by my system took 186ms
Between sending the request and receiving first chunk: 202ms
Between receiving first chunk and second chunk: 67ms

Total time: 269ms

The difference between 186 and 202 is not part of this problem. There
is debugging on and some code will be removed [like debug lines,
checking the time... this might account for a few ms].

I am struggling with the time between first and second chunk.

I am sure Jetty's got the whole response from Axis2, so it should just
pass it to the client.

If there are more tests I can perform just let me know.

Best regards,
Krystian



Re[6]: Re: [Jetty-support] Setting buffer size from jetty.xml

Chris Haynes
I suggest you investigate the timing within Jetty when handling the response.

From the description you present, there are five possible delay areas:

1) For Axis 2 to receive the response from your system,

2) For it to convert it into XML,

3) For it to push and flush the entire response into Jetty

4) For Jetty to hand the entire response down into the TCP layer of your server

5) For the TCP layer to send it to the client

Now that 67ms delay between chunks could be caused by any of 3, 4 or 5 (assuming the network itself is instantaneous, or at least has the same insertion delay for every chunk).

Try synchronising the clocks on the three machines involved and turning on Jetty debugging, which should give you the precise times at which 3 & 4 happen, and let you see exactly where the delay is.
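
On the client side, a crude probe like this can also help - it simply timestamps every read so you can see the size of each fragment and the gap between them (illustrative sketch; host, port and request line are placeholders):

import java.io.InputStream;
import java.net.Socket;

// Sends a canned HTTP request and logs the arrival time and size of every read
public class ChunkTimer
{
    public static void main(String[] args) throws Exception
    {
        Socket socket = new Socket("localhost", 8080);
        socket.getOutputStream().write(
                ("GET /some/service HTTP/1.1\r\n"
                        + "Host: localhost\r\n"
                        + "Connection: close\r\n\r\n").getBytes("US-ASCII"));

        InputStream in = socket.getInputStream();
        byte[] buf = new byte[8192];
        long start = System.currentTimeMillis();
        int total = 0;
        int n;
        while ((n = in.read(buf)) != -1)
        {
            total += n;
            System.out.println((System.currentTimeMillis() - start)
                    + "ms  read=" + n + "  total=" + total);
        }
        socket.close();
    }
}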

Chris




RE: Re[6]: Re: [Jetty-support] Setting buffer size from jetty.xml

Krystian Szczesny
Well, I am not able to run every test right now to get the timings, but...
I know that the time between Jetty receiving the whole message and the
client receiving the WHOLE message is about 100ms. The time between Jetty
receiving the whole message and the client receiving the first chunk is
just a few ms.
So I would like a way to force Jetty not to use chunking, or to use the
HTTP/1.0 protocol, but I was not able to find any information about it.

Is there any way to do so?

I am forcing Axis2 to send an HTTP/1.0 response to Jetty [or at least I
think so - that's what the Axis2 docs say, but I'm also posting to the
axis-user mailing list], but Jetty sends it as 1.1 with chunking enabled.

Best regards,
Krystian


RE: Re[6]: Re: [Jetty-support] Setting buffer size from jetty.xml

Krystian Szczesny
I've used a tool to send a custom header and set the HTTP version to 1.0.
Result: no chunking, fast response!

Could you please tell me how to disable chunking in Jetty with HTTP/1.1?
I am not sure if I can use 1.0 [right now waiting for a response from the
'client side'], but forcing 1.1 not to send chunked responses looks like
the perfect fix for my issue.
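
Going by the Content-Length point earlier in the thread, one thing I might try on my side: if the whole body fits in the response buffer and nothing flushes it early, the container should be able to set Content-Length itself and skip chunking. A rough, untested sketch of that idea with a plain servlet (the payload is only a placeholder):

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class UnchunkedServlet extends HttpServlet
{
    protected void doGet(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException
    {
        // Make the buffer larger than the biggest expected body (~30K here)
        res.setBufferSize(64 * 1024);
        res.setContentType("text/xml");
        res.getWriter().write("<soapenv:Envelope>...</soapenv:Envelope>");
        // No explicit flush(): the response completes inside the buffer,
        // so the container can set Content-Length and send it in one piece.
    }
}

Whether the same trick can be put in front of Axis2 [e.g. with a buffering filter like the sketch earlier in the thread] is what I still need to check.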

Best regards,
Krystian
