We've been doing performance/stress testing on Jetty's CometD implementation. We observed that when we test the deliver feature of CometD, for example with 5000 HTTP Bayeux clients connecting/subscribing to the Jetty CometD server and the server delivering 3000 notifications/second to these clients (for this performance test we deliver the same message to all clients on the same channel), the server-side JVM runs out of Java heap space after running for 20-24 hours.
We increased maxIdleTime, lowResourcesConnections and lowResourcesMaxIdleTime in jetty.xml to support that many connections.
We wanted to know whether this is the right approach.
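For reference, this is roughly what our connector tuning looks like. The parameter names are the ones mentioned above; the class name assumes a Jetty 6.x SelectChannelConnector and the values shown here are illustrative, not our exact settings:

```xml
<!-- Sketch of the connector tuning (Jetty 6.x style; values illustrative) -->
<Call name="addConnector">
  <Arg>
    <New class="org.mortbay.jetty.nio.SelectChannelConnector">
      <Set name="port">8080</Set>
      <!-- keep long-polls alive long enough under load -->
      <Set name="maxIdleTime">300000</Set>
      <!-- thresholds before the connector switches to low-resources mode -->
      <Set name="lowResourcesConnections">20000</Set>
      <Set name="lowResourcesMaxIdleTime">5000</Set>
    </New>
  </Arg>
</Call>
```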
<init-param>
  <param-name>timeout</param-name>
  <param-value>12000000</param-value>
</init-param>
<init-param>
  <param-name>interval</param-name>
  <param-value>0</param-value>
</init-param>
<init-param>
  <param-name>maxInterval</param-name>
  <param-value>100000000</param-value>
</init-param>
We faced the problem of clients disconnecting when we had the default/lower values for timeout and maxInterval. So we experimented with them a bit, and increasing those params resolved the disconnect issue.
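One thing we can quantify from the params above (this is a back-of-the-envelope estimate, not a measurement, and it assumes a dead session keeps receiving deliveries at the full test rate until maxInterval expires):

```java
// Estimate how long a session whose client silently died is retained,
// and how many delivered messages could queue on it in the meantime.
public class RetentionEstimate {
    public static void main(String[] args) {
        long maxIntervalMs = 100_000_000L; // maxInterval from web.xml above
        long deliveryRatePerSec = 3_000;   // delivery rate from our test

        double hoursRetained = maxIntervalMs / 3_600_000.0;
        // Worst case: every delivered message stays queued on the dead session
        // until the sweep after maxInterval.
        long queuedPerDeadSession = deliveryRatePerSec * (maxIntervalMs / 1_000);

        System.out.printf("session retained for %.1f hours%n", hoursRetained);
        System.out.println("messages queued per dead session: " + queuedPerDeadSession);
    }
}
```

With maxInterval at 100000000 ms a session is retained for roughly 27.8 hours, which is in the same ballpark as the 20-24 hours we run before the OutOfMemoryError.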
After this analysis we are still not able to figure out why the memory grows out of bounds. Could it be caused by the very large values for these params in jetty.xml and web.xml?
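To see what is actually retaining the heap, we are planning to capture a dump at the moment of failure and inspect it with a heap analyzer (jhat or Eclipse MAT). These are standard HotSpot flags; the dump path is just an example:

```
-Xmx2g
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=/tmp/cometd-oome.hprof
```

If the dominating objects turn out to be per-session message queues, that would point at sessions being retained too long rather than at the delivery rate itself.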