Problem behind reverse proxy - long-polling request spam in an infinite loop

Hi,

Using Jmix 2.0.2.

When I run the Jmix app behind a reverse proxy (nginx, no extra config), I have a problem with websockets (at least I think that's what it is). When I open the app in a browser and use it (open a screen with some data), the frontend starts spamming requests in an infinite loop (approximately 1500-2000 req/min), all calling the /VAADIN/push endpoint.

Just before the request spam starts, I can see this exception in the log:

2023-10-09T14:19:29.215Z  WARN 1 --- [sphere-Shared-0] org.atmosphere.cpr.DefaultBroadcaster    : This message Entry{message=com.vaadin.flow.server.communication.AtmospherePushConnection$PushMessage@5ea4672, type=RESOURCE, future=org.atmosphere.cpr.BroadcasterFuture@744b6e42} will be lost
2023-10-09T14:19:29.215Z  INFO 1 --- [nio-7003-exec-1] c.g.d.portal.security.HttpLoggingFilter  : [jakub.cech@generalbytes.com] HTTP/1.0 200 - 1 ms - GET http://portal.gii.cz/admin/VAADIN/push
2023-10-09T14:19:29.215Z  WARN 1 --- [sphere-Shared-0] org.atmosphere.cpr.DefaultBroadcaster    : Failed to submit broadcast handler runnable to for Broadcaster/*

java.lang.IllegalStateException: The request object has been recycled and is no longer associated with this facade
	at org.apache.catalina.connector.RequestFacade.checkFacade(RequestFacade.java:856) ~[tomcat-embed-core-10.1.8.jar:10.1.8]
	at org.apache.catalina.connector.RequestFacade.getHeader(RequestFacade.java:505) ~[tomcat-embed-core-10.1.8.jar:10.1.8]
	at jakarta.servlet.http.HttpServletRequestWrapper.getHeader(HttpServletRequestWrapper.java:82) ~[tomcat-embed-core-10.1.8.jar:6.0]
	at org.springframework.security.web.firewall.StrictHttpFirewall$StrictFirewalledRequest.getHeader(StrictHttpFirewall.java:714) ~[spring-security-web-6.1.0.jar:6.1.0]
	at jakarta.servlet.http.HttpServletRequestWrapper.getHeader(HttpServletRequestWrapper.java:82) ~[tomcat-embed-core-10.1.8.jar:6.0]
	at jakarta.servlet.http.HttpServletRequestWrapper.getHeader(HttpServletRequestWrapper.java:82) ~[tomcat-embed-core-10.1.8.jar:6.0]
	at jakarta.servlet.http.HttpServletRequestWrapper.getHeader(HttpServletRequestWrapper.java:82) ~[tomcat-embed-core-10.1.8.jar:6.0]
	at jakarta.servlet.http.HttpServletRequestWrapper.getHeader(HttpServletRequestWrapper.java:82) ~[tomcat-embed-core-10.1.8.jar:6.0]
	at jakarta.servlet.http.HttpServletRequestWrapper.getHeader(HttpServletRequestWrapper.java:82) ~[tomcat-embed-core-10.1.8.jar:6.0]
	at org.atmosphere.cpr.AtmosphereRequestImpl.getHeader(AtmosphereRequestImpl.java:351) ~[atmosphere-runtime-3.0.3.slf4jvaadin1.jar:3.0.3.slf4jvaadin1]
	at org.atmosphere.cpr.AtmosphereRequestImpl.getHeader(AtmosphereRequestImpl.java:336) ~[atmosphere-runtime-3.0.3.slf4jvaadin1.jar:3.0.3.slf4jvaadin1]
	at com.vaadin.flow.server.communication.LongPollingCacheFilter.filter(LongPollingCacheFilter.java:56) ~[flow-server-24.0.9.jar:24.0.9]
	at org.atmosphere.cpr.BroadcasterConfig.filter(BroadcasterConfig.java:452) ~[atmosphere-runtime-3.0.3.slf4jvaadin1.jar:3.0.3.slf4jvaadin1]
	at org.atmosphere.cpr.DefaultBroadcaster.perRequestFilter(DefaultBroadcaster.java:818) ~[atmosphere-runtime-3.0.3.slf4jvaadin1.jar:3.0.3.slf4jvaadin1]
	at org.atmosphere.cpr.DefaultBroadcaster.deliverPush(DefaultBroadcaster.java:691) ~[atmosphere-runtime-3.0.3.slf4jvaadin1.jar:3.0.3.slf4jvaadin1]
	at org.atmosphere.cpr.DefaultBroadcaster.push(DefaultBroadcaster.java:568) ~[atmosphere-runtime-3.0.3.slf4jvaadin1.jar:3.0.3.slf4jvaadin1]
	at org.atmosphere.cpr.DefaultBroadcaster$1.run(DefaultBroadcaster.java:423) ~[atmosphere-runtime-3.0.3.slf4jvaadin1.jar:3.0.3.slf4jvaadin1]
	at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source) ~[na:na]
	at java.base/java.util.concurrent.FutureTask.run(Unknown Source) ~[na:na]
	at java.base/java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(Unknown Source) ~[na:na]
	at java.base/java.util.concurrent.ForkJoinTask.doExec(Unknown Source) ~[na:na]
	at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(Unknown Source) ~[na:na]
	at java.base/java.util.concurrent.ForkJoinPool.scan(Unknown Source) ~[na:na]
	at java.base/java.util.concurrent.ForkJoinPool.runWorker(Unknown Source) ~[na:na]
	at java.base/java.util.concurrent.ForkJoinWorkerThread.run(Unknown Source) ~[na:na]

I identified that the request just before this is a websocket request:
wss://SERVER_URL/VAADIN/push?v-r=push&v-uiId=0&v-pushId=09a32364-236b-46e5-b452-8df88c02d3e9&X-Atmosphere-tracking-id=0&X-Atmosphere-Framework=3.1.2-javascript&X-Atmosphere-Transport=websocket&X-Atmosphere-TrackMessageSize=true&Content-Type=application/json;%20charset=UTF-8&X-atmo-protocol=true&X-Vaadin-LastSeenServerSyncId=3

Subsequent requests are then plain HTTP:
https://SERVER_URL/VAADIN/push?v-r=push&v-uiId=0&v-pushId=09a32364-236b-46e5-b452-8df88c02d3e9&X-Atmosphere-tracking-id=fe73b1f0-5cae-4af9-a449-60a75dc7fbd7&X-Atmosphere-Framework=3.1.2-javascript&X-Atmosphere-Transport=long-polling&X-Atmosphere-TrackMessageSize=true&Content-Type=application%2Fjson%3B%20charset%3DUTF-8&X-atmo-protocol=true&X-Vaadin-LastSeenServerSyncId=20&_=1696929846697

The exception appears in the log sometimes only at the start, sometimes (I guess) with every request, and sometimes just randomly.

When I enable debug logs, I see these lines logged for every request:

2023-10-10T10:32:36.736Z DEBUG 1 --- [nio-7003-exec-3] nServletConfiguration$RootExcludeHandler : Mapped to org.springframework.web.servlet.mvc.ServletForwardingController@b52bc7b
2023-10-10T10:32:36.739Z DEBUG 1 --- [nio-7003-exec-3] c.v.f.server.communication.PushHandler   : New push connection for resource ee7998c4-2a4c-4759-b272-e60637b91e91 with transport LONG_POLLING
2023-10-10T10:32:36.741Z DEBUG 1 --- [nio-7003-exec-3] org.atmosphere.cpr.DefaultBroadcaster    : Sending cached message [com.vaadin.flow.server.communication.AtmospherePushConnection$PushMessage@643a43ec, com.vaadin.flow.server.communication.AtmospherePushConnection$PushMessage@36e03b28, com.vaadin.flow.server.communication.AtmospherePushConnection$PushMessage@406319d6, com.vaadin.flow.server.communication.AtmospherePushConnection$PushMessage@218d4d99] to ee7998c4-2a4c-4759-b272-e60637b91e91
2023-10-10T10:32:36.743Z DEBUG 1 --- [nio-7003-exec-3] o.atmosphere.cpr.AtmosphereResourceImpl  : Cached message found, not suspending ee7998c4-2a4c-4759-b272-e60637b91e91
2023-10-10T10:32:36.743Z DEBUG 1 --- [nio-7003-exec-3] c.v.f.s.c.AtmospherePushConnection       : Disconnection already in progress, ignoring request
2023-10-10T10:32:36.743Z DEBUG 1 --- [nio-7003-exec-3] c.v.f.server.communication.UidlWriter    : * Creating response to client
2023-10-10T10:32:36.744Z DEBUG 1 --- [nio-7003-exec-3] c.v.f.server.communication.UidlWriter    : * Creating response to client

This does not normally happen on localhost. But I was able to identify that when I change the @Push annotation to @Push(transport = Transport.LONG_POLLING), the same thing happens even on localhost running from the IDE. There it starts immediately when the web app is opened, with a request rate of around 10,000 req/min, since localhost runs on better hardware.

The exception appears there too, so I think it doesn't have anything to do with a websocket problem as such.

So I think there is some problem with websockets that causes a switch to the long-polling transport (if that is even possible?), but I can't tell why it is happening. And then there is probably some additional problem with long-polling itself.

I found a hint about configuring nginx timeouts, but that did not help. Here is my current config:

server {
...
        client_max_body_size 20M;

        location /admin/ {
                proxy_set_header Host $host;

                proxy_set_header X-Forwarded-Server $host;
                proxy_set_header X-Forwarded-Proto  $scheme;
                proxy_set_header X-Forwarded-For    $proxy_add_x_forwarded_for;
                proxy_set_header X-Real-IP $remote_addr;

                proxy_pass http://localhost:7003;

                proxy_read_timeout 3600;
                proxy_connect_timeout 240;
                proxy_http_version 1.1;
        }

        proxy_read_timeout 600;
        proxy_connect_timeout 600;
        proxy_send_timeout 600;

...

}
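
One thing I have not tried yet: as far as I understand the nginx docs, proxying WebSocket connections only works when the Upgrade and Connection headers are explicitly forwarded, and my config above does not set them. A minimal sketch of what my /admin/ location might look like with those two headers added (this is an untested guess on my side; everything else is just copied from the config above):

        location /admin/ {
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-Server $host;
                proxy_set_header X-Forwarded-Proto  $scheme;
                proxy_set_header X-Forwarded-For    $proxy_add_x_forwarded_for;
                proxy_set_header X-Real-IP $remote_addr;

                # the two lines below are the addition: without them nginx does not
                # pass the WebSocket handshake through, so the client could be
                # falling back to long-polling
                proxy_set_header Upgrade $http_upgrade;
                proxy_set_header Connection "upgrade";

                proxy_pass http://localhost:7003;

                proxy_http_version 1.1;
                proxy_read_timeout 3600;
                proxy_connect_timeout 240;
        }

If someone can confirm whether this is required for Vaadin push behind nginx, or whether it is unrelated to the request spam, that would already help.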

I did not find any relevant config for Jmix or Vaadin. I have been struggling with this for weeks now and I can't find a solution. I am not even sure what the problem actually is. I don't care about websockets vs. long-polling per se; maybe there is a completely different problem that causes this request spamming. Can you help me with that?

Thank you,
Jakub


Any feedback on this? I still wasn't able to solve it. Thank you.

This looks fixed in Jmix 2.1.
I did no additional configuration of the reverse proxy, and after the upgrade I don't have this issue.
I am not sure what the cause was. I suspect some bug in Vaadin, but I was unable to identify it.