Hi,
I’m deploying a Jmix (Vaadin) application on OpenShift and I’m running into an issue with Vaadin Push when I put an Nginx layer in front of the Jmix service.
My topology:
- Working path (no problem):
Route → Service → Jmix pods
Vaadin UI works fine, no reload loops, WebSocket stable.
- Failing path:
Route (nginx-route) → Nginx (DMZ) → Service → Jmix pods
As soon as I scale the Jmix deployment to more than one pod, the Vaadin UI starts reloading and I see WebSocket errors in the browser console.

Nginx ConfigMap for the frontend (simplified):
upstream jmix_backend_officer {
    server dlcn-officer.backend-d01-ktdlcn.svc.cluster.local:8080;
}

server {
    listen 8080;

    location / {
        proxy_pass http://jmix_backend_officer;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 3600;
        proxy_send_timeout 3600;
        proxy_buffering off;
    }

    location ^~ /VAADIN/push {
        proxy_pass http://jmix_backend_officer;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_read_timeout 3600;
        proxy_send_timeout 3600;
        proxy_buffering off;
    }
}
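One thing I noticed while comparing this with the nginx WebSocket proxying docs: my config hardcodes `Connection "upgrade"` on every request, whereas the documented pattern derives the header from the client's Upgrade header via a map, so plain HTTP requests keep using keep-alive and only actual WebSocket handshakes get upgraded. A variant I plan to try (sketch only; the map block has to live in the http context, outside the server block):

```nginx
# In the http context: only send "Connection: upgrade" upstream
# when the client actually sent an Upgrade header; otherwise
# fall back to "close" (per the nginx WebSocket proxying guide).
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 8080;

    location / {
        proxy_pass http://jmix_backend_officer;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        # Conditional upgrade instead of a hardcoded header:
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_read_timeout 3600;
        proxy_buffering off;
    }
}
```

I don't know yet whether this alone explains the abnormal closes, so treat it as an untested variation on the config above.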
Browser console (Vaadin push) shows:
Invoking executeWebSocket, using URL:
wss://nginx-route-kl3init-dev.apps.rm2.thpm.p1.openshiftapps.com/VAADIN/push?v-r=push&...
Atmosphere: websocket.onerror
Atmosphere: websocket.onclose
Websocket closed, reason: Connection was closed abnormally (that is, with no close frame being sent). - wasClean: false
Atmosphere: Firing onReconnect
...
Navigated to https://nginx-route-kl3init-dev.apps.rm2.thpm.p1.openshiftapps.com/login?error
Notes:
- Jmix is configured with embedded Hazelcast for clustering.
- When I bypass Nginx and point the Route directly at the Jmix Service, Vaadin Push works fine with multiple pods.
- Only when I insert Nginx between the Route and the Service does the WebSocket to /VAADIN/push keep closing abnormally, after which the UI reloads and redirects to /login?error.
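My current suspicion (please correct me if this is off track): since the upstream has a single server entry pointing at the ClusterIP Service, it is kube-proxy, not nginx, that picks the pod, so consecutive requests from one Vaadin session can land on different pods and the push connection loses its HTTP session. A sticky variant I'm considering (untested sketch; it assumes a headless Service so the DNS name resolves to individual pod IPs, and that the session cookie is the default JSESSIONID — ip_hash seems pointless here because all traffic arrives from the Route's address):

```nginx
upstream jmix_backend_officer {
    # Keep each servlet session (and its /VAADIN/push connection)
    # pinned to one pod by hashing on the session cookie.
    # NOTE: only effective if this name resolves to multiple pod
    # IPs (headless Service, clusterIP: None); with a regular
    # ClusterIP Service, kube-proxy still balances behind one VIP.
    hash $cookie_JSESSIONID consistent;
    server dlcn-officer.backend-d01-ktdlcn.svc.cluster.local:8080;
}
```

Is pod-level stickiness the right direction at all, or does Vaadin Push with Hazelcast clustering not require it?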
Questions:
- Is there a recommended Nginx configuration for Jmix/Vaadin Push behind an OpenShift Route (TLS terminated at the Route, HTTP from the Route to Nginx, then HTTP from Nginx to the Jmix Service)?
- Are there known issues with Vaadin Push behind two proxy layers (Route and Nginx)? Do the X-Forwarded-* or Host headers need to be set in a particular way?
- Does /VAADIN/push need its own location block or additional settings (proxy_read_timeout, proxy_send_timeout, proxy_buffering, etc.) to keep the WebSocket stable?
- An example of a working Nginx config in front of a Jmix/Vaadin app on OpenShift (Route → Nginx → Service) would be very helpful.
Thanks in advance for any guidance.