Session replication with Hazelcast for horizontal scaling — how to avoid re-login on server failover?

We’re running a Jmix 2.7.4 application horizontally scaled behind Nginx with sticky sessions (ip_hash):

User → Nginx (sticky session) → Server A / Server B

The problem:

  1. User is working on Server A
  2. Server A crashes or goes down for maintenance
  3. Nginx redirects user to Server B → session doesn’t exist → forced re-login
  4. Server A comes back healthy → Nginx routes user back to Server A → session doesn’t exist here either → forced re-login again

This creates a very poor user experience — users lose their work context and have to re-login multiple times during a single failover event.

What we want:

When Server A goes down, the user should seamlessly continue working on Server B without re-login. We understand that Vaadin’s server-side UI state (component trees, view state) cannot be replicated between nodes. That’s acceptable — we’re fine with the page reloading. But at
minimum, the authentication and authorization state (user principal, security context, roles/permissions) must survive failover so the user doesn’t have to re-login.

Our stack:

  • Jmix 2.7.4 / Spring Boot
  • Vaadin 24.9
  • Hazelcast 5.5 (already in the project for caching)
  • PostgreSQL
  • Nginx as load balancer

Questions:

  1. Has anyone successfully implemented session replication (at least for authentication state) with Hazelcast or Redis in a Jmix + Vaadin application?
  2. Is there a recommended approach for Jmix apps specifically?

Any guidance or experience would be greatly appreciated.
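For what it's worth, Spring Session provides a Hazelcast-backed session repository that can replicate the HTTP session (including Spring Security's SecurityContext) across nodes. A minimal sketch, assuming the spring-session-hazelcast dependency is added to the project and Spring Boot's auto-configured HazelcastInstance bean (already present for caching) is reused; whether the rest of your session contents serialize cleanly alongside Vaadin is something you'd have to verify:

```java
// Hypothetical sketch: replicate HTTP sessions into the existing Hazelcast
// cluster via Spring Session, so authentication state survives failover.
// Assumes the spring-session-hazelcast dependency is on the classpath
// (version per your Spring Boot / Jmix BOM).
import org.springframework.context.annotation.Configuration;
import org.springframework.session.hazelcast.config.annotation.web.http.EnableHazelcastHttpSession;

@Configuration
@EnableHazelcastHttpSession
public class SessionReplicationConfig {
    // Spring Session picks up the HazelcastInstance bean already configured
    // for caching and stores sessions in a distributed map, making the
    // SecurityContext visible to all nodes.
}
```

Note that Vaadin's server-side UI state is generally not serializable without extra tooling, so you'd likely still see a page reload on failover; the sketch above only targets keeping the login alive.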

This shouldn’t happen. The user should stay on Server B until they log in again.
Can you provide more information on your app servers and Nginx configuration?

We don’t have a ready solution for this partial replication, but it seems achievable. We will explore this option.

Regards,
Konstantin

Sticky sessions and ip_hash are different things!

In fact, the basic free edition of Nginx (not Nginx Plus) doesn’t have a sticky-session option.
That’s why you get a second re-login in your environment.

Note that a large share of nontrivial applications use a separate authentication service (Keycloak or similar); in that case, authentication info isn’t lost after a server switch. One more reason to use a separate authentication service.
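To illustrate, delegating authentication to an external Keycloak realm via OIDC can be wired up through standard Spring Boot properties. A sketch only — the host, realm, and client values below are placeholders, and Jmix also ships its own OpenID Connect add-on that you may prefer over raw Spring Security configuration:

# Hypothetical application.properties fragment: authentication lives in
# Keycloak, so login state is not tied to any single app node.
spring.security.oauth2.client.provider.keycloak.issuer-uri=https://keycloak.example.com/realms/myrealm
spring.security.oauth2.client.registration.keycloak.client-id=jmix-app
spring.security.oauth2.client.registration.keycloak.client-secret=changeme
spring.security.oauth2.client.registration.keycloak.scope=openid,profile,email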

The “forced re-login” issue occurs because we are currently using the ip_hash directive in Nginx for session persistence. When Server A recovers and passes its health check, the ip_hash algorithm deterministically routes the user back to Server A (based on their IP address). Since our application uses local in-memory session management, Server A has no record of the session created while the user was on Server B. Consequently, Server A treats the user as unauthenticated and forces a re-login.

upstream backend_servers {
    ip_hash;
    server 10.0.0.1:8080;   # Server A
    server 10.0.0.2:8080;   # Server B
}

server {
    listen 80;
    location / {
        proxy_pass http://backend_servers;
        # … other proxy settings
    }
}
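Since ip_hash re-pins clients by IP address as soon as Server A passes its health check again, one alternative available in open-source Nginx is consistent hashing on the session cookie, so routing follows the session rather than the client IP. A sketch (untested; the cookie name and addresses are assumptions from the posts above):

upstream backend_servers {
    # Route by session cookie instead of client IP, using the open-source
    # hash directive of the upstream module.
    hash $cookie_JSESSIONID consistent;
    server 10.0.0.1:8080;   # Server A
    server 10.0.0.2:8080;   # Server B
}

One caveat: requests that carry no session cookie yet all hash to the same value, so initial logins tend to land on one server until a cookie is issued.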

So it looks like a proper sticky-session configuration plus external authentication with Keycloak/OIDC, as @albudarov suggested, should satisfy your requirement:

the authentication and authorization state (user principal, security context, roles/permissions) must survive failover so the user doesn’t have to re-login.

Thanks, I’ll try this solution.