Specifying DB connection info for deployed application

So in CUBA, setting up the DB connection info for a deployed app was a lot easier - it was all in the “WAR Settings” dialog, where you could edit all the parameters.

(screenshot: CUBA “WAR Settings” dialog)

According to the Jmix docs here (Deployment :: Jmix Documentation), this has become much more complicated and requires a lot of additional setup. Is this the only way, or am I missing something?

This will be very important if we are to move to Jmix. We will have dozens or more pairs of containers (one Tomcat, one PostgreSQL) composed with docker-compose. In CUBA we just named the PostgreSQL container “postgres,” specified postgres as the hostname in the WAR settings, and everything just worked.

Seems there’s no simple way to do this in Jmix, or I’m missing something.

I solved this by creating a separate application-deploy.properties file with the runtime DB parameters and passing -Dspring.profiles.active=deploy to make “deploy” the active Spring profile. Spring automatically picks up that .properties file, and it all works.
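For anyone following along, here is a minimal sketch of that setup. The hostname matches the docker-compose service name; the database name and credentials are placeholders, and the property names correspond to the MAIN_DATASOURCE_* variables mentioned below:

```properties
# application-deploy.properties - loaded when the "deploy" profile is active.
# Hostname "postgres" is the docker-compose service name; the DB name and
# credentials below are placeholders, not real values.
main.datasource.url=jdbc:postgresql://postgres:5432/clientdb
main.datasource.username=client_user
main.datasource.password=changeme
```

The profile itself is activated by adding -Dspring.profiles.active=deploy to JAVA_OPTS/CATALINA_OPTS when the app runs in Tomcat.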

hi,

when you are within your docker-compose, you can also just use regular environment variables. Spring Boot has the standard capability to automatically read configuration properties from ENV vars (see the docs).

This way the configuration is completely externalised, which in the case of the DB connection should normally be done anyway, as the source code should not contain the DB credentials.

For the main Jmix datasource the variables would be MAIN_DATASOURCE_URL, MAIN_DATASOURCE_USERNAME & MAIN_DATASOURCE_PASSWORD.
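To make that concrete, a docker-compose sketch along those lines could look like this (image name, DB name and credentials are just placeholders):

```yaml
# docker-compose.yml - illustrative sketch, one app/DB pair per client.
# Image name, DB name and credentials are placeholders.
services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_DB: clientdb
      POSTGRES_USER: client_user
      POSTGRES_PASSWORD: changeme

  app:
    image: mycompany/client-app:latest
    depends_on:
      - postgres
    environment:
      # Spring Boot maps these env vars to main.datasource.url/username/password
      MAIN_DATASOURCE_URL: jdbc:postgresql://postgres:5432/clientdb
      MAIN_DATASOURCE_USERNAME: client_user
      MAIN_DATASOURCE_PASSWORD: changeme
    ports:
      - "8080:8080"
```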

Cheers
Mario

True, that might be a better solution. I just wanted to get SOMETHING working so I could be satisfied we could still deploy as we need to.

Though, if an attacker could access the source code, they could also access the docker-compose.yaml file so… still not going to keep them out of the DB.

Yes, that is true if you keep your passwords in the docker-compose file.

It was meant more to illustrate how to externalise the settings.

The actual secret injection normally depends on the container orchestration mechanisms.

For Docker Swarm you can read more about it here: Manage sensitive data with Docker secrets | Docker Documentation. Kubernetes uses a similar approach: Secrets | Kubernetes. GCP and AWS ECS have their own proprietary ways of injecting secrets from secure stores.

One thing they all have in common: they never store plain-text secrets in the actual deployment descriptor files (like docker-compose.yml). But the injection mechanism is oftentimes still system env vars.
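To sketch that idea: with Kubernetes a Secret value can be injected as an env var like this (names are purely illustrative, and in practice the Secret would be created out-of-band, e.g. via kubectl or CI, rather than committed alongside the manifests):

```yaml
# Kubernetes sketch (illustrative names): the password is injected into the
# container as the MAIN_DATASOURCE_PASSWORD env var instead of appearing in
# plain text in the Deployment manifest.
apiVersion: v1
kind: Secret
metadata:
  name: clientdb-credentials
type: Opaque
stringData:
  db-password: changeme   # in practice created via kubectl/CI, not committed
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: client-app
  template:
    metadata:
      labels:
        app: client-app
    spec:
      containers:
        - name: app
          image: mycompany/client-app:latest
          env:
            - name: MAIN_DATASOURCE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: clientdb-credentials
                  key: db-password
```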

Cheers
Mario

We’re not likely to use Swarm or K8s. What we have is a lot of companies (“clients”) that pay to use our software, and the plan is to have a pair of containers for each client - one running Tomcat with the application, the other running their PostgreSQL DB - connected with docker-compose. The app is mostly data entry and lookup; there isn’t a TON of processing, so I doubt we’ll need the complexity of Swarm or K8s.