In my migrated CUBA application, I have data importers that import data (via CSV files) from our legacy system that the CUBA/Jmix system is replacing. They all work flawlessly, just as they did in CUBA.
All except one. The importer that handles patient notes and their attached documents runs into a very serious memory issue once it reaches the portion of the data that has attached documents. (This is just a test load; not all of the notes have documents.)
The code that reads the file into the FileDescriptor is very simple:
try (FileInputStream inputStream = new FileInputStream(documentFile)) {
    byte[] fileBytes = new byte[inputStream.available()];
    inputStream.read(fileBytes);
    fileStorageAPI.saveFile(fileDescriptor, fileBytes);
} catch (FileNotFoundException e) {
    e.printStackTrace();
} catch (IOException | FileStorageException e) {
    throw new RuntimeException(e);
}
documentFile is just a java.io.File object representing a file that has been verified to exist on the server and to be readable.
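In case it helps, the pre-check on the legacy file is nothing special. A simplified sketch (documentIsUsable and documentPath are illustrative names; the path comes from the CSV row):

import java.io.File;

// Simplified sketch of the existence/readability check -- names are illustrative.
private boolean documentIsUsable(String documentPath) {
    File documentFile = new File(documentPath);   // path read from the CSV row
    return documentFile.exists() && documentFile.canRead();
}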
When the importer gets to the portion of the data that does have attached docs, memory usage starts blowing up very quickly.
Using fileLoader.saveStream(fileDescriptor, () -> inputStream) instead of the byte array method produces the same memory issue.
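For completeness, that streaming attempt was wired up essentially like this (simplified; fileLoader is the injected FileLoader bean from the CUBA compatibility APIs):

// Simplified sketch of the saveStream attempt -- mirrors the byte array version above.
try (FileInputStream inputStream = new FileInputStream(documentFile)) {
    fileLoader.saveStream(fileDescriptor, () -> inputStream);
} catch (FileNotFoundException e) {
    e.printStackTrace();
} catch (IOException | FileStorageException e) {
    throw new RuntimeException(e);
}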
What am I missing here?