if [ -z "${CATALINA_OPTS}" ]; then
  export CATALINA_OPTS="-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9001 -Dcom.sun.management.jmxremote.rmi.port=9001 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=localhost"
fi
if [ -z "${JAVA_OPTS}" ]; then
  export JAVA_OPTS="-Djava.awt.headless=true -server -XX:TargetSurvivorRatio=75 -XX:SurvivorRatio=64 -XX:MaxTenuringThreshold=3 -XX:+UseG1GC -XX:+ScavengeBeforeFullGC -XX:ParallelGCThreads=4 -Xms1g -Xmx2g"
fi
if [ "${BUILD_GRAPHS}" = "True" ]; then
  rm -rf "${graphs}"/*
fi
# If Tomcat was built before, copy the mounted app.config into the Tomcat webapp; otherwise copy it from the source
if [ -d "/usr/local/tomcat/webapps/ors" ]; then
  cp -f /ors-conf/app.config "$tomcat_appconfig"
else
  if [ ! -f /ors-conf/app.config ]; then
    cp -f "$source_appconfig" /ors-conf/app.config
  fi
  echo "### Package openrouteservice and deploy to Tomcat ###"
  mvn -q -f /ors-core/openrouteservice/pom.xml package -DskipTests &&
  cp -f /ors-core/openrouteservice/target/*.war /usr/local/tomcat/webapps/ors.war
fi
/usr/local/tomcat/bin/catalina.sh run
# Keep the Docker container running
exec "$@"
As the pbf file is quite large (~7.12 GB), I've set the Java heap to 20 GB. Perhaps a little overkill, but I hope that's not a bad thing. My Mac is maxed out on RAM at 32 GB.
Not quite sure what to do at this point, unless I need to change "-Xms1g -Xmx2g" in the entrypoint file.
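If the entrypoint uses the usual guard (only setting JAVA_OPTS when it is empty, as in the script above), the heap can be overridden from the environment instead of editing the file. A minimal sketch of that guard (the default values here are illustrative):

```shell
# Same pattern as the entrypoint: fall back to defaults only when
# JAVA_OPTS was not already supplied (e.g. via `docker run -e`).
set_default_opts() {
  if [ -z "${JAVA_OPTS}" ]; then
    JAVA_OPTS="-Djava.awt.headless=true -server -Xms1g -Xmx2g"
  fi
  echo "$JAVA_OPTS"
}
```

So passing something like `-e "JAVA_OPTS=-Djava.awt.headless=true -server -Xms7g -Xmx14g"` to `docker run` should take precedence over the file's defaults, without rebuilding the image.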
When an error comes up on the catalina.sh run command, it is usually because the string used as JAVA_OPTS is malformed, so Java kills the process. The best bet is to check the contents of the /usr/local/tomcat/bin/setenv.sh file inside the Docker container and make sure it reads like JAVA_OPTS="..." (and the same for CATALINA_OPTS), with JAVA_OPTS= outside the quotation marks and the rest inside them.
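One thing that bites specifically when the file was pasted from a web page is curly "smart" quotes, which the shell does not treat as quotes at all. A small check for that (the setenv.sh path is Tomcat's stock location; the container name is whatever yours is):

```shell
# Flags non-ASCII curly quotes that silently break lines like
#   export JAVA_OPTS="..."
# Run against the file copied out of the container, e.g.:
#   docker cp <container>:/usr/local/tomcat/bin/setenv.sh . && check_quotes setenv.sh
check_quotes() {
  if grep -q '[“”]' "$1"; then
    echo "smart quotes found in $1"
    return 1
  fi
  echo "quotes ok"
}
```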
Also make sure that you have set the -Xms and -Xmx parameters correctly, and that you are not trying to allocate more RAM than is currently available (i.e. that other processes aren't also using up a lot of RAM). That will often cause the process to not even start, as Java won't be able to reserve enough memory.
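A quick sanity check along those lines, run from inside the container (on Docker for Mac, /proc/meminfo inside the container reflects the Linux VM's memory setting, so this is an approximation):

```shell
# Compare a requested -Xmx (in MB) against the RAM the container can see.
requested_mb=2048   # e.g. for -Xmx2g; adjust to your setting
avail_mb=$(awk '/^MemTotal/ {print int($2 / 1024)}' /proc/meminfo)
if [ "$avail_mb" -lt "$requested_mb" ]; then
  echo "only ${avail_mb} MB visible, ${requested_mb} MB requested"
else
  echo "ok: ${avail_mb} MB visible"
fi
```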
Hmm, that's strange, as I can't see anything that looks obviously wrong there.
-Xms14g tells Java to assign 14 GB of RAM to the heap straight away, and -Xmx14g tells it to stop the process and raise an out-of-memory error as soon as the heap no longer fits in 14 GB. So it should be assigning 14 GB. One thing to try would be to set the value lower (e.g. 7 GB) as a test case and see if it gets past the initial startup stage. If it does, then it seems your system can't assign the 14 GB of RAM (possibly there is an override somewhere that limits how much of the system RAM Docker can use). If it still fails with the lower value, then the usual options of clearing the Docker cache and images and rebuilding from scratch would probably be the next step.
As for logs, Java can be a bit of a pain about reporting what is going on. Output should end up in the syslog inside the Docker container, but it is often the case that Docker containers don't create that log.
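When the syslog is empty, `docker logs <container>` and Tomcat's own log directory are usually the next places to look. A minimal scan for the messages the JVM typically emits when it cannot get the requested heap (the log path is the stock Tomcat location; adjust if yours differs):

```shell
# Search Tomcat logs for common JVM memory-failure messages.
# Usage inside the container, e.g.:
#   scan_logs /usr/local/tomcat/logs/catalina.*.log
scan_logs() {
  grep -h -E 'OutOfMemoryError|Could not reserve enough space|There is insufficient memory' "$@" \
    || echo "no JVM memory errors found"
}
```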
I've tried recreating from scratch, and changing -Xms to 7 GB while leaving -Xmx at 14 GB, but the error is still occurring at exactly the same point. I do see that the memory in the Docker resources is currently set to 4 GB, but I'm not sure whether that has anything to do with the problem.
I’ve also been able to run another ors-app with a much smaller pbf with no problems.
Based on the information at https://docs.docker.com/docker-for-mac/ it looks like you are limiting Docker to only 4 GB. So try increasing that in Docker Desktop and see if it then lets you assign more. You could also go into a container of one of the instances that works and check there how much RAM the container has (not Java heap space, but RAM); I think there might also be a Docker command for that. That will tell you whether the containers have access to all of the RAM or just a portion of it.
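From the host side, `docker stats --no-stream <container>` shows each container's memory usage against its limit. From inside the container, the limit can be read from the cgroup filesystem directly; the file location differs between cgroup v1 and v2, so this sketch checks both:

```shell
# Print the container's memory limit in bytes
# ("max" or a very large number means no limit is set).
mem_limit() {
  if [ -f /sys/fs/cgroup/memory.max ]; then                        # cgroup v2
    cat /sys/fs/cgroup/memory.max
  elif [ -f /sys/fs/cgroup/memory/memory.limit_in_bytes ]; then    # cgroup v1
    cat /sys/fs/cgroup/memory/memory.limit_in_bytes
  else
    echo "unknown"
  fi
}
```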
I set the Docker memory to 17 GB, and it didn't kill Tomcat; it came up and began to create the graphs. It was probably a combination of memory issues, but it's working up to this point. I'll have to let it run overnight and see if it produces anything.