line 34: 9 Killed /usr/local/tomcat/bin/ run

I’m trying to build ors-app using the PBF file for the entire continental US. Whenever I attempt to build it, I get this error: line 34: 9 Killed /usr/local/tomcat/bin/ run
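For context on what the message itself means (not specific to ORS): "Killed" in a bash job-status line means the child process received SIGKILL (signal 9), most often sent by the kernel's out-of-memory (OOM) killer; the number before "Killed" is the child's PID, not the signal number. A shell that reaps a SIGKILLed child reports exit status 128 + 9 = 137, which you can demonstrate directly:

```shell
# Kill a subshell with SIGKILL and observe the parent's view of it:
# the exit status is 128 + signal number, i.e. 128 + 9 = 137.
sh -c 'kill -KILL $$'
echo "exit status: $?"   # prints: exit status: 137
```

Seeing "Killed" rather than a Java stack trace is the hint that the process died from outside the JVM, not from a Java-level error.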

Here’s the file:

#!/usr/bin/env bash

if [ -z "${CATALINA_OPTS}" ]; then
  export CATALINA_OPTS=" -Djava.rmi.server.hostname=localhost"
fi

if [ -z "${JAVA_OPTS}" ]; then
  export JAVA_OPTS="-Djava.awt.headless=true -server -XX:TargetSurvivorRatio=75 -XX:SurvivorRatio=64 -XX:MaxTenuringThreshold=3 -XX:+UseG1GC -XX:+ScavengeBeforeFullGC -XX:ParallelGCThreads=4 -Xms1g -Xmx2g"
fi

echo "CATALINA_OPTS=\"$CATALINA_OPTS\"" > /usr/local/tomcat/bin/
echo "JAVA_OPTS=\"$JAVA_OPTS\"" >> /usr/local/tomcat/bin/

if [ "${BUILD_GRAPHS}" = "True" ]; then
  rm -rf ${graphs}/*
fi

# If Tomcat was built before, copy the mounted app.config to the Tomcat
# webapp app.config; otherwise copy it from the source.
if [ -d "/usr/local/tomcat/webapps/ors" ]; then
  cp -f /ors-conf/app.config $tomcat_appconfig
fi
if [ ! -f /ors-conf/app.config ]; then
  cp -f $source_appconfig /ors-conf/app.config
fi

echo "### Package openrouteservice and deploy to Tomcat ###"
mvn -q -f /ors-core/openrouteservice/pom.xml package -DskipTests &&
cp -f /ors-core/openrouteservice/target/*.war /usr/local/tomcat/webapps/ors.war

/usr/local/tomcat/bin/ run

# Keep the Docker container running
exec "$@"

As the PBF file is quite large, ~7.12 GB, I’ve set the Java heap to 20 GB. Perhaps a little overkill, but that’s not a bad thing, I hope. My Mac is maxed out on RAM at 32 GB.

Not quite sure what to do at this point, unless I need to change "-Xms1g -Xmx2g" in the entrypoint file.

Any ideas?

When an error comes up on the run command, it is normally because the string used as JAVA_OPTS is malformed, so the process gets killed. Your best bet is to check the contents of the /usr/local/tomcat/bin/ file inside the Docker container and make sure it reads JAVA_OPTS="..." (and likewise for CATALINA_OPTS), with JAVA_OPTS= outside the quotation marks and the rest inside them.

Also make sure that the -Xms and -Xmx parameters are set correctly and that you are not requesting more RAM than is currently available (i.e. that other processes aren’t already using most of it). That will often stop the process from even starting, because Java won’t be able to allocate enough memory.
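As a quick way to see what is actually free before sizing the heap (a sketch; run it inside the container or on the host, assuming the usual procps `free` tool is present):

```shell
# Show total and available memory in MB. The heap (-Xmx) plus JVM
# overhead has to fit inside the "available" figure, or the OOM killer
# will step in once the graph build starts allocating in earnest.
free -m | awk 'NR==2 {printf "total: %s MB, available: %s MB\n", $2, $7}'
```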

The file looks like this:

root@7312c67cdb1b:/usr/local/tomcat/bin# cat
CATALINA_OPTS=" -Djava.rmi.server.hostname=localhost"
JAVA_OPTS="-Djava.awt.headless=true -server -XX:TargetSurvivorRatio=75 -XX:SurvivorRatio=64 -XX:MaxTenuringThreshold=3 -XX:+UseG1GC -XX:+ScavengeBeforeFullGC -XX:ParallelGCThreads=4 -Xms14g -Xmx14g"

So it looks like the quotation marks are correct.

Xms and Xmx are set to 14 GB each, so is it using 14 or 28 GB? I’m not super familiar with the java opts.

While building the ors-app, I have almost 18 GB out of 32 GB free.

I have changed the Xms and Xmx several times, but I continue to get the same error

/ line 34: 100 Killed /usr/local/tomcat/bin/ run

I’m not quite sure how to approach this, as I have successfully run another ors-app whose pbf file is ~230 MB, while this one is 7.12 GB.

I’d also like to know which logs record the reason that Tomcat was killed off. I’ve checked all my system logs and saw nothing.
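On the logging question: the kill decision is usually made by the Linux kernel's OOM killer, not by Java itself, so it lands in the kernel log rather than in Tomcat's logs (and on Docker for Mac, that kernel log lives inside the Docker VM, which is why the host system logs show nothing). A sketch of what to look for; the sample line below is illustrative, not copied from this system:

```shell
# Inside the Docker VM or a privileged container, the kernel log shows
# entries like the sample below when the OOM killer fires:
#   dmesg | grep -i 'out of memory'
# From the host, Docker records the verdict per container:
#   docker inspect -f '{{.State.OOMKilled}}' <container-id>
# Sample kernel line, and the part worth grepping for:
sample='Out of memory: Kill process 12345 (java) score 987 or sacrifice child'
echo "$sample" | grep -o 'Kill process [0-9]* ([a-z]*)'
```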

Thanks for all the assistance.

Hmm, strange, as I can’t see anything that looks obviously wrong there.

-Xms14g tells Java to assign 14 GB of RAM to the heap straight away, and -Xmx14g caps the heap at 14 GB; as soon as the heap can no longer fit in that, the JVM stops the process with an out-of-memory error. So it should be assigning 14 GB, not 28. One thing to try would be to set the value lower (e.g. 7 GB) as a test case and see if it gets past the initial startup stage. If it does, then it seems your system can’t assign the 14 GB of RAM (possibly an override somewhere limits how much of the system RAM Docker can use). If it still fails with the lower value, then the usual options of clearing the Docker cache and images and rebuilding from scratch would probably be the next step.
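To answer the 14-vs-28 question concretely: the heap is a single region, -Xms is its starting size and -Xmx its ceiling, so 14g/14g means 14 GB total (plus some non-heap JVM overhead), not 28. A quick sanity check on whatever string ends up in the container, with the JAVA_OPTS value pasted in as a sample:

```shell
# Pull the heap flags out of a JAVA_OPTS string. With -Xms equal to
# -Xmx the JVM claims the full heap up front, which is exactly the
# situation where a too-small container memory limit kills it at startup.
JAVA_OPTS="-Djava.awt.headless=true -server -XX:+UseG1GC -Xms14g -Xmx14g"
echo "$JAVA_OPTS" | grep -o -e '-Xm[sx][0-9]*[gGmM]'
# prints:
# -Xms14g
# -Xmx14g
```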

For the logs, Java can be a bit of a pain about actually reporting what is going on. It should get put in the syslog inside the Docker container, but it is often the case that Docker containers don’t create that log.

I’ve tried recreating from scratch and changing the Xms to 7 GB while leaving the Xmx at 14 GB, but the error is still occurring at exactly the same point. I do see that the memory in the Docker resources is currently set to 4 GB, but I’m not sure whether that has anything to do with the problem.

I’ve also been able to run another ors-app with a much smaller pbf with no problems.

Google hasn’t been my friend today.

Based on that, it looks like you are limiting Docker to only 4 GB. So try increasing it in Docker Desktop and see if that allows you to allocate more. You could also go into a container of one of the instances that works and check how much RAM the container has (not Java heap space, but RAM); I think there might be a Docker command for that. That will tell you whether the containers have access to all the RAM or just a portion of it.
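One way to see the ceiling from inside a container, rather than trusting the Desktop slider (a sketch; the exact cgroup path depends on the cgroup version the Docker VM uses):

```shell
# Inside the container, the memory ceiling is exposed via cgroups:
#   cgroup v1: cat /sys/fs/cgroup/memory/memory.limit_in_bytes
#   cgroup v2: cat /sys/fs/cgroup/memory.max
# Convert the byte value to GiB to compare against the Docker Desktop
# setting. The sample value below is the 4 GB limit mentioned above.
limit_bytes=4294967296
echo "limit: $((limit_bytes / 1024 / 1024 / 1024)) GiB"   # prints: limit: 4 GiB
```

If that number is smaller than -Xmx plus JVM overhead, the container will be OOM-killed no matter what the heap flags say.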

I set the Docker memory to 17 GB, and it didn’t kill Tomcat, but brought it up and began to create the graphs. It was probably a combination of memory issues, but it’s working up to this point. I’ll have to let it run overnight and see if it produces anything.

Thanks for all your help.


Success! It created a working ors-app.

Thanks again.