Here are my observations from trying to run `openrouteservice/openrouteservice:v8.1.3` Docker graph builds on `europe-latest.osm.pbf` (30 GB) on AWS EC2 instances with 64 GB and 128 GB of RAM:
Initially, the java process consumes around 200% CPU. In `top`, the “Avail Mem” column stays pinned where it started (around 63 GB on the 64 GB machine). After about 10 seconds, CPU drops to ~10% and swap usage starts climbing (we’re waiting on the disk). Why is openrouteservice swapping before it has even used all of the system memory?
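For what it’s worth, the host-side swap behaviour can be inspected with stock Linux tools (nothing openrouteservice-specific here):

```shell
# vm.swappiness controls how eagerly the kernel swaps (default is 60);
# combined with a low container memory limit, pages can be pushed to
# swap long before physical RAM is exhausted.
cat /proc/sys/vm/swappiness

# Current RAM and swap usage, human-readable.
free -h
```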
On 64 GB, I’m using `XMS=10g XMX=60g`.
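For reference, this is roughly how I’m passing those heap settings to the container (a sketch from memory; the container name and volume path are illustrative and may not match the image’s actual layout):

```shell
# Sketch of the container invocation — adjust paths to your setup.
docker run -d --name ors \
  -e XMS=10g -e XMX=60g \
  -v "$(pwd)/europe-latest.osm.pbf:/home/ors/files/europe-latest.osm.pbf" \
  openrouteservice/openrouteservice:v8.1.3
```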
I can only assume the graph build would take ages to finish with all that swapping, given how slow disk I/O is in comparison. If I disable swap, the process gets killed by Linux after ~15 seconds, and dmesg shows entries like:
```
[ 4056.371136] Memory cgroup out of memory: Killed process 20755 (java) total-vm:66119256kB, anon-rss:4180852kB, file-rss:27012kB, shmem-rss:0kB, UID:0 pgtables:8624kB oom_score_adj:0
```
(that one was for the 64GB instance)
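One detail that stands out to me in that log line: it says “Memory cgroup out of memory” with `anon-rss` at only ~4 GB, which suggests the container’s cgroup limit, rather than physical RAM, may be the ceiling being hit. The effective limit can be read from inside the container (checking the cgroup v2 path first, then falling back to v1):

```shell
# Effective cgroup memory ceiling: cgroup v2 path first, then v1;
# prints "unlimited/unknown" if neither file is readable.
limit="unlimited/unknown"
for f in /sys/fs/cgroup/memory.max /sys/fs/cgroup/memory/memory.limit_in_bytes; do
  if [ -r "$f" ]; then limit=$(cat "$f"); break; fi
done
echo "cgroup memory limit: $limit"
```

From the host, `docker inspect -f '{{.HostConfig.Memory}}' <container>` reports the limit Docker itself set (0 meaning no explicit limit).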
Even with 128 GB of RAM, the kernel kills the process (XMS and XMX adjusted accordingly). Am I missing a configuration option that would alleviate this? The documentation says that `<number of profiles> * <pbf size>` should be enough for building, which works out to 60 GB for the 30 GB Europe file, and I actually remember successfully doing this exact same import earlier this year on a 64 GB machine.
I have two profiles enabled: `driving-car` and `driving-hgv`.
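Plugging my numbers into the documented sizing rule, just to show my arithmetic:

```shell
# Documented rule of thumb: heap ≈ <number of profiles> * <pbf size>.
profiles=2   # driving-car and driving-hgv
pbf_gb=30    # europe-latest.osm.pbf
echo "suggested heap: $((profiles * pbf_gb))g"
# prints: suggested heap: 60g
```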
I also tried lowering to `XMX=100g` on the 128 GB instance, to no avail. That time, however, I did get a crash dump stating that a heap allocation had failed.