Not enough memory error


After successfully setting up an ORS backend using the OSM file for the south of Brazil (200 MB), I tried the same with the file for the whole of Brazil (1 GB), but localhost:8080/ors/health got stuck in “not ready” and ors.log reported some memory errors.

Here is my docker-compose.yml; I set both Xms and Xmx to 8g (well above the suggested 2× the PBF size).
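For reference, heap settings like these are passed through the container's Java options; in the upstream openrouteservice example compose file this is done via a JAVA_OPTS environment variable. A minimal sketch, assuming that example's service and variable names (they may differ in your setup):

```yaml
services:
  ors-app:
    # image/build/ports/volumes as in the upstream example compose file
    environment:
      # Heap sized to 8 GB as described above; adjust to your available RAM.
      - "JAVA_OPTS=-Djava.awt.headless=true -server -Xms8g -Xmx8g"
```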

I left only the “vehicles-car” profile active in app.config.sample to reduce memory usage.
The laptop is running Ubuntu 18.04 with 16 GB RAM.
I appreciate any suggestions for addressing this issue.
If any other file or log is needed, please ask and I will update the post.

Best regards.

I think this is likely because, though the PBF is small for South America, the geographical extent is a lot bigger, so quite a bit of elevation data is loaded as well. The step it is failing on is the landmark processing, so what I expect is happening is that, with the number of preparation threads set to 8, having both the shortest and fastest calculations running at the same time is too much. Try again with the following change:


Change "threads": 8, to "threads": 1,

You can also try changing the value for landmarks; we have played around with that here and found that 16 is pretty much as good as 24, while saving quite a lot of RAM.
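Both settings live in the profile's landmark-preparation block of app.config. The exact nesting varies between versions, but with both suggestions applied the fragment would look roughly like this (a sketch, not a complete config):

```json
{
  "preparation": {
    "methods": {
      "lm": {
        "enabled": true,
        "threads": 1,
        "weightings": "fastest,shortest",
        "landmarks": 16
      }
    }
  }
}
```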

Hi @adam

Thanks for the suggestions.
I will test your suggestions in this order:

  1. setting threads to 1;
  2. setting landmarks to 16;
  3. deleting shortest weightings.

I will post the results when one of the options works, or after the last suggestion fails.

Best regards.

Hi @adam

Setting threads to 1 still caused the same error; it stayed in “not ready” for about 2 hours.
After also setting landmarks to 16, it finally reached “ready” status in 30 minutes.
Thank you.

Best regards.

Hi All,

We changed the configuration as follows but still get the same error. In our case the PBF file is 2 GB, and we have already set Xms to 20g. From other posts it seems there may be a bug in the release. Any other suggestions for fixing the issue?

  1. setting threads to 1;
  2. setting landmarks to 16;
  3. deleting shortest weightings.

Caused by: java.lang.OutOfMemoryError: Java heap space
ors-app | at com.carrotsearch.hppc.IntObjectHashMap.allocateBuffers( ~[hppc-0.8.1.jar:?]
ors-app | at com.carrotsearch.hppc.IntObjectHashMap.allocateThenInsertThenRehash( ~[hppc-0.8.1.jar:?]
ors-app | at com.carrotsearch.hppc.IntObjectHashMap.put( ~[hppc-0.8.1.jar:?]
ors-app | at com.graphhopper.routing.AbstractBidirAlgo.fillEdges( ~[graphhopper-core-v0.13.2.jar:?]
ors-app | at com.graphhopper.routing.AbstractBidirAlgo.fillEdgesFrom( ~[graphhopper-core-v0.13.2.jar:?]
ors-app | at com.graphhopper.routing.AbstractBidirAlgo.runAlgo( ~[graphhopper-core-v0.13.2.jar:?]
ors-app | at com.graphhopper.routing.lm.LandmarkStorage$LandmarkExplorer.runAlgo( ~[graphhopper-core-v0.13.2

Hi @peter,

Sorry, that was my bad. The app.config wasn’t mirrored properly into the container.

You can try my branch fix-docker, which should get you up and running. You’ll have to rebuild the image though.
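Rebuilding from the branch would look roughly like this. This is a sketch: the repository URL and the docker directory layout are assumed from the upstream openrouteservice project, and only the branch name fix-docker comes from the post above.

```shell
# Clone the repository and switch to the fix branch (URL assumed upstream)
git clone https://github.com/GIScience/openrouteservice.git
cd openrouteservice
git checkout fix-docker

# Rebuild the image instead of pulling it from Docker Hub
cd docker
docker-compose up -d --build
```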

Or you can wait a few days (or maybe only hours) until it’s merged into master; then it should be usable from Docker Hub as well.

Hi @nils

Has the fix been merged into master? If we clone the repository now, will it work?


Sorry for the late reply, I’m just seeing this.

It’s been merged since yesterday. Give it a spin and ideally report back.
