Full Planet Build Specs for Routing using Kubernetes

Hi Team,

My aim is to deploy my own openrouteservice instance with the full-planet OSM data file (downloaded from https://free.nchc.org.tw/osm.planet/pbf/planet-latest.osm.pbf) using Kubernetes.

I’ve largely been successful doing this with the sample OSM files, viz. New Delhi and Heidelberg (which come with the project).

With Kubernetes, I want to skip copying the data file as part of building the image. I have shifted it into the entrypoint, so the file gets downloaded first as part of bringing up the pod.
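For reference, this is roughly the entrypoint step I mean (a sketch: the path and URL are the ones from this thread, but the function name and the final hand-off line are placeholders for whatever the image's real startup script is):

```shell
#!/bin/sh
# Sketch: fetch the planet PBF on first start, before ORS comes up.
set -eu

# download_pbf DATA_DIR PBF_URL: download the file into DATA_DIR
# unless it is already there (e.g. on a pod restart with a
# persistent volume); wget -c resumes an interrupted download.
download_pbf() {
    data_dir="$1"
    pbf_url="$2"
    pbf_file="$data_dir/$(basename "$pbf_url")"
    mkdir -p "$data_dir"
    if [ ! -f "$pbf_file" ]; then
        wget -c -O "$pbf_file" "$pbf_url"
    fi
}

# In the real entrypoint you would then hand off to the normal ORS
# startup, e.g. (hypothetical script name):
# download_pbf /ors-core/data \
#     https://free.nchc.org.tw/osm.planet/pbf/planet-latest.osm.pbf
# exec /docker-entrypoint.sh "$@"
```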

So I have a few doubts with respect to the full planet build:

  1. What should be the approximate size of the volume (/ors-core/data/) that holds the elevation_cache and the built graphs? Is it ~500 GB or ~750 GB?

  2. You have mentioned the -Xmx param: “A good rule of thumb is to give Java 2 x file size of the PBF per profile”. So when the planet data file is 50 GB and I want to run the service for only one profile, driving-car, then the memory allocated to the JVM should be 50 × 1 = 50 GB, i.e. -Xmx50g.
    If I run two profiles, driving-car and bike, then the memory should be 50 × 2 = 100 GB, i.e. -Xmx100g.
    Is my understanding correct?
    Can I run 2 profiles with 50 GB of memory? If yes, what will be the impact on build time?
    Generally, how long does it take to build one profile from a 50 GB planet file with the memory set to 50g?

  3. This point is also w.r.t. memory. There is init_threads in ors.services.routing, which is the number of threads used to initialize graphs. So if I have 50 GB of RAM, how many threads can I use to speed up the process?

  4. If I already have the graphs and elevation_cache folders generated locally, can I just copy them to my production server at the same path (/ors-core/data) and bring up a new ORS instance pointing to these graphs?
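To make point 4 concrete, this is roughly what I have in mind (a sketch: the directory names follow this thread, the archive name and helper are made up):

```shell
#!/bin/sh
# Sketch: bundle the locally built graphs and elevation cache so the
# archive can be copied to the production node and unpacked at the
# same path there.
set -eu

# pack_graphs DATA_DIR OUT_TAR: bundle graphs + elevation_cache
pack_graphs() {
    tar -C "$1" -czf "$2" graphs elevation_cache
}

# Then, on the production node (or via kubectl cp into the pod):
# pack_graphs ./data ors-graphs.tar.gz
# scp ors-graphs.tar.gz prod:/tmp/
# tar -C /ors-core/data -xzf /tmp/ors-graphs.tar.gz
```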

Thanks in advance.

Once I get this up and running with the planet OSM file, I would like to contribute a detailed writeup on deploying openrouteservice on Kubernetes.

Hi,

  1. For single deployments (no updates), 200 GB is enough.
  2. No, it’s 2 x the OSM PBF per profile, i.e. 100 GB for a 50 GB PBF. Again, it depends: you can leave a bunch of stuff out to save resources (while negatively impacting performance).
  3. No, single graph generation is not multi-threaded; that’s only relevant when there are multiple profiles in the configuration.
  4. Sure.
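Spelled out, the "2 x PBF file size per profile" rule of thumb from point 2 works out like this (a sketch with the numbers from this thread; the helper name is ours, -Xmx is the standard JVM max-heap flag):

```shell
#!/bin/sh
# Heap sizing per the "2 x PBF file size per profile" rule of thumb.
# ors_heap_gb PBF_GB PROFILES -> suggested -Xmx value in GB
ors_heap_gb() {
    echo $((2 * $1 * $2))
}

echo "-Xmx$(ors_heap_gb 50 1)g"   # one profile on a 50 GB PBF -> -Xmx100g
echo "-Xmx$(ors_heap_gb 50 2)g"   # two profiles -> -Xmx200g
```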

Hi Nils,

Many thanks for the quick reply.
I understood all your points.

  1. Done
  2. Got it, i.e. 100 GB per profile for a 50 GB file.
    From your experience, roughly how much RAM do you use to build ORS for all profiles with a 50 GB planet file, and how long does building the graphs take?
  3. If I understand correctly, I can use 2 threads if I run 2 routing profiles. Right?
  4. Done.
  1. A 128 GB server per profile; ~3-5 days to complete with all options.
  2. Right.

Hi Nils,

Thanks for your clarifications. I was able to bring up ORS in a single-node (128 GB memory) K8s cluster.
I understand that such high memory is required to build the graphs that ORS then consumes.

My question was whether I can reuse already-built graphs (point 4 above), and your answer was that I can.

Given that the graphs are already built, does ORS still require a huge 128 GB node, or can it run on a smaller node, say 16 GB?

Have you done anything of this sort?
What memory size do you recommend if I already have the graphs and elevation_cache data?

Sure, re-use the graphs, but that has to happen on 128 GB as well: more than 70 GB is held in RAM for the graphs.
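One way to sanity-check that number on your own deployment is to measure the on-disk size of the built graphs (a sketch; `du -BG` is GNU coreutils, the path is the one from this thread):

```shell
#!/bin/sh
# graphs_size_gb DIR: on-disk size of the built graphs in whole GB
# (rounded up). Per the answer above, roughly this much is held in
# RAM at serving time, so size the node to at least that plus headroom.
graphs_size_gb() {
    du -s -BG "$1" | cut -f1 | tr -d 'G'
}

# e.g. graphs_size_gb /ors-core/data/graphs
```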