Hi Team,
My aim is to deploy my own openrouteservice instance with the planet OSM data file (downloaded from https://free.nchc.org.tw/osm.planet/pbf/planet-latest.osm.pbf) using Kubernetes.
I have largely succeeded in doing this with the sample OSM files, viz. New Delhi and Heidelberg (which come with the project).
With Kubernetes, I want to skip copying the data file as part of building the image, so I have shifted the download into the entrypoint; the file gets downloaded first as part of bringing up the pod.
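Concretely, the entrypoint I am experimenting with looks roughly like this (a sketch only: the `PBF_URL`/`DATA_DIR` variables, the `osm_file.pbf` name, and the final exec target come from my own setup, not from the stock ORS image):

```shell
#!/bin/sh
# Sketch: fetch the PBF onto the pod's volume at startup instead of
# COPYing it into the image. Variable names and the exec target are
# assumptions from my own setup, not official ORS conventions.
PBF_URL="${PBF_URL:-https://free.nchc.org.tw/osm.planet/pbf/planet-latest.osm.pbf}"
DATA_DIR="${DATA_DIR:-/ors-core/data}"

fetch_pbf() {
  # download only if a previous pod start has not already left the file
  # behind on the persistent volume
  if [ ! -f "$DATA_DIR/osm_file.pbf" ]; then
    wget -q -O "$DATA_DIR/osm_file.pbf" "$PBF_URL"
  fi
}

# when run as the container entrypoint, fetch the data and then hand
# over to the image's original start script (path is a placeholder)
if [ "${1:-}" = "start" ]; then
  fetch_pbf
  exec /ors-core/docker-entrypoint.sh
fi
```

With the download keyed on the file's existence, pod restarts reuse the PBF already on the volume instead of pulling ~50 GB again.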
So I have a few doubts with respect to a full planet build:
- What should be the approximate size of the volume (/ors-core/data/) that will hold the elevation_cache and the built graphs? Is it ~500 GB or ~750 GB?
- You have mentioned, regarding the -Xmx param, that “A good rule of thumb is to give Java 2 x file size of the PBF per profile”. So with a 50 GB planet file and a single profile (driving-car), the JVM memory should be 50 × 2 = 100 GB, i.e. -Xmx100g; with two profiles (driving-car and bike), it should be 50 × 2 × 2 = 200 GB, i.e. -Xmx200g. Is my understanding correct?
Can I run two profiles with only 50 GB of memory? If yes, what will be the impact on build time?
Generally, how long does it take to build one profile from a 50 GB planet file with -Xmx50g?
- This point is also w.r.t. memory: ors.services.routing has an init_threads parameter, which sets the number of threads used to initialize graphs. If I have 50 GB of RAM, how many threads can I use to speed up the process?
- If I have the graphs and elevation_cache folders already generated locally, can I simply copy them to my production server at the same path (/ors-core/data) and bring up a new ORS instance pointing to these graphs?
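In case that is possible, this is roughly what I have in mind (a sketch: the helper names, archive name, and host are placeholders; the /ors-core/data path follows the Docker image layout):

```shell
# Sketch: bundle locally built graphs and restore them on the
# production volume at the same relative path. Helper names and the
# archive name are placeholders, not part of ORS itself.

# pack_graphs DATA_DIR ARCHIVE: bundle graphs and elevation_cache
# from DATA_DIR into a single compressed archive
pack_graphs() {
  tar czf "$2" -C "$1" graphs elevation_cache
}

# unpack_graphs ARCHIVE DATA_DIR: restore both folders under DATA_DIR
unpack_graphs() {
  tar xzf "$1" -C "$2"
}

# intended use (host and paths are placeholders):
#   pack_graphs /ors-core/data /tmp/ors-graphs.tar.gz
#   scp /tmp/ors-graphs.tar.gz prod-server:/tmp/
#   ssh prod-server 'tar xzf /tmp/ors-graphs.tar.gz -C /ors-core/data'
```

The idea being that the new instance would then find the prebuilt graphs on its volume at startup instead of rebuilding them from the PBF.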
Thanks in advance.
Once I get this up and running for the planet OSM file, I would like to contribute a detailed write-up on deploying openrouteservice on Kubernetes.