I would like to deploy ORS on a Kubernetes cluster using a Helm chart, so I have some questions:
Does anyone have example definitions of the k8s objects in *.yaml files that they could share here?
I want to deploy ORS in multiple instances. Which of the directories must be on a persistent volume, and which of them can N instances share?
I am talking about:
data/graphs, data/elevation_cache, data/*.pbf, ors-conf
After loading a new *.pbf data file, should the number of instances be reduced to 1 while the graphs directory is being regenerated, or does the regeneration not affect the number of instances?
Generally: should each instance have a separate data volume, or can the data be a shared resource?
In Kubernetes terms, should this be deployed as a Deployment or rather as a StatefulSet?
Thanks in advance for any tips.
My pbf file is approx. 100 MB.
What resources (CPU, RAM, disk) should be allocated for one instance?
We don’t have any experience with Kubernetes, so no answer to your first question.
For a bit of background, why do you want to deploy the openrouteservice in multiple instances?
In general, pbf files and configuration should be shareable between different instances, since they are only read, never written. In a Docker setup, they could be mounted read-only.
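As a rough illustration of that read-only idea in Kubernetes terms, a Deployment could mount the pbf/config volume with `readOnly: true`. This is only a sketch; the names (`ors`, `ors-data`, `ors-data-pvc`) and the mount path are assumptions, not something from this thread, and the image tag is deliberately left out:

```yaml
# Hypothetical sketch: share pbf files and configuration read-only
# across replicas. All names and paths here are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ors
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ors
  template:
    metadata:
      labels:
        app: ors
    spec:
      containers:
        - name: ors
          image: openrouteservice/openrouteservice  # pin a tag in practice
          volumeMounts:
            - name: ors-data
              mountPath: /home/ors/data  # assumed location of pbf + config
              readOnly: true             # these files are only read at runtime
      volumes:
        - name: ors-data
          persistentVolumeClaim:
            claimName: ors-data-pvc
```

The graphs and elevation cache would need a different treatment, since they are written during graph building (see below).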
For the graphs and elevation data, this is a bit different: they are written while the graphs are being built, so only one instance should write to them at any given time.
Thus, while rebuilding, only running one instance sounds like a good idea.
Regarding resource usage, we assume roughly 2x the pbf file size of RAM per profile, so running all 9 profiles (car, walking, bike-*, …) should be doable with 2 GB of RAM. Disk usage should be a bit below that, but in the same region.
As for CPU usage, that mainly depends on how fast you want your routes/isochrones/etc. computed. A regular CPU should suffice for most applications (at least with a 100 MB pbf).
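The sizing estimate above (9 profiles × 2 × 100 MB ≈ 1.8 GB) could be expressed as a container `resources` block. The exact numbers here are assumptions derived from that estimate, not tested values; adjust them after observing real usage:

```yaml
# Hypothetical resource sizing for one instance with a ~100 MB pbf
# and all 9 profiles enabled. Numbers are assumptions, not measurements.
resources:
  requests:
    cpu: "500m"      # "a regular CPU" baseline
    memory: "2Gi"    # ~2x pbf size per profile, 9 profiles
  limits:
    cpu: "2"
    memory: "3Gi"    # some headroom above the ~2 GB estimate
```

Note that graph building is considerably more memory-hungry than serving requests, so a limit close to the serving estimate may cause the build to be OOM-killed.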
Thank you very much for your quick answer.
I want to use 2 instances for redundancy. If one of the Kubernetes nodes fails, one instance becomes unavailable, and API requests are routed to the second, still-running instance. The distribution of requests is handled by Kubernetes via its Service object and an Ingress above it.
So I can assume that, after building the graphs, both instances can share the elevation cache and the graphs directory?
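If the graphs are indeed only read after building, one way to sketch a shared volume is a PVC with the `ReadOnlyMany` access mode. This is a hypothetical fragment: the claim name and storage size are assumptions, and `ReadOnlyMany` is only available with storage backends that support it (e.g. NFS-backed volumes); the instance that rebuilds the graphs would still need a separate read-write mount:

```yaml
# Hypothetical PVC shared read-only by both instances after graph building.
# Claim name, size, and access mode support are assumptions.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ors-graphs-pvc
spec:
  accessModes:
    - ReadOnlyMany   # multiple pods may mount and read simultaneously
  resources:
    requests:
      storage: 2Gi   # disk "a bit below" RAM usage, per the answer above
```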