AWS EC2 Resource Provisioning


Firstly, openrouteservice is a great tool, thanks for your amazing work. I was able to get an instance running from the docker-compose in less than a day. But choosing the right AWS EC2 instance type was somewhat difficult. I first tried t3.large (2 vCPUs, 8 GB memory) and t3.xlarge (4 vCPUs, 16 GB memory), which didn't seem to be sufficient for routing services in Germany.

Currently openrouteservice is running on a c5a.4xlarge (16 vCPUs, 32 GB memory), but I'm wondering if I could scale back on the CPU.

What would you recommend, depending on the number of expected requests? Would fast SSD storage combined with a large swap partition be a feasible alternative?



Hi @AlexNe

in general, a lot of CPUs isn't necessary unless you are expecting a lot of concurrent requests; we rarely see more than 5 cores being used at once for each profile. I am not sure about the fast SSD and large swap, as I have not used that setup before. Memory-wise, though, you need about 2x the PBF file size for the Java heap, so for Germany you would need ~8 GB of heap for a single profile. Our setup uses dedicated instances for each profile with ~115 GB heap space, plus dedicated build instances with ~125 GB RAM each.
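The 2x rule of thumb above can be sketched as a small helper. This is only an illustration of the arithmetic from this thread; the function names and the 4 GB figure for Germany are assumptions, not part of openrouteservice itself.

```python
import os

def recommended_heap_gb(pbf_size_bytes: int, factor: float = 2.0) -> float:
    """Heap estimate in GB for a single routing profile.

    Applies the rule of thumb from this thread: Java heap ~= 2x PBF size.
    """
    return factor * pbf_size_bytes / 1024 ** 3

def heap_for_pbf(path: str) -> float:
    """Convenience wrapper that reads the PBF size from disk."""
    return recommended_heap_gb(os.path.getsize(path))

# Germany's extract is roughly 4 GB, so the estimate lands at ~8 GB of heap,
# matching the advice above:
# recommended_heap_gb(4 * 1024 ** 3)  -> 8.0
```

Remember the estimate is per profile, so multiply by the number of profiles you build on the same instance.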

So in general, the resource requirements vary depending on how many profiles you build, which optimisation algorithms you have active, and whether you use separate builder instances to make the graphs.


Many thanks, that gives me some idea of what to use for Europe. For context, I'd like to create an EU connectivity dataset as a basis for analyzing freight movement. For that I need to calculate the most probable route a truck would take between any pair of NUTS2 regions.
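For the NUTS2 pairing, one way to organize it is to generate one directions payload per ordered pair of region centroids. A minimal sketch, assuming a self-hosted instance; the centroid values are hypothetical, and the endpoint path follows the openrouteservice v2 directions API (coordinates in [lon, lat] order), which you should verify against your own deployment:

```python
from itertools import permutations

# Assumed local deployment URL for the HGV profile (not confirmed in thread).
ORS_URL = "http://localhost:8080/ors/v2/directions/driving-hgv"

def pairwise_requests(centroids: dict[str, tuple[float, float]]) -> list[dict]:
    """Build one directions payload per ordered (origin, destination) pair.

    `centroids` maps a NUTS2 code to a (lon, lat) tuple, the coordinate
    order openrouteservice expects.
    """
    return [
        {
            "origin": a,
            "destination": b,
            "json": {"coordinates": [list(centroids[a]), list(centroids[b])]},
        }
        for a, b in permutations(centroids, 2)
    ]

# Example with two made-up German NUTS2 centroids:
# reqs = pairwise_requests({"DE21": (11.57, 48.14), "DE71": (8.68, 50.11)})
# Each payload's "json" field can then be POSTed to ORS_URL.
```

Note that n regions produce n*(n-1) ordered pairs, so for all EU NUTS2 regions this is tens of thousands of requests; batching or the matrix endpoint may be worth considering.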

Can the directions API also return unique node identifiers? That’d make the integration logic much simpler and probably more accurate.

Edit: expected → most probable