Time to load entire US pbf?

Hi there!

I have set up a Docker environment to run ORS locally, and have pointed it to “US-latest.osm.pbf” (size: 8.16 GB).

This machine has an Intel Core i5, 24 GB of DDR3 memory, and a couple of terabytes of SSD storage. I have allocated 17 GB of memory to the ORS container and started it at 7:00 pm last night. It’s now 11 am the next day, and http://localhost:8080/ors/v2/health is still reporting “not ready.”

The latest log entry from the container is:

02 Nov 09:46:12 INFO [core.PrepareCore] - took:15675, new shortcuts: 37 026 504, prepare|shortest|car-ors, dijkstras:2033862376, t(dijk):14239.76, t(period):7198.97, t(lazy):570.89, t(neighbor):6525.71, meanDegree:8, initSize:44889019, periodic:10, lazy:10, neighbor:90, totalMB:17408, usedMB:13933

My memory usage is also sitting at 23.5/23.9 GB. I know this is a very large dataset being loaded on a very underpowered machine, but does anyone have an idea of how long it should take to build the graphs and start serving queries? Unfortunately this is for a project I’m trying to finish by the end of the week, so I can’t just let it sit indefinitely and hope it completes. I’m working on buying a new machine with 64 GB of memory, but I haven’t gotten departmental approval for that just yet.

Thanks so much!!

Hey,

I can’t tell you how long it should take, but I know that building one profile from spain-latest.pbf (~900 MB) took about one hour on my machine (similar CPU).

The rule of thumb for memory consumption is that you need about 2x the size of the pbf file in RAM per profile.

Assuming this scales roughly linearly, your 8.16 GB pbf should need some 16 GB of RAM to build a single profile, and the build should take roughly 10 hours.
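
A quick back-of-envelope sketch of that estimate (a rough calculation only, assuming the 2x-RAM rule and the linear time scaling from my Spain data point actually hold):

# Rough estimate, assuming ~2x the pbf size in RAM per profile and
# build time scaling linearly with pbf size from the Spain data point.
PBF_GB = 8.16            # us-latest.osm.pbf
REFERENCE_PBF_GB = 0.9   # spain-latest.pbf
REFERENCE_HOURS = 1.0    # observed: one profile took about an hour

ram_gb = 2 * PBF_GB
hours = REFERENCE_HOURS * (PBF_GB / REFERENCE_PBF_GB)

print(f"RAM per profile: ~{ram_gb:.0f} GB")    # ~16 GB
print(f"Build time:      ~{hours:.0f} hours")  # ~9 hours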

If you are building a single profile, it should be done by now. If you’re building more profiles, this will probably crash before long.

Best regards

Hey there,

Thanks for chiming in so fast! I made sure to only build the one profile I needed, which I should have just enough memory for haha.

I guess I’ll write the rest of my code for the project against the hosted ORS API and see if this build decides to finish! If not, I’ll just batch my requests.
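
Something like this minimal batching sketch, maybe (the endpoint and Authorization header follow the public v2 directions API; the API key placeholder, the coordinates, and the delay are all illustrative and would need tuning to the plan’s actual rate limits):

import time
import requests

API_KEY = "your-api-key-here"  # placeholder
URL = "https://api.openrouteservice.org/v2/directions/driving-car"

# Each job is an origin/destination pair of [lon, lat] coordinates;
# the values here are illustrative only.
jobs = [
    [[8.681495, 49.41461], [8.687872, 49.420318]],
    [[8.676, 49.418], [8.692, 49.409]],
]

results = []
for coords in jobs:
    resp = requests.post(URL, json={"coordinates": coords},
                         headers={"Authorization": API_KEY})
    resp.raise_for_status()
    results.append(resp.json())
    time.sleep(1.5)  # crude throttle to stay under the per-minute quota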

One more question for you: once the graphs are built, does the memory usage go back down until queries are run? This is unfortunately also the machine I was hoping to run the analysis on, so I’ll need a little more than 500MB of RAM free to do so.

Thanks!

Update: Yup, I just got an out-of-memory error. Darn.

Depending on your requirements and on the stage at which the OOM error occurs, you can try turning off the building of optimised routing algorithms in the config file, which should mean it needs less RAM to build, at the expense of routes taking a lot longer to generate. Looking at the message you posted, it seems to be running out of RAM in the Core-ALT building stage, so updating the config to the following should (hopefully) allow the build to complete (basically, remove the core section from preparation and execution):

"preparation": {
  "min_network_size": 200,
  "min_one_way_network_size": 200,
  "methods": {
    "ch": {
      "enabled":  true,
      "threads": 8,
      "weightings": "fastest"
    }
  }
},
"execution": {
  "methods": {
    "astar": {
      "approximation": "BeelineSimplification",
      "epsilon": 1
    },
    "ch": {
      "disabling_allowed": true
    }
  }
}

With this config you would still get fast long-distance routes, but they wouldn’t be able to use options like “avoid highways”.
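
To make the trade-off concrete, here is a hypothetical request against your local instance (endpoint and port taken from the health URL earlier in the thread; coordinates are illustrative). The "avoid_features" option is exactly the kind of dynamic requirement that the CH-only configuration above gives up:

import requests

resp = requests.post(
    "http://localhost:8080/ors/v2/directions/driving-car",
    json={
        "coordinates": [[-87.6298, 41.8781], [-83.0458, 42.3314]],
        # Dynamic options like this need the flexible (non-CH)
        # algorithms that the config above no longer prepares.
        "options": {"avoid_features": ["highways"]},
    },
)
print(resp.status_code)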
