Error 404: Could not find point

Hi all,

this seems to be a very common source of errors, and I’ve tried to read through as many topics here as I could. A few people solved their problem by adding a radius, but since I need the exact routes (for a science project), adding a big radius is unfortunately not an option.

{'error': {'code': 2010, 'message': 'Could not find point 0: 51.3395556 12.3763956 within a radius of 400.0 meters.'}, 'info': {'engine': {'version': '6.3.1', 'build_date': '2020-11-24T11:57:00Z'}, 'timestamp': 1606308980308}}

start is: 51.33955556, 12.37639556
end is: 51.34457657, 12.37962842

I get the above error message when self-hosting the directions API with this URL: "http://localhost:8080/ors/v2/directions/cycling-regular?start=" + start + "&end=" + end
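For reference, here is a minimal sketch of how that URL can be built in Python (coordinate values taken from above; note that the ORS v2 GET directions endpoint expects each point as longitude,latitude):

```python
from urllib.parse import urlencode

# Sketch only: the ORS v2 GET directions endpoint expects "lon,lat" order
base = "http://localhost:8080/ors/v2/directions/cycling-regular"
start = "12.37639556,51.33955556"  # lon,lat of the start point above
end = "12.37962842,51.34457657"    # lon,lat of the end point above

# urlencode escapes the commas as %2C, which ORS accepts
url = base + "?" + urlencode({"start": start, "end": end})
print(url)
```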

It should not be a problem with the lat/long coordinate order, because I’ve swapped those to test it, and if I feed the coordinates to the web interface, it gives me a precise route. I also tried adding my API key, or profile= to the URL; it just gives me a different error message.

This is the status:
{
  "engine": {"version": "6.3.1", "build_date": "2020-11-24T11:57:00Z"},
  "services": ["routing", "isochrones", "matrix", "mapmatching"],
  "languages": ["de", "de-de", "en", "en-us", "es", "es-es", "fr", "fr-fr", "gr", "gr-gr", "he", "he-il", "hu", "hu-hu", "id", "id-id", "it", "it-it", "ja", "ja-jp", "ne", "ne-np", "nl", "nl-nl", "pl", "pl-pl", "pt", "pt-pt", "ru", "ru-ru", "zh", "zh-cn"],
  "profiles": {
    "profile 1": {
      "profiles": "driving-car",
      "creation_date": "",
      "storages": {"WayCategory": {}, "HeavyVehicle": {}, "WaySurfaceType": {}, "RoadAccessRestrictions": {"use_for_warnings": "true"}},
      "limits": {"maximum_distance": 100000, "maximum_distance_dynamic_weights": 100000, "maximum_distance_avoid_areas": 100000, "maximum_waypoints": 50}
    },
    "profile 2": {
      "profiles": "cycling-regular",
      "creation_date": "",
      "storages": {"HillIndex": {}, "WayCategory": {}, "WaySurfaceType": {}, "TrailDifficulty": {}},
      "limits": {"maximum_distance": 100000, "maximum_distance_dynamic_weights": 100000, "maximum_distance_avoid_areas": 100000, "maximum_waypoints": 50}
    }
  }
}

I hope I provided enough information! Any ideas on what I might be doing wrong are very much appreciated!! :slight_smile:

Best, Arusha

Hi @arusha,

did you make sure to use the appropriate data basis (the correct OSM file)?
Otherwise it will use the default OSM file, which is (I think) the area of Heidelberg.
You could check by trying to generate a route in the area of the default data set.

Or were you already able to generate some routes and only this one is not working?

Best regards

Hi amandus, thanks for your quick reply.
I’ve been trying to test that, because I had the same assumption.
You’re right, it’s not using the OSM file I downloaded, but for Heidelberg it’s working.

I’ve specified the location of the OSM file in the docker-compose.yml file as such:

version: '2.4'
services:
  ors-app:
    container_name: ors-app
    ports:
      - 8080:8080
      - 9001:9001
    image: openrouteservice/openrouteservice:latest
    # build:
    #   context: ../
    #   args:
    #     APP_CONFIG: /Users/aruscha/Desktop/ORS/conf/app.config
    #     OSM_FILE: /Users/aruscha/Desktop/ORS/sachsen-latest.osm.pbf
    volumes:
      - ./graphs:/ors-core/data/graphs
      - ./elevation_cache:/ors-core/data/elevation_cache
      - ./logs/ors:/var/log/ors
      - ./logs/tomcat:/usr/local/tomcat/logs
      - ./conf:/ors-conf
      - /users/aruscha/Desktop/ORS/sachsen-latest.osm.pbf
    environment:
      - BUILD_GRAPHS=True  # Forces the container to rebuild the graphs, e.g. when PBF is changed
      - "JAVA_OPTS=-Djava.awt.headless=true -server -XX:TargetSurvivorRatio=75 -XX:SurvivorRatio=64 -XX:MaxTenuringThreshold=3 -XX:+UseG1GC -XX:+ScavengeBeforeFullGC -XX:ParallelGCThreads=4 -Xms1g -Xmx2g"
      - "CATALINA_OPTS=-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9001 -Dcom.sun.management.jmxremote.rmi.port=9001 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=localhost"

I suppose that’s wrong?

yep :wink: You’re not specifying where your PBF should map to inside the container, which needs to be /ors-core/data/osm_file.pbf:

      - /users/aruscha/Desktop/ORS/sachsen-latest.osm.pbf:/ors-core/data/osm_file.pbf

When you then run docker-compose up -d, it will create a new container and build graphs from your PBF.

It’s working now!!
Thank you both!