I started vroom + ors, but I cannot use the optimization endpoint

I’ve already started a docker compose instance and everything seems to be running: the directions API works and all health checks are green.

Vroom also started listening on port 3000.

But I cannot make any API calls to the http://localhost:8080/ors/v2/optimization endpoint; it gives me a 404 error.

Is there something else I need to set up to enable optimization?

Thanks a lot!

Hey,

when configuring vroom (or rather, vroom-express), make sure that you have the routingServers set up correctly in your config.yml, i.e. that the host and port for the ors profiles are set correctly.

Then, your optimization queries should go to vroom directly, which will in turn call your configured ors instance.
You don’t have to configure a “detour” via ors :wink:
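
For illustration, such a direct request to vroom-express could be sketched like this (a minimal example, assuming vroom listens on localhost:3000 with the ors router configured; the coordinates are illustrative [lon, lat] pairs around Heidelberg):

```python
import json
from urllib import request

# One vehicle, two jobs: the smallest useful optimization payload.
body = {
    "vehicles": [
        {"id": 1, "profile": "driving-car",
         "start": [8.6815, 49.4146], "end": [8.6815, 49.4146]}
    ],
    "jobs": [
        {"id": 1, "location": [8.6879, 49.4203]},
        {"id": 2, "location": [8.6746, 49.4047]},
    ],
}

def send_to_vroom(payload, url="http://localhost:3000/"):
    """POST the payload to vroom-express; vroom then queries ors itself."""
    req = request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

# solution = send_to_vroom(body)  # requires the running vroom + ors stack
```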

Best regards

Thank you for your quick reply! Will this URL work if I configure it in vroom?
Looks like I need to rebuild the vroom image to use ors instead of the default osrm backend.

I configured config.yml like this:

cliArgs:
  geometry: false # retrieve geometry (-g)
  planmode: false # run vroom in plan mode (-c) if set to true
  threads: 4 # number of threads to use (-t)
  explore: 5 # exploration level to use (0..5) (-x)
  limit: '1mb' # max request size
  logdir: '/..' # the path for the logs relative to ./src
  logsize: '100M' # max log file size for rotation
  maxlocations: 1000 # max number of jobs/shipments locations
  maxvehicles: 200 # max number of vehicles
  override: true # allow cli options override (-c, -g, -t and -x)
  path: '' # VROOM path (if not in $PATH)
  port: 3000 # expressjs port
  router: 'ors' # routing backend (osrm, libosrm or ors)
  timeout: 300000 # milli-seconds
  baseurl: '/' #base url for api
routingServers:
  osrm:
    car:
      host: '0.0.0.0'
      port: '5000'
    bike:
      host: '0.0.0.0'
      port: '5001'
    foot:
      host: '0.0.0.0'
      port: '5002'
  ors:
    driving-car:
      host: '0.0.0.0'
      port: '8080'
    driving-hgv:
      host: '0.0.0.0'
      port: '8080'
    cycling-regular:
      host: '0.0.0.0'
      port: '8080'
    cycling-mountain:
      host: '0.0.0.0'
      port: '8080'
    cycling-road:
      host: '0.0.0.0'
      port: '8080'
    cycling-electric:
      host: '0.0.0.0'
      port: '8080'
    foot-walking:
      host: '0.0.0.0'
      port: '8080'
    foot-hiking:
      host: '0.0.0.0'
      port: '8080'
  valhalla:
    auto:
      host: '0.0.0.0'
      port: '8002'
    bicycle:
      host: '0.0.0.0'
      port: '8002'
    pedestrian:
      host: '0.0.0.0'
      port: '8002'
    motorcycle:
      host: '0.0.0.0'
      port: '8002'
    motor_scooter:
      host: '0.0.0.0'
      port: '8002'
    taxi:
      host: '0.0.0.0'
      port: '8002'
    hov:
      host: '0.0.0.0'
      port: '8002'
    truck:
      host: '0.0.0.0'
      port: '8002'
    bus:
      host: '0.0.0.0'
      port: '8002'

and docker compose like this:


services:
  vroom:
    network_mode: host
    image: vroomvrp/vroom-docker:v1.11.0
    container_name: vroom
    volumes:
      - ./vroom-conf/:/conf
    environment:
      - VROOM_ROUTER=ors  # router to use, osrm, valhalla or ors
    depends_on:
      - ors

But I still cannot access the optimization endpoint; I can access and receive responses from the directions API.

Hey,

if your system correctly resolves host: '0.0.0.0' to localhost, then this looks like it could work.
Note that you’ll have to send optimization queries to localhost:3000, not …ors/v2/optimization.

Best regards

I still couldn’t access the multiple-vehicle optimization endpoint. Can you please help out?

Hey,

without a bit more information, it’s going to be hard to help out.
Could you check that your ors instance is correctly up and running, that your vroom config has correct routingServers configured and that requests go to the vroom port?

Best regards

Yes, ors is correctly up and running; it is also giving responses to endpoints like http://localhost:8080/ors/v2/status or /health.

It was said that localhost:3000/health would respond once vroom is ready, but that one is giving an error.
Here is my config.yml file; please check whether there are any errors.

cliArgs:
  geometry: false # retrieve geometry (-g)
  threads: 4 # number of threads to use (-t)
  explore: 5 # exploration level to use (0..5) (-x)
  limit: '1mb' # max request size
  logdir: '/..' # the path for the logs relative to ./src
  logsize: '100M' # max log file size for rotation
  maxlocations: 1000 # max number of jobs/shipments locations
  maxvehicles: 200 # max number of vehicles
  override: true # allow cli options override (-g, -t and -x)
  path: '' # VROOM path (if not in $PATH)
  port: 3000 # expressjs port
  router: 'ors' # routing backend (osrm, libosrm or ors)
  timeout: 300000 # milli-seconds
  baseurl: '/' #base url for api
routingServers:
  osrm:
    car:
      host: '0.0.0.0'
      port: '5000'
    bike:
      host: '0.0.0.0'
      port: '5000'
    foot:
      host: '0.0.0.0'
      port: '5000'
  ors:
    driving-car:
      host: '0.0.0.0'
      port: '8080'
    driving-hgv:
      host: '0.0.0.0'
      port: '8080'
    cycling-regular:
      host: '0.0.0.0'
      port: '8080'
    cycling-mountain:
      host: '0.0.0.0'
      port: '8080'
    cycling-road:
      host: '0.0.0.0'
      port: '8080'
    cycling-electric:
      host: '0.0.0.0'
      port: '8080'
    foot-walking:
      host: '0.0.0.0'
      port: '8080'
    foot-hiking:
      host: '0.0.0.0'
      port: '8080'

Hey,

I assume you have installed vroom and are now running vroom-express to be able to query it via HTTP requests.
Are you certain that the vroom binary is in your $PATH? The path in your vroom-express config is empty…

Also, note that the page itself will be empty, but it’ll return HTTP 200. Query it e.g. with

curl -w "%{http_code}\n" http://localhost:3000/health
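
If you’d rather script that readiness check, a small stdlib-only Python sketch could look like this (host and port assumed from the config above):

```python
import time
from urllib import request, error

def wait_for_health(url="http://localhost:3000/health", attempts=10, delay=2.0):
    """Return True once the endpoint answers with HTTP 200, else False."""
    for _ in range(attempts):
        try:
            with request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (error.URLError, OSError):
            pass  # server not up yet; retry after a short pause
        time.sleep(delay)
    return False

# ready = wait_for_health()  # requires a running vroom-express
```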

Best regards

Hi,
Could you please say which file path should be in the vroom-express config file? Also, could you let me know the endpoint for multiple-vehicle optimization?

I came across this post since I was having an issue making this example work on a local setup. Since ORS doesn’t have the optimization code embedded in it anymore, you must use vroom + ors for optimization.

here’s a docker-compose.yml with this setup:

version: '2.4'
services:
  ors:
    container_name: ors
    image: openrouteservice/openrouteservice:latest
    ports:
      - 8080:8080
      - 9001:9001
    user: "${ORS_UID:-0}:${ORS_GID:-0}"
    volumes:
      - ./graphs:/ors-core/data/graphs
      - ./elevation_cache:/ors-core/data/elevation_cache
      - ./logs/ors:/var/log/ors
      - ./logs/tomcat:/usr/local/tomcat/logs
      - ./conf:/ors-conf
     # - ./new-york-latest.osm.pbf:/ors-core/data/osm_file.pbf
    environment:
      - BUILD_GRAPHS=True  # Forces the container to rebuild the graphs, e.g. when PBF is changed
      - "JAVA_OPTS=-Djava.awt.headless=true -server -XX:TargetSurvivorRatio=75 -XX:SurvivorRatio=64 -XX:MaxTenuringThreshold=3 -XX:+UseG1GC -XX:+ScavengeBeforeFullGC -XX:ParallelGCThreads=4 -Xms1g -Xmx2g"
      - "CATALINA_OPTS=-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9001 -Dcom.sun.management.jmxremote.rmi.port=9001 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=localhost"

  vroom:
    container_name: vroom2
    image: vroomvrp/vroom-docker:v1.12.0
    network_mode: host
    volumes:
      - ./vroom-conf/:/conf
    environment:
      - VROOM_ROUTER=ors  # router to use, osrm, valhalla or ors
    depends_on:
      - ors

Create the folders that the yaml is expecting: mkdir -p vroom-conf conf elevation_cache graphs logs/ors logs/tomcat
Start this with docker compose up --build

To test that ORS is running go to: localhost:8080/ors/health
it should return {"status":"ready"}

Then test to make sure ORS has loaded your maps (especially if you are using your own custom ones) and that you are testing coordinates that fall within the loaded maps. If you are using the default ORS docker image:
Execute this to see: localhost:8080/ors/v2/directions/driving-car?start=8.681495,49.41461&end=8.687872,49.420318

Now that you’ve confirmed that ORS is working, you need to update the vroom config.
Take down docker (with ctrl+c)

in vroom-conf/config.yml, you’ll need to change the router and baseurl:

router: 'ors' # routing backend (osrm, libosrm or ors)
baseurl: '/optimization/' #base url for api

You may need to make the file writeable with: chmod a+w config.yml

Now restart with docker compose up --build

Test to make sure vroom is up:

curl -w "%{http_code}\n" http://localhost:3000/health

Now make a call to the optimization endpoint by calling vroom, not ORS:
POST to localhost:3000/optimization with your corresponding JSON (I would recommend using postman to send the request).

vroom will then call ORS to get information and return a response.
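
If you prefer scripting this step over postman, here’s a sketch of a helper that builds such a JSON body (the first two coordinates come from the directions test above, the third is a made-up nearby point; the /optimization path matches the baseurl set above):

```python
# Build a VROOM optimization payload from a depot and a list of (lon, lat)
# stops. The "g" option asks vroom to include route geometry in the answer.
def build_payload(depot, stops, capacity=4):
    jobs = [
        {"id": i + 1, "location": list(stop), "amount": [1]}
        for i, stop in enumerate(stops)
    ]
    vehicles = [{
        "id": 1,
        "profile": "driving-car",
        "start": list(depot),
        "end": list(depot),
        "capacity": [capacity],
    }]
    return {"jobs": jobs, "vehicles": vehicles, "options": {"g": True}}

payload = build_payload(
    (8.681495, 49.41461),
    [(8.687872, 49.420318), (8.6746, 49.4047)],
)
# POST `payload` as JSON to http://localhost:3000/optimization,
# e.g. with postman or requests.post(...).
```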


I’m trying to make the linked example work too, but on a local setup. I have followed your post and I am able to confirm that ORS and Vroom are working (at least, I am receiving the correct responses when checking “/health” on both URLs), but I cannot get it working with the example.

I believe I’m doing something wrong with this section of code from the example:

# Initialize a client and make the request
ors_client = ors.Client(key='your_key')  # Get an API key from https://openrouteservice.org/dev/#/signup
result = ors_client.optimization(
    jobs=deliveries,
    vehicles=vehicles,
    geometry=True
)

I tried replacing it with this:

# Initialize a client and make the request
ors_client = ors.Client(base_url='http://localhost:8080/ors')
result = ors_client.optimization(
    jobs=deliveries,
    vehicles=vehicles,
    geometry=True
)

But I am receiving the same 404 error as in the first post.

Could you help please? I have also tried changing the URL to http://localhost:3000, but I’m getting the same error.

Thanks!

Hey,

could you also post the request and the error it produces?

Best regards

try:

ors_client = openrouteservice.Client(base_url='http://localhost:8080/ors')
vroom = openrouteservice.Client(base_url='http://localhost:3000')

# then call normal endpoints with
routes = ors_client.directions(...)

# and vroom with
optimal_route = vroom.optimization(...)

Or
send the optimization request as in this example by passing the URL of your vroom instance directly

Thank you so much for your help! I didn’t realize that I also needed to add the vroom base_url and then run the optimization through that, as opposed to ORS. I just assumed that since I could do it that way with the online API, it would be the same for locally hosting it, and I found the docs/guides to be a little unclear.

Admittedly, I’m very new to ORS and Vroom, and I’m learning a lot every day, but just sharing my experiences.

I ran into a couple more issues, such as needing to edit the maximum_routes variable (originally limited to 100), as well as maximum_distance (originally limited to 100000), but I upped these to 1000 and 5000000 respectively.

Could you tell me how editing these values will affect the load on my server? Is it mostly during the docker build, or will I experience more CPU/RAM usage per request?

Thanks again!

Hey,

this won’t affect anything during build - these values are respected during runtime, so you will experience more RAM usage.
The ors will beeline-estimate a distance for your route and abort if the estimate exceeds the maximum distance. This might lead to very large search trees in your route search (depending on whether you have any optimizations installed) and thus will take up large amounts of RAM.

A similar argument gives the same result for the maximum_routes parameter.

Best regards

Hello there, the vroom optimization does not work this way in my case. Here is how I’m calling it in my code:

# init variables
from datetime import datetime  # needed for the time-window lists below

depot = [36.791548, -1.840116]

coords = [[36.686, -1.127],
         [36.6395, -1.0717],
         [36.6439, -1.0553],
         [36.6461, -1.0351],
         [36.6418, -1.0376],
         [36.6353, -1.0548]]

amt = [100, 50, 200, 30, 50, 200]

open_from = [datetime(2019, 3, 22, 8, 0 , 0), datetime(2019, 3, 22, 8, 30 , 0),
             datetime(2019, 3, 22, 9, 30 , 0), datetime(2019, 3, 22, 10, 30 , 0),
             datetime(2019, 3, 22, 11, 30 , 0), datetime(2019, 3, 22, 1, 30 , 0)]

closes_at = [datetime(2019, 3, 22, 18, 30 , 0), datetime(2019, 3, 22, 19, 30 , 0),
            datetime(2019, 3, 22, 20, 30 , 0), datetime(2019, 3, 22, 21, 30 , 0),
            datetime(2019, 3, 22, 22, 30 , 0), datetime(2019, 3, 22, 23, 30 , 0)]

import openrouteservice as ors
# Define the vehicles
# https://openrouteservice-py.readthedocs.io/en/latest/openrouteservice.html#openrouteservice.optimization.Vehicle
vehicles = []
for idx in range(1):
    vehicles.append(
        ors.optimization.Vehicle(
            id=idx,
            start=depot,
            end=depot,
            # end=list(reversed(depot)),
            capacity=[2000],
#             time_window=[1553241600, 1553284800]  # Fri 8:00-20:00, expressed in POSIX timestamp
        )
    )
deliveries =[]
for ix, d in enumerate(coords):
    deliveries.append(
        ors.optimization.Job(
            id=ix,
            location= d,
            service=1200,  # Assume 20 minutes at each site
            amount=[amt[ix]],
#             time_windows=[[
#                 int(open_from[ix].timestamp()),  # VROOM expects UNIX timestamp
#                 int(closes_at[ix].timestamp())
#             ]]
        )
    )
# now call the vroom optimizer
url_vroom = 'http://localhost:3000'
clientVrp = ors.Client(base_url=url_vroom, key='')

solution = clientVrp.optimization(
    vehicles = vehicles,
    jobs = deliveries,
    geometry = True, 
)

I then receive an error:

---------------------------------------------------------------------------
JSONDecodeError                           Traceback (most recent call last)
File ~\anaconda3\lib\site-packages\requests\models.py:971, in Response.json(self, **kwargs)
    970 try:
--> 971     return complexjson.loads(self.text, **kwargs)
    972 except JSONDecodeError as e:
    973     # Catch JSON-related errors and raise as requests.JSONDecodeError
    974     # This aliases json.JSONDecodeError and simplejson.JSONDecodeError

File ~\anaconda3\lib\json\__init__.py:346, in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
    343 if (cls is None and object_hook is None and
    344         parse_int is None and parse_float is None and
    345         parse_constant is None and object_pairs_hook is None and not kw):
--> 346     return _default_decoder.decode(s)
    347 if cls is None:

File ~\anaconda3\lib\json\decoder.py:337, in JSONDecoder.decode(self, s, _w)
    333 """Return the Python representation of ``s`` (a ``str`` instance
    334 containing a JSON document).
    335 
    336 """
--> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end())
    338 end = _w(s, end).end()

File ~\anaconda3\lib\json\decoder.py:355, in JSONDecoder.raw_decode(self, s, idx)
    354 except StopIteration as err:
--> 355     raise JSONDecodeError("Expecting value", s, err.value) from None
    356 return obj, end

JSONDecodeError: Expecting value: line 1 column 1 (char 0)

During handling of the above exception, another exception occurred:

JSONDecodeError                           Traceback (most recent call last)
File ~\anaconda3\lib\site-packages\openrouteservice\client.py:229, in Client._get_body(response)
    228 try:
--> 229     body = response.json()
    230 except json.JSONDecodeError:

File ~\anaconda3\lib\site-packages\requests\models.py:975, in Response.json(self, **kwargs)
    972 except JSONDecodeError as e:
    973     # Catch JSON-related errors and raise as requests.JSONDecodeError
    974     # This aliases json.JSONDecodeError and simplejson.JSONDecodeError
--> 975     raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)

JSONDecodeError: Expecting value: line 1 column 1 (char 0)

During handling of the above exception, another exception occurred:

HTTPError                                 Traceback (most recent call last)
Input In [45], in <cell line: 4>()
      1 url_vroom = 'http://localhost:3000'
      2 clientVrp = ors.Client(base_url=url_vroom, key='')
----> 4 solution = clientVrp.optimization(
      5     vehicles = vehicles,
      6     jobs = deliveries,
      7     geometry = True,   # VROOM returns the route geometry in addition to the solution
      8 )

File ~\anaconda3\lib\site-packages\openrouteservice\client.py:299, in _make_api_method.<locals>.wrapper(*args, **kwargs)
    296 @functools.wraps(func)
    297 def wrapper(*args, **kwargs):
    298     args[0]._extra_params = kwargs.pop("extra_params", None)
--> 299     result = func(*args, **kwargs)
    300     try:
    301         del args[0]._extra_params

File ~\anaconda3\lib\site-packages\openrouteservice\optimization.py:97, in optimization(client, jobs, vehicles, shipments, matrix, geometry, dry_run)
     94 if matrix:
     95     params['matrix'] = matrix
---> 97 return client.request("/optimization", {}, post_json=params, dry_run=dry_run)

File ~\anaconda3\lib\site-packages\openrouteservice\client.py:204, in Client.request(self, url, get_params, first_request_time, retry_counter, requests_kwargs, post_json, dry_run)
    200     return self.request(url, get_params, first_request_time,
    201                         retry_counter + 1, requests_kwargs, post_json)
    203 try:
--> 204     result = self._get_body(response)
    206     return result
    207 except exceptions._RetriableRequest as e:

File ~\anaconda3\lib\site-packages\openrouteservice\client.py:231, in Client._get_body(response)
    229     body = response.json()
    230 except json.JSONDecodeError:
--> 231     raise exceptions.HTTPError(response.status_code)
    233 # error = body.get('error')
    234 status_code = response.status_code

HTTPError: HTTP Error: 404

whereas if I use the API key (against the public API), this is not the case.

Hi @Cliff_Njoroge,
are you sure you also pointed the vroom config to your local instance?
see I started vroom + ors, but I cannot use optimization endpoint - #3 by simpian

Best regards

Yup, @amandus, have a look at my config files for the vroom config and the docker compose:

Vroom config

cliArgs:
  geometry: false # retrieve geometry (-g)
  planmode: false # run vroom in plan mode (-c) if set to true
  threads: 4 # number of threads to use (-t)
  explore: 5 # exploration level to use (0..5) (-x)
  limit: '1mb' # max request size
  logdir: '/..' # the path for the logs relative to ./src
  logsize: '100M' # max log file size for rotation
  maxlocations: 1000 # max number of jobs/shipments locations
  maxvehicles: 200 # max number of vehicles
  override: true # allow cli options override (-c, -g, -t and -x)
  path: '' # VROOM path (if not in $PATH)
  port: 3000 # expressjs port
  router: 'ors' # routing backend (osrm, libosrm or ors)
  timeout: 300000 # milli-seconds
  baseurl: '/' #base url for API
routingServers:
  ors:
    driving-car:
      host: ors
      port: '8080'
    driving-hgv:
      host: ors
      port: '8080'
    cycling-regular:
      host: ors
      port: '8080'
    cycling-mountain:
      host: ors
      port: '8080'
    cycling-road:
      host: ors
      port: '8080'
    cycling-electric:
      host: ors
      port: '8080'
    foot-walking:
      host: ors
      port: '8080'
    foot-hiking:
      host: ors
      port: '8080'

And for my docker compose

version: '2.4'
services:
  ors:
    container_name: ors
    ports:
      - 8080:8080
      - 9001:9001
    image: openrouteservice/openrouteservice:latest
#    build:
#      context: ../
#      args:
#        ORS_CONFIG: ./openrouteservice/src/main/resources/ors-config-sample.json
#        OSM_FILE: ./openrouteservice/src/main/files/heidelberg.osm.gz
    user: "${ORS_UID:-0}:${ORS_GID:-0}"
    volumes:
      - ./graphs:/ors-core/data/graphs
      - ./elevation_cache:/ors-core/data/elevation_cache
      - ./logs/ors:/var/log/ors
      - ./logs/tomcat:/usr/local/tomcat/logs
      - ./conf:/ors-conf
      - ./kenya-latest.osm.pbf:/ors-core/data/osm_file.pbf
    environment:
      - BUILD_GRAPHS=False  # Forces the container to rebuild the graphs, e.g. when PBF is changed
      - "JAVA_OPTS=-Djava.awt.headless=true -server -XX:TargetSurvivorRatio=75 -XX:SurvivorRatio=64 -XX:MaxTenuringThreshold=3 -XX:+UseG1GC -XX:+ScavengeBeforeFullGC -XX:ParallelGCThreads=4 -Xms1g -Xmx2g"
      - "CATALINA_OPTS=-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=9001 -Dcom.sun.management.jmxremote.rmi.port=9001 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=localhost"
  vroom:
    container_name: vroom
    image: vroomvrp/vroom-docker:v1.12.0
    ports:
      - 3000:3000 
    volumes:
      - ./vroom-conf/:/conf
    environment:
      - VROOM_ROUTER=ors  # router to use, osrm, valhalla or ors
    depends_on:
      - ors

Hey,

without having explicitly tested anything, the problem here seems to be that you are using the optimization method of the vroom client.
This won’t work, since it’ll send your request to localhost:3000/optimization, which vroom will not respond to.

To fix this, you can use the clientVrp.request() method instead.
Have a look at how the openrouteservice-py package uses the vehicles and jobs classes here to see any necessary preparations.
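
For illustration, that could look roughly like the following sketch (untested against a live stack; it reuses the coordinates from the snippet above, and keeps the openrouteservice import inside the helper so the payload part runs standalone):

```python
# Build the raw VROOM params once; client.request() then posts them to "/"
# (vroom's root path under baseurl '/') instead of "/optimization".
params = {
    "jobs": [{"id": 1, "location": [36.686, -1.127], "amount": [1]}],
    "vehicles": [{"id": 1, "profile": "driving-car",
                  "start": [36.791548, -1.840116],
                  "end": [36.791548, -1.840116],
                  "capacity": [2000]}],
}

def solve(params, base_url="http://localhost:3000"):
    # Local import: only needed when actually sending the request.
    import openrouteservice
    client = openrouteservice.Client(base_url=base_url, key="")
    return client.request("/", {}, post_json=params)

# solution = solve(params)  # needs the running vroom + ors stack
```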

Feel free to follow up on this, but make sure to test whether the example queries for VROOM-express work (note the coordinates, adapt if necessary) to check whether the server is there at all.

Best regards

Many thanks for the assistance, and yes, the server works fine with a plain JSON post to the server; see below:

import requests

body = {"jobs":[
 {"id":1,"service":300,"amount":[1],"location":[36.686, -1.127]},
 {"id":2,"service":300,"amount":[1],"location":[36.6395, -1.0717]},
 {"id":3,"service":300,"amount":[1],"location":[36.6439, -1.0553]},
 {"id":4,"service":300,"amount":[1],"location":[36.6353, -1.0548]}],
"vehicles":[
 {"id":1,"profile":"driving-car",
         "start":[36.686, -1.127],
         "end":[36.686, -1.127],
         "capacity":[2000]}],
        "options":{"g":'true'}}

headers = {
    'Accept': 'application/json, application/geo+json, application/gpx+xml, img/png; charset=utf-8',
    'Content-Type': 'application/json; charset=utf-8'
}
response = requests.post('http://localhost:3000', json=body, headers=headers)
print(response)
Out[27]: <Response [200]>