daniel-j-h's Diary

Maplibre and Protomaps are a match made in heaven! Together they are democratizing vector maps! There’s no better time to get involved than now; let me set the scene first.

Maplibre SDK with Protomaps vector tiles: a map at an angle with a video source playing drone footage overlaid; data © OpenStreetMap contributors

Maplibre is the open source map SDK forked from mapbox-gl-js when Mapbox changed its license and moved away from open source in 2020. Maplibre packages up around seven years of state-of-the-art map renderer development and is best in class for rendering and styling vector tiles.

Protomaps is a relatively new project that is fundamentally changing how vector maps are distributed. Protomaps can package up the whole world into a single .pmtiles file of around 100 GB. Statically hosting this one file gets us global vector tiles! Magic!

Why are the two projects such a good match? Maplibre offers an amazing map SDK, and Protomaps delivers global vector tile data very efficiently with little to no engineering. You can host a Protomaps file on your Raspberry Pi, put smaller extracts on GitHub Pages, or statically host them on a server or in the cloud, all with very little engineering required!

Check out what others have to say

In short: Maplibre and Protomaps are democratizing maps! Now’s the best time to get involved!

How to get involved

Over the last few weeks I looked into Maplibre and Protomaps to see where we’re at. While all the bits and pieces are there, it can be daunting to get started and to understand how it all fits together.

While Protomaps solves vector data distribution, for a map we usually also want

  • fonts: glyph files in the specific format maplibre expects
  • sprites: for maplibre map icons
  • terrain: for hillshading
  • bathymetry: to style ocean depths

and this is where these projects could use a little help in terms of documentation, examples, and tutorials. If you want to get involved, now’s the best time, and I suggest spreading the word about Maplibre and Protomaps.
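These assets all get wired together in a MapLibre style document. As a minimal sketch, here is what such a style could look like, written out from Python; every URL is a placeholder, and the pmtiles:// source assumes your application registers the PMTiles protocol with maplibre first:

```python
import json

# A minimal MapLibre style document pointing at static assets. All URLs are
# placeholders -- swap in wherever you actually host your tiles, glyphs, and
# sprites. The "pmtiles://" source URL assumes the PMTiles protocol has been
# registered with MapLibre in your application code.
style = {
    "version": 8,
    "glyphs": "https://example.com/fonts/{fontstack}/{range}.pbf",
    "sprite": "https://example.com/sprites/basemap",
    "sources": {
        "protomaps": {
            "type": "vector",
            "url": "pmtiles://https://example.com/world.pmtiles",
        }
    },
    "layers": [
        {
            "id": "background",
            "type": "background",
            "paint": {"background-color": "#e0dfdf"},
        }
    ],
}

# Write the style out so a maplibre map can load it
with open("style.json", "w") as f:
    json.dump(style, f, indent=2)
```

From here, fonts, sprites, and tiles are all just static files next to each other.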

For custom icons and sprites I started writing down my experience

For terrain there are recent terrain models freely available

For landcover there is a Daylight distribution processing global landcover data

In terms of awareness, Maplibre and Protomaps could use some love, too!

Wiki: help improve the wiki pages

Get involved by writing guides and tutorials! For example: 3D markers for a map at an angle

Spread the word on mastodon, on mailing lists, in your local community!

Now’s the time to seize the means of map production! Let’s get started!

Servus, Bayern! After releasing robosat v1.2 I trained it on 80 cm aerial imagery to predict all buildings in Bavaria. tl;dr: here you can find the checkpoint, ready to use.

Robosat is an open source end-to-end pipeline for feature extraction from aerial and satellite imagery seamlessly integrating with OpenStreetMap for automated dataset creation.

I downloaded the CC-BY-3.0 80 cm aerial imagery from 2018 for Bavaria that the Bayerische Vermessungsverwaltung provides. I then used the open source robosat v1.2 release to automatically create a training and validation dataset and trained on my GPU rig. Within days we get reasonable results without any manual work involved.

For detailed instructions on how to create a dataset and run the robosat pipeline see my previous diary where I explain the process on drone imagery.

Here is a heatmap of all the z17 tiles in Bavaria where there are buildings

bavaria-dop80-heatmap

And here are two examples of the segmentation probabilities I get

bavaria-dop80-example-0

bavaria-dop80-example-1

In the second image there are buildings in OpenStreetMap but not in the predictions. This is because the aerial imagery shows construction sites where there are now buildings.


I released the trained checkpoint here which allows you to

  • efficiently predict buildings on your laptop; we provide pre-built Docker images for robosat v1.2 to make this a single command

  • use the checkpoint when training your own models on your GPUs, so that you only have to fine-tune the model and will reach solid results sooner and more easily

As usual I’m happy to hear your feedback; hit me up in the comments here, on GitHub tickets, or in the robosat channel on the OSMUS or The Spatial Community Slack.

I’m happy to release RoboSat v1.2.0 — RoboSat is an open source end-to-end pipeline for feature extraction from aerial and satellite imagery seamlessly integrating with OpenStreetMap for automated dataset creation.

This release is powered by major community contributions: the state-of-the-art Lovász pixel-wise segmentation loss, better metrics, a road extraction dataset creation handler, and bug fixes! Thank you to our contributors!

v1.2.0 is also the first release shifting from a Mapbox-sponsored development process to a community-owned one (after I quit Mapbox at the end of last year). New features, bug fixes, training, and prediction all happen on my personal GPU rig now.

machine learning open gpu rig

6 x GTX 1080 TI. Yes, the LEDs are necessary - they make it go fast

Here is an overview of what you will find in the v1.2.0 release

The pre-built docker images are now the recommended way of using robosat:


Here is an example of a model trained on an automatically generated road dataset

Here is an example of the new Lovász loss after only the first few epochs; 80 cm / z17 aerial imagery for Bavaria, Germany

And an example of where we fall short. For the following location, OpenStreetMap contains buildings that robosat did not detect in the aerial imagery because they are construction sites. Should robosat flag construction sites as buildings, too? Should OpenStreetMap not map construction sites as buildings? Or is the imagery outdated? These questions define the boundaries of what we can detect.


Here is a preview of what I am currently playing around with for potential future releases

These features all need to be benchmarked with respect to segmentation improvements, but also model complexity, training and prediction runtime, and similar trade-offs. Stay tuned!


As usual hit me up for feedback be it at OSM hack weekends, on tickets, IRC, or the OSMUS Slack. Also check out previous diary posts about robosat:

I’m happy to announce the RoboSat v1.1.0 release — our open source end-to-end pipeline for feature extraction from aerial and satellite imagery.

This release is powered by major community contributions from features like fine-tuning trained models, to training and prediction speedups, and bug fixes! Thank you to our contributors!

Here is an overview of what you will find in the v1.1.0 release

You can find automatically built Docker images at https://hub.docker.com/r/mapbox/robosat/


For our next release we already have a couple of amazing community contributions

Here’s why you should be excited about the Lovász loss: the following shows a preview

Left to right: image / label / cross entropy / mIoU / Lovász

In contrast to the current default cross-entropy loss, the Lovász loss will be an amazing default for robosat’s binary models and will greatly improve common feature extraction use-cases!

Again, thanks for these amazing community contributions!


As usual hit us up for feedback be it at OSM hack weekends, on tickets, IRC, or the OSMUS Slack. Also check out previous diary posts about robosat:

RoboSat ❤️ Tanzania

Posted by daniel-j-h on 5 July 2018 in English.

Recently at Mapbox we open sourced RoboSat, our end-to-end pipeline for feature extraction from aerial and satellite imagery. In the following I will show you how to run the full RoboSat pipeline on your own imagery, using drone imagery from the OpenAerialMap project in Tanzania as an example.

Goal

For this step by step guide let’s extract buildings in the area around Dar es Salaam and Zanzibar. I encourage you to check out the amazing Zanzibar Mapping Initiative and OpenAerialMap for context around the drone imagery and where these projects are heading.

High-level steps

To extract buildings from drone imagery we need to run the RoboSat pipeline consisting of

  • data preparation: creating a dataset for training feature extraction models
  • training and modeling: segmentation models for feature extraction in images
  • post-processing: turning segmentation results into cleaned and simple geometries

I will first walk you through creating a dataset based on drone imagery available on OpenAerialMap and corresponding building masks bootstrapped from OpenStreetMap geometries. Then I will show you how to train the RoboSat segmentation model to spot buildings in new drone imagery. And in the last step I will show you how to use the trained model to predict simplified polygons for detected buildings not yet mapped in OpenStreetMap.

Data Preparation

The Zanzibar Mapping Initiative provides their drone imagery through OpenAerialMap.

Here is a map where you can manually navigate this imagery.

To train RoboSat’s segmentation model we need a dataset consisting of Slippy Map tiles for drone images and corresponding building masks. You can think of these masks as binary images: zero where there is no building, one for building areas.

Let’s give it a try for Dar es Salaam and Zanzibar, fetching a bounding box to start with.

Let’s start by extracting building geometries from OpenStreetMap and figuring out where we need drone imagery for the training dataset. To do this we need to cut out the area we are interested in from OpenStreetMap.

Our friends over at Geofabrik provide convenient and up-to-date extracts we can work with. The osmium-tool then allows us to cut out the area we are interested in.

wget --limit-rate=1M http://download.geofabrik.de/africa/tanzania-latest.osm.pbf

osmium extract --bbox '38.9410400390625,-7.0545565715284955,39.70458984374999,-5.711646879515092' tanzania-latest.osm.pbf --output map.osm.pbf

Perfect, now we have a map.osm.pbf for Dar es Salaam and Zanzibar to extract building geometries from!

RoboSat comes with a tool rs extract to extract geometries from an OpenStreetMap base map.

rs extract --type building map.osm.pbf buildings.geojson

Now that we have a buildings.geojson with building geometries we need to generate all Slippy Map tiles which have buildings in them. For buildings zoom level 19 or 20 seems reasonable.

rs cover --zoom 20 buildings.geojson buildings.tiles
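Under the hood, covering geometries with tiles boils down to standard Web Mercator tile math. A minimal sketch of the lon/lat to tile-index conversion (the example coordinates are merely illustrative, roughly in the Zanzibar area):

```python
import math

def lonlat_to_tile(lon: float, lat: float, zoom: int) -> tuple:
    """Convert WGS84 lon/lat to Slippy Map tile indices (x, y) at a zoom level."""
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# Roughly in the Zanzibar area, at zoom 20
print(lonlat_to_tile(39.19, -6.16, 20))
```

Tools like rs cover apply exactly this kind of mapping to every geometry to list all covering tiles.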

Based on the generated buildings.tiles file we then can

  • download drone imagery tiles from OpenAerialMap, and
  • rasterize the OpenStreetMap geometries into corresponding mask tiles

Here is a preview of what we want to generate and train the segmentation model on.

If you look closely you will notice the masks are not always perfect. Because we will train our model on thousands of images and masks, a slightly noisy dataset will still work fine.

The easiest way for us to create the drone image tiles is through the OpenAerialMap API. We can use its /meta endpoint to query all available drone images within a specific area.

http 'https://api.openaerialmap.org/meta?bbox=38.9410400390625,-7.0545565715284955,39.70458984374999,-5.711646879515092'

The response is a JSON array with metadata for all drone imagery within this bounding box. We can filter these responses with jq by their attributes, e.g. by acquisition date or by user name.

jq '.results[] | select(.user.name == "ZANZIBAR MAPPING INITIATIVE") | {user: .user.name, date: .acquisition_start, uuid: .uuid}'

Which will give us one JSON object per geo-referenced and stitched GeoTIFF image.

{
  "user": "ZANZIBAR MAPPING INITIATIVE",
  "date": "2017-06-07T00:00:00.000Z",
  "uuid": "https://oin-hotosm.s3.amazonaws.com/5ac7745591b5310010e0d49a/0/5ac7745591b5310010e0d49b.tif"
}
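If you prefer Python over jq, the same filtering can be sketched like this; the response data below is made up but mirrors the shape of the /meta results:

```python
def filter_scenes(results, user_name):
    """Mimic the jq filter: keep scenes uploaded by a given user and
    project out user name, acquisition date, and GeoTIFF uuid."""
    return [
        {
            "user": scene["user"]["name"],
            "date": scene["acquisition_start"],
            "uuid": scene["uuid"],
        }
        for scene in results
        if scene["user"]["name"] == user_name
    ]

# A tiny, made-up response in the shape of the /meta endpoint results
results = [
    {"user": {"name": "ZANZIBAR MAPPING INITIATIVE"},
     "acquisition_start": "2017-06-07T00:00:00.000Z",
     "uuid": "https://oin-hotosm.s3.amazonaws.com/abc123/0/abc123.tif"},
    {"user": {"name": "someone-else"},
     "acquisition_start": "2018-01-01T00:00:00.000Z",
     "uuid": "https://example.com/other.tif"},
]
print(filter_scenes(results, "ZANZIBAR MAPPING INITIATIVE"))
```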

Now we have two options

  • download the GeoTIFFs and cut out the tiles where there are buildings, or
  • query the OpenAerialMap API’s Slippy Map endpoint for the tiles directly

We can tile the GeoTIFFs with a small tool on top of rasterio and rio-tiler. Or for the second option we can download the tiles directly from the OpenAerialMap Slippy Map endpoints (changing the uuids).

rs download https://tiles.openaerialmap.org/5ac626e091b5310010e0d480/0/5ac626e091b5310010e0d481/{z}/{x}/{y}.png buildings.tiles

Note: OpenAerialMap provides multiple Slippy Map endpoints, one for every GeoTIFF.

In both cases the result is the same: a Slippy Map directory with drone image tiles of size 256x256 (by default; you can run the pipeline with 512x512 images for some efficiency gains, too).
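For illustration, here is how a {z}/{x}/{y} endpoint template expands into tile URLs and into paths inside a Slippy Map directory; the endpoint URL is a placeholder, not a real one:

```python
def tile_url(template: str, z: int, x: int, y: int) -> str:
    """Fill a {z}/{x}/{y} endpoint template for one tile."""
    return template.format(z=z, x=x, y=y)

def tile_path(root: str, z: int, x: int, y: int, ext: str = "png") -> str:
    """The same tile's location inside a Slippy Map directory structure."""
    return f"{root}/{z}/{x}/{y}.{ext}"

# Placeholder endpoint; real ones come from the OpenAerialMap metadata
template = "https://tiles.example.com/endpoint/{z}/{x}/{y}.png"
print(tile_url(template, 20, 638437, 542264))
print(tile_path("images", 20, 638437, 542264))
```

This z/x/y layout is what preserves each tile’s geo-reference through the whole pipeline.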

To create the corresponding masks we can use the extracted building geometries and the list of tiles they cover to rasterize image tiles.

rs rasterize --dataset dataset-building.toml --zoom 20 --size 256 buildings.geojson buildings.tiles masks

Before rasterizing we need to create a dataset-building.toml; have a look at the parking lot config RoboSat comes with and change the tile size to 256 and the classes to background and building (we only support binary models right now). Other configuration values are not needed right now and we will come back to it later.

With downloaded drone imagery and rasterized corresponding masks, our dataset is ready!

Training and modeling

The RoboSat segmentation model is a fully convolutional neural net which we will train on pairs of drone images and corresponding masks. To make sure these models can generalize to images never seen before we need to split our dataset into:

  • a training dataset we train the model on
  • a validation dataset we calculate metrics on after training
  • a hold-out evaluation dataset if you want to do hyper-parameter tuning

The recommended ratio is roughly 80/10/10 but feel free to change that slightly.

We can randomly shuffle our buildings.tiles, split it into three files according to our ratio, and use rs subset to split the Slippy Map directories.
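A minimal sketch of that shuffle-and-split step in Python (file handling left out; the function just splits a list of tile ids):

```python
import random

def split_dataset(tiles, ratios=(0.8, 0.1, 0.1), seed=42):
    """Shuffle a list of tiles and split it into training,
    validation, and hold-out evaluation subsets."""
    assert abs(sum(ratios) - 1.0) < 1e-9
    tiles = list(tiles)
    random.Random(seed).shuffle(tiles)  # deterministic shuffle for reproducibility
    n_train = int(len(tiles) * ratios[0])
    n_val = int(len(tiles) * ratios[1])
    return (tiles[:n_train],
            tiles[n_train:n_train + n_val],
            tiles[n_train + n_val:])

tiles = [f"20/{x}/{y}" for x in range(10) for y in range(10)]  # 100 fake tile ids
train, val, holdout = split_dataset(tiles)
print(len(train), len(val), len(holdout))
```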

rs subset images validation.tiles dataset/validation/images
rs subset masks validation.tiles dataset/validation/labels

Repeat for training and evaluation.

Before training the model we need to calculate the class distribution since background and building pixels are not evenly distributed in our images.

rs weights --dataset dataset-building.toml
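To see what such weights do, here is one common inverse-frequency scheme sketched in Python; RoboSat’s exact computation may differ:

```python
def class_weights(pixel_counts):
    """Turn per-class pixel counts into inverse-frequency weights,
    one common way to counter class imbalance in the loss."""
    total = sum(pixel_counts)
    freqs = [c / total for c in pixel_counts]
    weights = [1.0 / f for f in freqs]
    # normalize so the weights sum to the number of classes
    s = sum(weights)
    return [w * len(weights) / s for w in weights]

# e.g. 95% background pixels, 5% building pixels
print(class_weights([950_000, 50_000]))
```

The rare building class ends up weighted much higher than the abundant background class.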

Save the weights in the dataset configuration file, which training will then pick up. We can now adapt the model configuration file, e.g. enabling GPUs (CUDA) and then start training.

rs train --model model-unet.toml --dataset dataset-building.toml

For each epoch the training process saves the current model checkpoint and a history showing you the training and validation loss and metrics. We can pick the best checkpoint by looking at the validation plots.

Using a saved checkpoint allows us to predict segmentation probabilities for every pixel in an image. These segmentation probabilities indicate how likely each pixel is background or building. We can then turn these probabilities into discrete segmentation masks.

rs predict --tile_size 256 --model model-unet.toml --dataset dataset-building.toml --checkpoint checkpoint-00038-of-00050.pth images segmentation-probabilities

rs masks segmentation-masks segmentation-probabilities

Note: both rs predict and rs masks transform Slippy Map directories and create .png files with a color palette attached for visual inspection.

These Slippy Map directories can be served via an HTTP server and then visualized directly in a raster map layer. We also provide an on-demand tile server, rs serve, which does the segmentation on the fly; it is neither efficient nor does it handle post-processing (tile boundaries, de-noising, vectorization, simplification), so it should only be used for debugging purposes.

If you manually check the predictions you will probably notice

  • the segmentation masks already look okay-ish for buildings
  • there are false positives where we predict buildings but there are none

The false positives are due to how we created the dataset: we bootstrapped it from tiles with buildings in them. Even though these tiles contain some background pixels, they don’t contain enough background (so-called negative samples) to properly learn what is not a building. If we never showed the model a single image of water, it will have a hard time classifying it as background.

There are two ways for us to approach this problem:

  • add many randomly sampled background tiles to the training set, re-compute class distribution weights, then train again, or
  • use the model we trained on the bootstrapped dataset and predict on tiles where we know there are no buildings; if the model tells us there is a building put these tiles into the dataset with an all-background mask, then train again

The second option is called “hard-negative mining” and allows us to come up with negative images which contribute most to the model learning about background tiles. We recommend this approach if you want a small, clean, and solid dataset and care about short training time.
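One round of hard-negative mining can be sketched as follows; predict here is a stand-in for running the trained checkpoint on a tile and returning its maximum building probability:

```python
import random

def mine_hard_negatives(candidate_tiles, predict, threshold=0.5, budget=100):
    """Sketch of one hard-negative mining round: predict on tiles believed
    to be all background and keep those where the model still 'sees' a
    building. `predict` stands in for running the trained checkpoint."""
    sampled = random.sample(candidate_tiles, min(budget, len(candidate_tiles)))
    return [t for t in sampled if predict(t) >= threshold]

# Toy stand-in: pretend two tile ids confuse the model
confusing = {"20/1/1", "20/5/9"}
tiles = [f"20/{x}/{y}" for x in range(10) for y in range(10)]
hard_negatives = mine_hard_negatives(tiles, lambda t: 0.9 if t in confusing else 0.1)
print(sorted(hard_negatives))
```

The returned tiles would then get an all-background mask and be added to the dataset before re-training.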

For hard-negative mining we can randomly sample tiles which are not in buildings.tiles and predict on them with our trained model. Then we make use of the rs compare tool to create visualizations showing the images without buildings in them next to the predictions.

rs compare visualizations images segmentation-masks

After making sure these are really background images and not just unmapped buildings in OpenStreetMap, we can put the negative samples into our dataset with a corresponding all-background mask. Then run rs weights again, update the dataset config, and re-train.

It is common to do a couple rounds of hard-negative mining and re-training, resulting in a solid and small dataset which helps the model most for learning.

Congratulations, you now have a solid model ready for prediction!

Here are the segmentation probabilities I got out after spending a few hours of hard negative mining.

Interestingly, the model is not entirely sure about building construction sites. It is on us to make an explicit decision when creating the dataset and when doing hard-negative mining: do we want to include building construction sites or not?

These edge-cases occur with all features and make up the boundaries of your feature’s visual appearance. Are house-boats still buildings? Are parking lots without parking aisle lane markings still parking lots? Make a call and be consistent.

Finally, the post-processing steps are responsible for turning the segmentation masks into simplified and vectorized GeoJSON features, potentially spanning multiple tiles. We also provide tools for de-duplicating detections against OpenStreetMap to filter out already mapped features.

I won’t go into post-processing details in this guide: the segmentation masks based on this small training dataset are still a bit too rough for post-processing to work well, and the RoboSat post-processing is currently tuned to parking lots at zoom level 18, so I had to make some in-place adaptations when running it.

Summary

In this step-by-step guide we walked through the RoboSat pipeline from creating a dataset, to training the segmentation model, to predicting buildings in drone imagery. All tools and datasets used in this guide are open source and openly available, respectively.

Give it a try! https://github.com/mapbox/robosat

I’m happy to hear your feedback, ideas and use-cases either here, on a ticket, or by mail.

At Mapbox we are happy to open source RoboSat, our production-ready end-to-end pipeline for feature extraction from aerial and satellite imagery. In the following I describe technical details, how it will change the way we make use of aerial and satellite imagery, and how OpenStreetMap can benefit from this project.

Berlin aerial imagery, segmentation mask, building outlines, simplified GeoJSON polygons

Live on-demand segmentation tile server for debugging purpose

Here is how RoboSat works.

The prediction core is a segmentation model — a fully convolutional neural net which we train on pairs of images and masks. The aerial imagery we download from our Mapbox Maps API in all its beauty. The masks we extract from OpenStreetMap geometries and rasterize them into image tiles. These geometries might sometimes be coarsely mapped but automatically extracting masks allows us to quickly bootstrap a dataset for training.

We then have two Slippy Map directory structures with images and corresponding masks. The Slippy Map directory structure helps us in preserving a tile’s geo-reference which will allow us later on to go back from pixels to coordinates. It is RoboSat’s main abstraction and most pipeline steps transform one Slippy Map directory into another Slippy Map directory.

We then train our segmentation models on (potentially multiple) GPUs and save their best checkpoints. We implemented our model architectures in PyTorch and are using GPUs, specifically AWS p2/p3 instances, and an NVIDIA GTX 1080 TI to keep our Berlin office warm during winter.

When we use the checkpoints for prediction on a Slippy Map directory with aerial imagery we get a Slippy Map directory with probabilities for every pixel in image tiles:

Parking lot prediction; probability scales saturation (S) in HSV colorspace

Which we turn into segmentation masks handling model ensembles and tile borders:

Smooth predictions across tile boundaries. Do you see tile boundaries here? No? Great!

Serializing the probabilities in quantized form and only storing binary model outputs allows us to save results in single-channel PNG files which we can attach continuous color palettes to for visualization. We do the same for masks and then make use of PNG compression to save disk space when scaling up this project e.g. across all of North America.

Based on the segmentation masks we then do post-processing to remove noise, fill in small holes, find contours, handle (potentially nested) (multi-)polygons, and simplify the shapes with Douglas-Peucker:

Segmentation masks, noise removal, restoring connectivity, finding contours
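Of these steps, shape simplification is the easiest to illustrate. Here is a compact, self-contained Douglas-Peucker sketch, not RoboSat’s actual implementation:

```python
def douglas_peucker(points, epsilon):
    """Simplify a polyline: drop points closer than `epsilon` to the chord
    between the endpoints, recursing on the farthest point otherwise."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]

    def dist(p):
        # perpendicular distance of p to the endpoint chord
        x0, y0 = p
        num = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1)
        den = ((y2 - y1) ** 2 + (x2 - x1) ** 2) ** 0.5
        return num / den if den else ((x0 - x1) ** 2 + (y0 - y1) ** 2) ** 0.5

    i, d = max(((i, dist(p)) for i, p in enumerate(points[1:-1], 1)),
               key=lambda t: t[1])
    if d <= epsilon:
        return [points[0], points[-1]]
    left = douglas_peucker(points[:i + 1], epsilon)
    right = douglas_peucker(points[i:], epsilon)
    return left[:-1] + right

# A noisy, nearly straight edge collapses to its endpoints
print(douglas_peucker([(0, 0), (1, 0.1), (2, -0.1), (3, 0)], 0.5))
```

Real corners survive: a point far enough from the chord is always kept.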

We then transform pixels in Slippy Map tiles into world coordinates: GeoJSON features. In addition we handle tile borders and de-duplicate against OpenStreetMap to filter out predictions which are already mapped.

The end result is a GeoJSON file with simplified (multi-)polygon recommendations. Thanks robots!

Here is an example visualizing the prediction pipeline:

Aerial imagery, segmentation probabilities, masks, extracted features, merging features across tile boundaries

I see RoboSat as a building block for multiple use-cases and projects:

  • RoboSat can “look” at every edit in OpenStreetMap in real-time to flag suspicious changesets. At the same time it can help to let good looking changesets go through without manual verification.
  • RoboSat can tell you how complete the map is in a specific area for a specific feature. For example: “Buildings in Seattle are 90% mapped”. And then it can show you unmapped buildings and polygon recommendations for them.
  • RoboSat can be integrated into imagery platforms like OpenAerialMap or toolchains like OpenDroneMap to generate a better understanding of the area minutes after flying your drone.

And while the possibilities are endless I want to emphasize that RoboSat is neither meant for fully automated mapping nor capable of doing so. We will use RoboSat as a supporting tool but not for automated imports.

In the coming months we will invest into RoboSat to expand it to multiple features like buildings and roads (what we already have internally; see images at the top), better handle variations in geography, imagery quality and zoom levels — all while making sure the pipeline stays generic and scalable.

If you want to give RoboSat or related projects a go check out Pratik’s note about using Mapbox imagery for machine learning.

Happy to hear your feedback; and feel free to open issues in the RoboSat repository for feature requests, ideas, questions, or bug reports :)

Hallo RoboSat!

Posted by daniel-j-h on 12 June 2018 in German (Deutsch).

We (Mapbox) have just released RoboSat on GitHub under an MIT license. RoboSat is a generic ecosystem for extracting data from aerial and satellite imagery. In the following I describe the technical implementation and how it can be used for OpenStreetMap.

Berlin aerial imagery, segmentation masks, building outlines, simplified GeoJSON polygons

On-demand tile server for debugging purposes

Here is an overview of RoboSat.

RoboSat’s core is a segmentation model — a fully convolutional neural net that we train on pairs of images and masks. The aerial imagery we take from the Mapbox Maps API; the masks we generate from OpenStreetMap geometries which we rasterize into images. This lets us bootstrap a dataset for training the segmentation model relatively quickly and, above all, fully automatically (and then refine it manually).

The dataset then consists of two Slippy Map directory structures with images and masks. The Slippy Map directory structure helps us preserve the geo-referencing of images and masks, which we later need to get from pixels back to coordinates. In general, the Slippy Map directory structure is the main data structure in RoboSat: most tools transform one Slippy Map directory structure into another.

We then train the segmentation model and save the best checkpoints. We implemented the segmentation models in PyTorch and run them on AWS p2/p3 instances, and on a GTX 1080 TI that keeps our Berlin office warm in winter.

When we then run the trained models on a Slippy Map directory structure of aerial or satellite imagery, we get probabilities for every pixel in the images:

Parking lot probabilities; probabilities scale the S component in the HSV colorspace

We then turn these probabilities into masks, handling Slippy Map tile borders and probabilities from multiple models:

Masks across Slippy Map tile borders. No tile borders to be seen? Perfect!

We store both probabilities and masks in optimized form as single-channel PNG files and attach a color palette that lets us display these tiles directly in a map layer.

Based on the masks we then remove noise, fill in smaller holes, extract contours and, building on those, (multi-)polygons, which we finally simplify with Douglas-Peucker:

Masks, noise removal, hole filling, contour finding

Here the geo-referencing of the Slippy Map directory structure allows us to generate GeoJSON features with coordinates from pixels in images. On top of that we merge features across tile borders and de-duplicate against OpenStreetMap geometries that are already mapped.

As a result we get a GeoJSON file with simplified (multi-)polygons!

Here is an animation of the segmentation pipeline:

Aerial imagery, probabilities, masks, extracted features, features across tile borders

I see RoboSat as a building block for several use-cases and projects:

  • RoboSat can “look” at every OpenStreetMap edit in real time and flag potentially bad changes. At the same time, we can let changes that RoboSat approves of pass through our validation pipeline.
  • RoboSat can run a completeness analysis and tell us how complete OpenStreetMap is in certain areas for certain features, e.g. “buildings in Bamberg are 90% mapped”, and generate suggestions for the buildings not yet mapped.
  • RoboSat can be integrated into platforms like OpenAerialMap and toolchains like OpenDroneMap to generate an automatic analysis minutes after a drone flight.

Even though the possibilities are boundless, I want to stress that RoboSat is neither intended nor designed for fully automated mapping. We will use RoboSat as a supporting tool, but not for automated imports.

In the coming months we will keep investing in RoboSat to implement more features, such as buildings and roads (which we already have internally; see images above), to better handle variations in imagery quality and zoom levels, and to make sure RoboSat stays generic and scalable.

Side note: Pratik recently wrote a diary entry about the terms of use of Mapbox satellite imagery for machine learning use-cases. For further aerial imagery I recommend a look at the wiki.

I’m open to feedback and constructive criticism; for feature requests, ideas, or bug reports please open a ticket in the RoboSat repository! :)

Alternative Routes

Posted by daniel-j-h on 25 May 2018 in English.

In the following I describe how we implemented reasonable alternative routes in the Open Source Routing Machine’s Multi-Level Dijkstra mode.

Final result: generating reasonable alternative routes incredibly efficiently

Our main goals when designing and implementing reasonable alternative routes were two-fold:

  • provide reasonable alternative routes for end-users and for use-cases such as generating many alternative routes and letting users or developers decide between them (e.g. a runner’s app: start and end are the same, but a different route is provided each day), and
  • guarantee fast query times in the tens to low hundreds of milliseconds, fitting the Open Source Routing Machine’s trade-off of doing heavy pre-processing and then getting incredibly fast exact responses

But what exactly do we mean by “reasonable” alternative routes? When we find a fastest route and compute alternatives these are the criteria we try to optimize for:

  • the alternative route must not be more than x% longer than the fastest route; the threshold also depends on the route’s length (think: 10 minutes vs. 13 minutes is fine, 10 hours vs. 13 hours is not)
  • the alternative route must be at least x% different from the fastest route
  • the alternative route must be “locally optimal”: x% of the detour is a fastest route itself (think: if the alternative route takes a different highway than the fastest route, then you’re on the fastest route to your destination along this highway)

We can tune these thresholds to either make the alternatives look better or return more alternatives to the user.

There are subtle but essential differences in what we implemented compared to what users might expect:

  • we do not specifically return a route using highways and a route avoiding highways, for example. The avoid-x feature works independently of the alternatives implementation, and users can make two requests: one avoiding highways and one using them.
  • we do not return a route e.g. based on fastest time and a route based on shortest distance. Our alternative routes are fastest routes taking traffic and other penalties into account. Different metrics still need different graphs (profiles) until support for multi-metric lands.

Here’s the general idea of the so-called Via-Method we implemented for finding alternative routes for an s -> t request:

  • start bi-directional Dijkstra forward search from s. Save all vertices
  • start bi-directional Dijkstra reverse search from t. Save all vertices
  • when both search spaces meet (and you usually would stop and reconstruct the shortest path) continue both searches “for a bit”
  • intersect both vertex sets: these are the candidate vertices
  • for all candidate vertices c a (potentially arbitrarily bad) alternative route is (s, c, t)
  • evaluate and filter alternative routes based on stretch, overlap, how “reasonable” it is
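The steps above can be played through on a toy graph. This sketch skips the heavy pruning, overlap, and local-optimality checks the real implementation needs, and only keeps the candidate generation plus the stretch filter:

```python
import heapq

def dijkstra(graph, source):
    """Distances from source in a directed weighted graph {u: [(v, w), ...]}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def via_candidates(graph, reverse_graph, s, t, max_stretch=1.25):
    """Toy via-method: every vertex c reached by both the forward and the
    reverse search defines a route s -> c -> t; keep those within
    max_stretch of the optimum."""
    d_s = dijkstra(graph, s)          # forward search from s
    d_t = dijkstra(reverse_graph, t)  # reverse search from t
    shortest = d_s[t]
    return sorted(
        (c, d_s[c] + d_t[c])
        for c in d_s.keys() & d_t.keys()
        if d_s[c] + d_t[c] <= max_stretch * shortest
    )

# A diamond: the top path s-a-t costs 2, the bottom path s-b-t costs 3
graph = {"s": [("a", 1), ("b", 2)], "a": [("t", 1)], "b": [("t", 1)]}
reverse = {"t": [("a", 1), ("b", 1)], "a": [("s", 1)], "b": [("s", 2)]}
print(via_candidates(graph, reverse, "s", "t", max_stretch=1.5))
```

Vertex b surfaces the 3-cost bottom path as an alternative to the 2-cost optimum; in a real road graph this raw candidate set is huge, which is exactly why the pruning described below matters.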

Note: for different techniques check the extensive list of resources linked at the end.
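To make the Via-Method concrete, here is a toy sketch in Python. It runs two complete Dijkstra searches instead of a truly interleaved bi-directional search that stops “a bit” after the frontiers meet, so it over-approximates the candidate set; the names and graph format are illustrative:

```python
import heapq

def dijkstra(adj, source):
    """Plain Dijkstra over an adjacency dict {u: [(v, weight), ...]}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def via_candidates(adj, radj, s, t):
    """Candidate via-vertices are the intersection of both search spaces.
    Each candidate c induces a route (s, c, t) of cost fwd[c] + rev[c];
    the minimum over all candidates is the fastest route's cost."""
    fwd, rev = dijkstra(adj, s), dijkstra(radj, t)
    common = fwd.keys() & rev.keys()
    costs = {c: fwd[c] + rev[c] for c in common}
    return costs, min(costs.values())
```

Candidates whose cost equals the minimum lie on a fastest route; everything above it is a (potentially arbitrarily bad) alternative that still has to be filtered.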

Here is an example for the set of alternative routes you get initially without any pruning:

The main problem here is: the candidate set is huge and needs to be pruned for reasonable candidates.

In addition the filtering needs to be fast and happen gradually: to stay within the time bounds we cannot e.g. extract and evaluate the detailed path of every single alternative candidate. We need to filter alternative candidates down step by step, with cheap high-level criteria first and expensive detailed criteria last.
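That cheap-first cascade can be sketched as an ordered list of predicates; the stage contents and thresholds below are illustrative stand-ins for OSRM’s actual filters:

```python
def filter_candidates(candidates, stages):
    """Apply filter stages in order of increasing cost. Cheap stages
    shrink the candidate set before expensive stages (e.g. anything
    requiring geometry unpacking) ever run."""
    for keep in stages:
        candidates = [c for c in candidates if keep(c)]
        if not candidates:
            break  # nothing left to evaluate
    return candidates

# Two illustrative cheap stages: a stretch bound and a
# partition-cell overlap bound (threshold values are made up).
stages = [
    lambda c: c["cost"] <= 1.25 * 100,    # at most 25% longer than a fastest route of cost 100
    lambda c: c["cells_shared"] <= 0.75,  # at most 75% of partition cells shared
]
```

Only candidates surviving every cheap stage ever reach the expensive geometry-level checks.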

For the initial via-vertices we make use of the Inertial Flow graph partition (osrm-partition). The graph partitioner recursively (and fully in parallel) cuts the graph “in half” using Dinic’s algorithm for the min-cuts.

Here is the search space we then take via-vertices from to construct alternative route candidates. We know all routes from s to t have to go through these candidate nodes due to the partition:

When we then use the partitioner’s lowest-level cells to filter out alternative route candidates that are “too similar” we can already see improvements:

We then focus on local optimality. The overall problem we try to solve is as follows: we could pick a via node for which the alternative path would make an unreasonable detour. Think of picking a via node which leads to a route leaving the highway and then immediately going onto it again.

To guarantee local optimality around the via-vertices we take a look at the two sub-paths around a via node and use the forward and reverse heaps to check whether they are optimal.
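A rough sketch of that test: the sub-path (u, c, w) around the via-vertex is locally optimal if routing through c costs (almost) no more than the best u -> w path. Here the forward/reverse distance dicts stand in for OSRM’s search heaps, shortest() is a hypothetical callback, and epsilon and all names are illustrative:

```python
def locally_optimal(fwd_dist, rev_dist, u, c, w, shortest, epsilon=0.05):
    """fwd_dist holds distances from s (covers u and c, from the forward
    heap); rev_dist holds distances to t (covers c and w, from the
    reverse heap); shortest(u, w) returns the best u -> w cost."""
    cost_u_to_c = fwd_dist[c] - fwd_dist[u]  # along the forward search tree
    cost_c_to_w = rev_dist[c] - rev_dist[w]  # along the reverse search tree
    return cost_u_to_c + cost_c_to_w <= (1.0 + epsilon) * shortest(u, w)
```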

Local sub-path is optimal:

Only then do we unpack the remaining alternative candidates into their detailed paths and run extensive high-quality evaluation checks, yielding these final results:

Last but not least, all of the above has to happen on all pair-wise alternative routes: when we return multiple alternative routes they have to be reasonable not only compared to the shortest path but also compared to each other.
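The pairwise requirement can be sketched as a greedy selection: a candidate is accepted only if it differs enough from the fastest route and from every alternative accepted so far. too_similar is a stand-in for the overlap checks above; all names are illustrative:

```python
def select_alternatives(fastest, candidates, too_similar, max_alternatives=3):
    """Greedily accept candidates (assumed sorted by cost) that differ
    enough from the fastest route and from each already accepted one."""
    accepted = []
    for cand in candidates:
        if too_similar(cand, fastest):
            continue
        if any(too_similar(cand, other) for other in accepted):
            continue
        accepted.append(cand)
        if len(accepted) == max_alternatives:
            break
    return accepted
```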

Give it a try, happy to hear your feedback!

I would like to thank Moritz Kobitzsch who literally has a PhD in alternative routes and was sitting across my desk guiding this adventure!

If you are interested I encourage you to take a look at the original ticket, pull request and implementation. Also check out the extensive list of resources below for further details.

Resources:

Alternativrouten

Posted by daniel-j-h on 25 May 2018 in German (Deutsch).

In the following I describe how we implemented alternative routes in the Multi-Level Dijkstra algorithm of the Open Source Routing Machine.

End product: efficient and plausible alternative routes

When designing and implementing alternative routes, our focus was on:

  • generating high-quality alternative routes: for end users and for use cases that benefit from many alternatives (think of a jogging app where start and destination stay the same over the week but the route should vary), and
  • at the same time guaranteeing fast responses from the routing engine in the range of 10-100 ms, in keeping with the concept of the Open Source Routing Machine: pre-processing is required, but in return exact routes can be computed extremely efficiently and quickly.

Here are the criteria by which we judge the quality of alternative routes:

  • the alternative route should not be more than x% longer than the fastest route; this naturally also depends on the route’s length (e.g. a 13-minute alternative is fine when the fastest route takes 10 minutes; a 13-hour alternative to a 10-hour fastest route is no longer acceptable)
  • the alternative route should differ from the fastest route by at least x%
  • the alternative route should be “locally optimal”: x% of the alternative’s detour is itself a fastest route

We can tune these parameters to get either higher-quality alternative routes or, in general, more alternative routes of potentially lower quality.

What we do not consider in this implementation:

  • we do not give the user an explicit mix of alternative routes such as “use highways” and “avoid highways”. There is an independent feature in the routing engine for that.
  • we do not give the user multi-metric alternative routes, i.e. no alternatives based on e.g. time, distance, and the like. The alternative routes are based on a single metric determined by the profiles and traffic data.

Here is a rough outline of how we compute alternative routes using the so-called “Via-Method”:

  • start the bi-directional Dijkstra forward search from s; save all visited vertices
  • start the bi-directional Dijkstra reverse search from t; save all visited vertices
  • when both search spaces meet (and the search could normally be stopped, since a fastest route has been found), let both searches run “a bit” longer
  • intersect the two vertex sets: these are the vertices where both search spaces overlap
  • for each of these vertices c, a (potentially bad) alternative route can be constructed via the route (s, c, t)
  • evaluate and filter these alternative route candidates based on the criteria above

Below you can see what these alternative route candidates look like:

The main problem now is that there are numerous alternative route candidates that need to be filtered.

Filtering therefore has to be extremely efficient and fast. For this reason we filter step by step, from inexact coarse filters at the start which e.g. do not have to unpack the exact geometry, to high-quality filters which, due to runtime constraints, can only run on the few remaining alternative route candidates.

We obtain the initial via-vertices from the Inertial Flow graph partitioner (osrm-partition). The graph partitioner recursively (and fully in parallel) cuts the graph into two pieces using Dinic’s min-cut algorithm.

Here is the search space for a Munich – Berlin route from which we take the via-vertices for the alternative route candidates. Thanks to the graph partition we know that all routes have to pass through these via-vertices.

We can then use the lowest levels of the partition to filter out alternative route candidates that are “too similar”, without having to unpack the routes’ geometries (which would be extremely expensive).

A further quality criterion for alternative routes is so-called local optimality. The problem local optimality tries to solve is alternative routes that briefly take an unreasonable detour. This could look like the alternative route leaving the highway for a moment, only to get right back onto it shortly afterwards.

To guarantee local optimality we look at the routes around the via-vertices and use the heaps of the forward and reverse searches to test whether these routes are themselves fastest routes.

The local route is itself optimal:

Only now do we unpack the geometries of the alternative route candidates and check our criteria on the detailed routes.

And finally we have to check these criteria not only between the fastest route and the alternative route candidates: as soon as we have found multiple alternative routes, we must check them pairwise against each other as well.

Give it a try with the Open Source Routing Machine - we are happy to hear your feedback!

Special thanks go to Moritz Kobitzsch, who was a great help and whose PhD in alternative routes made his knowledge invaluable!

More details:

Releasing Turn Restriction Detections

Posted by daniel-j-h on 1 February 2018 in English.

We are releasing 10.4k turn restriction detections located across 5.2k intersections for the OpenStreetMap community.

Map

Raw GeoJSON data


These turn restriction detections were sourced by applying our machine learning computer vision models to Bing Streetside imagery. Mapbox has acquired the right to contribute these detections to OSM.

“Detections” are not really the same thing as turn restrictions. Rather, they are suspected, but unverified turn restrictions. We are planning to manually verify these turn restrictions, and our data team is working through the list, but the first pass produced thousands of candidates, so verification will take a while. We do think these can be useful to the OSM community, who may wish to move faster on improving OSM based on these detections.

To be clear, the Streetside imagery is not available for direct editing by the community at large. However, subject to its terms and conditions, it can be used for personal reference for examining the source of the detections, so we have included URLs to the image associated with each detection.

Examples from the Streetside Imagery API:

Motorway Junction Node Placement

Posted by daniel-j-h on 17 January 2018 in English. Last updated on 6 February 2018.

It isn’t always obvious where to position a highway=motorway_junction node that connects a motorway way to a motorway_link way (also known as an off-ramp or slip road). Over the years, mappers have used three different approaches.

Inconsistent placement of junction nodes can affect turn-by-turn navigation software, particularly instruction timing and rerouting. I’d like to raise awareness about the preferred placement, which is at the beginning of the gore, and explain why the other two approaches are suboptimal.

Throughout this post, I will refer to the term gore. A gore is a wedge that separates a ramp from the main motorway. A physical gore may be a barrier or a grassy area, whereas a theoretical gore simulates this separation using pavement markings and empty space. By “gore”, I am referring to the beginning of the theoretical gore, if present, or alternatively the physical gore.

PS: after writing this diary we had a great discussion. See comments below for the recommended “Kreuz Köln-Süd” mapping style.

Approach 1: Map the ramp node at the exit sign

Rationale: In the majority of urban areas a detailed overhead exit sign is located at the last point where the off-ramp maneuver can take place - the physical or theoretical gore.

Drawbacks:

  • Many rural exits (and some urban exits, depending on the state) lack overhead signage. Instead, the only sign is located on the side of the road (at the beginning of the deceleration lane or even earlier), accompanied by a small “Exit” sign at the gore.
  • In many cases, the exit sign occurs well after a lane change restriction (change:lanes) begins (example - note the solid white line). However, lane issues should be remedied through tagging, not geometry alterations.

Example. In this case the exit sign and the theoretical gore are located at the same location. This is the last location where a driver can exit the motorway. When the sign board is right next to the gore this approach makes sense.

Example. The exit sign does not coincide with the theoretical gore in this case; it is positioned much earlier than the gore. Here, mapping the exit at the sign’s position does not make sense.

Approach 2: Map the ramp node where the road begins to widen

Historically, some mappers have preferred to start the ramp at the point the road widens, mainly for aesthetic purposes.

Rationale: This appears smoother on a map rendering as it more accurately represents the path a driver should take to use the ramp.

Drawbacks:

  • If a driver has passed the point where the road begins to widen but has yet to reach the gore (or the start of chevron markings), it’s too early to reroute the user as if they missed the exit.
  • The exit may have a longer deceleration lane. If the junction node is placed at the beginning of the deceleration lane, the user’s distance to the junction node may differ from signage, creating confusion.
  • Does not accurately reflect the features on the ground.

Example (same junction as Approach 1 example):

Approach 3: Map the ramp node at the gore of the main road and ramp

This is the approach documented on the wiki. It balances the needs of renderers and routers.

Rationale:

  • Allows for precise lane mapping, as the junction node reflects reality.
  • Staggered exit lanes should be modeled using turn:lanes tags instead of separate ways.
  • The junction node reflects the exact point where it is no longer possible to cross back onto the highway, consistent with the practice of mapping a dual carriageway only when there’s a definite separation.

Drawbacks:

  • On rendered maps, the turn angle looks sharper than expected when zoomed way in.
  • Less sophisticated routers may consider the actual turn angle to be much sharper than in reality.
  • In some cases, the gore begins well after a lane change restriction (change:lanes) begins (example - note the solid white line). However, it’s better to add the lane change restriction as a change:lanes tag than to draw the entire exit lane as a separate way.

In approach 2 the ramp node is placed where the road begins to widen. In the same example, if approach 3 is used the ramp would appear like this:

Example: The junction node near the gore, where it is no longer possible to change lanes back onto the highway.

Although this approach tries to place the junction node close to the gore, it isn’t necessary to make it exactly flush with the gore. If it were, the motorway_link would meet the motorway at a 90° angle, resulting in unintuitive rendering. A roughly 45° angle strikes a better balance between the needs of renderers and routers.

Conclusion

Based on this evaluation, we believe the best practice for mapping ramps is (Approach 3) map the ramp node at the gore of the main road and ramp. This follows the wider OSM convention of prioritizing mapping based on the physical barriers, is appropriate for diverse geographies, and aligns with developing work around lane guidance.

As next actions, from now on, the team at Mapbox will fix these ramp geometries by following the convention stated in approach 3 when we come across ramps mapped according to other approaches. It would be great to have everyone follow this approach as well when mapping ramps and make changes to existing ramp intersections that you come across. If there is consensus around this approach let’s update the wiki with further details to promote this practice.

Modellierung von Ausfahrten (Modelling Motorway Exits)

Posted by daniel-j-h on 17 January 2018 in German (Deutsch).

It is not always obvious where a highway=motorway_junction node, from which a motorway_link way branches off, should be mapped. In the following I look at the three most commonly used modelling approaches with their advantages and drawbacks, particularly with regard to route planning and navigation announcements.

The following schema serves as the basis, in which the “physical gore” is the physical split of the road and the “theoretical gore” marks the beginning of the surface markings.

Approach I: node at the exit sign

In some cases there is a last overhead sign at which the turn maneuver can still be made.

Drawbacks:

  • Many rural exits, and some urban ones, lack such an overhead sign. Instead there is a sign much earlier at the side of the road, followed by a small “Exit” sign.
  • In many cases the last overhead sign is placed well after a lane change restriction (change:lanes) begins (example with a solid line). These lane change restrictions naturally have no influence on the road geometry.

Example: here the exit sign is well before the actual exit. Splitting the way at this sign makes no sense.

Approach II: node where the road widens

Another option is to place the highway=motorway_junction node at the point where the road begins to split. This style of mapping mainly benefits the aesthetics of map rendering.

Drawbacks:

  • Even if the driver has passed the exit node it is still possible to take the exit, so re-routing would kick in too early.
  • The exit may have a long deceleration lane, which can cause the distance computed by the route planner to disagree with the signage.

Example of such a situation:

Approach III: node at the gore markings

This modelling approach is described in the wiki. The node is placed where a driver has the last opportunity to turn off.

This clears the way for precise lane mapping, and any staggered exit lanes can be tagged as turn:lanes.

Drawbacks:

  • On maps the turn looks sharper than it actually is when you zoom in closely
  • The same goes for routers, although most routers do not simply take the next node for the angle computation anyway.

Lane change restrictions can now be mapped with the change:lanes tag instead of a separate way.

Example:

Conclusion

Based on the evaluation of the three modelling approaches above and their pros and cons, I recommend the last approach: mapping the node at the gore markings.

This modelling approach follows the common OSM convention of mapping physical barriers, applies across diverse countries, and clears the way for precise lane mapping.

Sharp Turns onto Ramps

Posted by daniel-j-h on 23 June 2017 in English. Last updated on 26 February 2018.

Here’s a map I got out of our Open Source Routing Machine when I built a validator for sharp turns onto ramps. The idea behind this validator is that turns onto highway ramps should never be sharp (i.e. sharper than roughly 90 degrees). There is a high chance a turn restriction is missing at those locations.

With these checks enabled I found roughly 30k intersections on the planet. I visualized the results in the map below; the redder the circle, the sharper the turn angle onto the ramp.

Click for interactive map

In the results I found an issue that frequently re-occurred. Consider the situation below: heading east on Goff Mountain Road (the yellow line), there is a ramp on the right that you can legally take. Around the bend, there is another ramp onto the same road for drivers going west on Goff Mountain Road.

If you are driving east on Goff Mountain Road and happen to miss the ramp, re-routing will kick in. When generating a new route past the ramp that you initially missed, OSRM sees in the road network that there is conveniently a second ramp onto the same road. It then basically instructs you to make a nearly 180-degree turn onto the next ramp! A turn restriction onto that second ramp from Goff Mountain Road would fix this issue.

Since the navigation client can pass your bearing (the cardinal direction you’re already moving in) to OSRM, OSRM won’t tell you to make a quick U-turn and try the first ramp again, as in the third frame of the GIF. A turn onto the second ramp, however, isn’t considered problematic.

There are definitely false positives in the data and cases you will never hit in reality. But there are also missing turn restrictions onto ramps that will result in bizarre instructions given back to re-routing requests.

As usual check Mapillary and OpenStreetCam imagery before making changes. Especially look out for old satellite imagery.

Even though the sharp-turns-onto-ramps validator was just a quick evening hack it would be great to add more static road network property validators that detect situations like u-turns on high-speed roads or sudden changes in road class.

Spitze Winkel beim Abbiegen auf Ausfahrten (Sharp Angles when Turning onto Ramps)

Posted by daniel-j-h on 23 June 2017 in German (Deutsch). Last updated on 27 June 2017.

Here is a map I generated with our Open Source Routing Machine while searching for sharp angles when turning onto ramps. Such turns should never occur and are a good indication of missing turn restrictions.

Here is a visualization; the redder, the sharper the angle.

Interactive map

A problem that seems to keep reappearing is the following (see images below): coming from the west on Goff Mountain Road there is an exit on the right-hand side. Shortly after this exit there is a second exit for the opposite direction.

If you are now driving on Goff Mountain Road coming from the west and miss the exit, re-routing gives you the instruction to take the next exit, the one for the opposite direction! A turn restriction is missing here in the OpenStreetMap data.

The immediate U-turn after the exit (third image in the GIF) can be avoided with bearing constraints - for the turn onto the opposite direction’s exit, the route planner needs the turn restriction.

That is of course the worst case, and there are certainly false positives in the generated data as well as situations that would never have such drastic effects.

The Open Source Routing Machine has supported bearing constraints for quite a while now. In the following I want to highlight their use-cases and effects on routing.

Suppose you are driving north and make a routing request. An instruction to take a U-turn — since the routing engine does not know your heading — would be undesirable. This is the primary use-case for setting bearing constraints.

Setting a constraint of bearings=0,90 tells the routing engine you are heading north and want to allow a variation of +-90 degrees around true north. The format is value,range with

  • value in [0, 360] degrees for the direction; 0 degrees represents true north, 90 represents east and so on
  • range in [0, 180] degrees for the allowed variation around the value; a range of 90 degrees allows +-90 degrees

The bearings=0,90 constraint therefore allows the route to start off towards north +-90 degrees, but not against the direction you are currently heading. Sweet!
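The value,range semantics boil down to an angular-distance check. A small sketch of just the arithmetic the parameter describes (not OSRM’s code):

```python
def bearing_allowed(edge_bearing, value, rng):
    """True if edge_bearing (degrees clockwise from true north) lies
    within +-rng degrees of value, wrapping around at 360."""
    diff = abs((edge_bearing - value + 180) % 360 - 180)  # angular distance
    return diff <= rng
```

Note how bearings=0,90 accepts a road heading 350 degrees (only 10 degrees off north) but rejects one heading due south.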

Another interesting use-case I came across at FOSSGIS is fire truck routing, where your vehicles have to arrive on the right side of the street. Using bearing constraints not only for the route’s start but also for its end location, we can let the routing engine generate optimal routes that guide vehicles to the right side of the street.

Here are two routes from a fire department in Berlin to an arbitrarily chosen location:

  • In orange with a bearing constraint of 0,90 guiding the fire trucks to the location from the northern direction (raw request)
  • In green with a bearing constraint of 180,90 guiding the fire trucks to the location from the southern direction (raw request)

Bearings in action

Of course the use-case for end location bearing constraints is not limited to fire truck routing. The same applies to logistics use-cases or optimizing routes based on the probability of finding parking spots.

In short: providing a start location bearing constraint makes users happy. For advanced use-cases in addition set end location bearing constraints.

Check out the documentation for our beloved Open Source Routing Machine.

The Open Source Routing Machine has supported restricting departure and arrival directions for quite a while. In the following I show use cases for this and how the feature affects route finding.

Suppose we are driving north and make a routing request. What we definitely do not want to hear is the instruction to immediately make a U-turn, just because the routing engine does not know our heading. And that is exactly the perfect use case for restricting the direction during route planning.

We can enforce a direction restriction by setting the parameter bearings=0,90, which tells the routing engine that we are heading north and allow a variation of +-90 degrees around north. The option bearings=value,range breaks down as follows:

  • value in [0, 360] degrees for the direction; 0 degrees for north, 90 for east, and so on
  • range in [0, 180] degrees for the allowed variation; a range of 90 degrees allows +-90 degrees

The bearings=0,90 constraint therefore allows the route to start roughly northwards, with a variation of +-90 degrees. We thereby filter out routes that start against our current heading. Perfect!

Another use case I picked up at FOSSGIS is fire truck routing, where the vehicles should arrive on the correct side of the street. If we restrict the direction not (only) at the start of a route but at its end, we can use the routing engine to generate optimal routes that guide us to the correct side of the street.

Here are two routes from a fire department in Berlin (further south in the picture) to an arbitrarily chosen position:

  • In orange with a constraint of 0,90, which has the fire trucks arrive from the north (request details)
  • In green with a constraint of 180,90, which has the fire trucks arrive from the south (request details)

Bearing constraints in action

Of course this is not the only use case for restricting the direction. The same technique can be used in logistics or for optimizing the search for parking.

In short: restricting the direction at the start of the route makes users happy. For advanced use cases it is also worth restricting the direction at the end of the route.

Here is the documentation for the Open Source Routing Machine. Have a browse!

In which I explain the differences between the exit_to= tag on the highway=motorway_junction node and the more recent destination= tag on the way branching off of the motorway. In addition I will show the need for handling both tags and leave you with a tool for automatic rewriting.

To provide great guidance for our Open Source Routing Machine users we want to include ramp information for highway navigation.

There are two established tagging schemes for highway ramp signage:

  • Using the exit_to= tag on the highway=motorway_junction node where the ramp branches off of the motorway. This tagging scheme was established mainly because the default Mapnik style displayed the tag quite nicely.
  • The more recent destination= tag on the way branching off of the motorway.

Why do we have two completely different tagging schemes? Have a look at the following example where a ramp branches off the continuing motorway:

a . . . b . . d .
          ` . c .

In this case the exit_to tag is placed on the node b together with the highway=motorway_junction tag. So far so good. Now consider the common situation where multiple numbered exits diverge at a single point.

            . c
a . . . b `
          ` . d

Now there are destination signs for both the way bc and the way bd. There are workarounds like exit_to:left and exit_to:right tags but as you can imagine these do not suffice for more complex situations.

For these situations the destination tag is the better alternative.

Here is an example of where we need such a workaround.

Mapillary View

Photo by Mapillary user andrewsnow under CC BY-SA 4.0.

The Open Source Routing Machine has supported destination= tags for quite a while now. Unfortunately, handling exit_to= tags on nodes is not easy for us to implement, since we would have to create the road network representation first in order to derive the ways the destination signage is meant for.

There is a similar problem with stop sign nodes: without a direction tag we have to find the nearest intersection to apply the stop sign penalty to the correct turns. But in order to find the nearest intersection we already need the road network. It is simply not compatible with how the routing engine is structured to allow for high-performance parsing at scale.

Unfortunately the exit_to node tag is still quite popular, especially in the US. See the Bay Area as an example:

exit_to

In contrast, the destination way tags:

destination

We clearly have to support both tagging schemes at this point.

This is the reason I spent some time building a tool for rewriting exit_to node tags to destination way tags where unambiguously possible:

https://github.com/mapbox/rewrite-exit-destination-signage

It is quite conservative, taking care of oneway link roads, destination tags that are already present, making sure the highway=motorway_junction tag is present and more. It prefers correctness over quantity.
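The conservative rule can be pictured as: move the tag only when exactly one oneway link road without a destination tag leaves the junction node. A hedged sketch; the data shapes and field names are made up for illustration and are not the tool’s actual internals:

```python
def rewrite_exit_to(node_tags, outgoing_ways):
    """Return (way, destination_value) when the exit_to tag on a
    highway=motorway_junction node can be moved unambiguously to a
    single outgoing link way, else None."""
    if node_tags.get("highway") != "motorway_junction":
        return None
    exit_to = node_tags.get("exit_to")
    if not exit_to:
        return None
    # candidate targets: oneway link roads not yet carrying a destination
    links = [w for w in outgoing_ways
             if w["tags"].get("highway", "").endswith("_link")
             and w["tags"].get("oneway") == "yes"
             and "destination" not in w["tags"]]
    if len(links) != 1:
        return None  # ambiguous or nothing to rewrite: stay conservative
    return links[0], exit_to
```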

For the Bay Area it is able to add roughly 800 additional destination tags in a matter of seconds. A viable pre-processing step for making highway navigation with the Open Source Routing Machine more user-friendly, especially in the US!

Check out the tags’ Wiki pages for more detailed descriptions:

Check out our recent Open Source Routing Machine v5.6 release in case you want to give it a try yourself.

To provide the users of our Open Source Routing Machine with decent instructions, we want to incorporate information about signs at motorway exits.

There are two tagging variants for motorway exit signs:

  • The exit_to= tag, set on the highway=motorway_junction node where the exit way branches off. This tagging style was mainly used because the default Mapnik style rendered it quite nicely.
  • The newer destination= tag on the exit’s way.

Why do we need two fundamentally different tagging variants? Here is an example of a continuing motorway and an exit:

a . . . b . . d .
          ` . c .

Here the exit_to tag can be set on node b together with the highway=motorway_junction tag. So far so good. But what happens with multiple exits that all split off at a single node?

            . c
a . . . b `
          ` . d

Here we want to set exit information for bc as well as for bd. There are workarounds like exit_to:left and exit_to:right, but they fail even in slightly more complex situations.

For exactly these cases the destination tag is the better alternative.

Here is an example where such a workaround is used.

Mapillary View

Photo by Mapillary user andrewsnow, licensed CC BY-SA 4.0.

The Open Source Routing Machine has supported destination tags for some time. To support exit_to tags on nodes we would first have to build the road network in order to then find the ways the signs apply to.

We have the same problem with stop signs lacking a direction tag: we would have to find the nearest intersection the stop sign belongs to in order to then penalize the turns there. But to find the nearest intersection we already need the road network. These assumptions are fundamentally incompatible with the techniques the routing engine uses to parse OpenStreetMap data with high performance at scale.

Unfortunately the exit_to node tag is still very widespread, especially in the USA. See the Bay Area:

exit_to

Compare that to the destination way tag:

destination

Damit ist klar, wir müssen beide Tagging Varianten unterstützen.

Aus den Grund habe ich ein Tool geschrieben, welches exit_to node Tags zu destination way Tags umschreibt wenn es eindeutig möglich ist.

https://github.com/mapbox/rewrite-exit-destination-signage

It handles situations such as exits in one direction only, already existing destination tags, and other edge cases. Correctness is the primary goal.

For the Bay Area, the tool can add around 800 destination tags in a few seconds. A useful pre-processing tool, then, that makes motorway navigation with the Open Source Routing Machine more user-friendly!
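The core rewrite rule can be sketched as follows; this is a hypothetical illustration in Python, not the actual tool's code, and all names and values are assumptions. The conservative idea: only move the tag when exactly one untagged exit link leaves the junction, and bail out otherwise.

```python
def rewrite_exit_to(node, outgoing_ways):
    """Return (way_id, destination) if the exit_to tag on a junction node
    can be moved to exactly one outgoing motorway_link; otherwise None."""
    if node["tags"].get("highway") != "motorway_junction":
        return None
    exit_to = node["tags"].get("exit_to")
    if exit_to is None:
        return None
    # Candidate ways: exit links that do not already carry a destination tag.
    links = [w for w in outgoing_ways
             if w["tags"].get("highway") == "motorway_link"
             and "destination" not in w["tags"]]
    if len(links) != 1:  # ambiguous, e.g. two exits branching off one node
        return None
    return links[0]["id"], exit_to

# Invented example: one junction node, one exit link, one continuing motorway.
node = {"id": 42, "tags": {"highway": "motorway_junction",
                           "exit_to": "San Jose; Santa Cruz"}}
ways = [{"id": 7, "tags": {"highway": "motorway_link"}},
        {"id": 8, "tags": {"highway": "motorway"}}]

print(rewrite_exit_to(node, ways))  # (7, 'San Jose; Santa Cruz')
```

Refusing to rewrite in any ambiguous case is what makes correctness, rather than coverage, the primary goal.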

Here are the tags' wiki pages:

Here is the Open Source Routing Machine v5.6 release to try it out yourself.

A few days ago we released the latest stable Open Source Routing Machine release: version 5.5. In addition to some neat routing engine features, I want to highlight two often overlooked improvements in usability.

Docker Images

We now publish pre-built Docker images for the latest releases to Docker Hub. Setting up your own routing engine and running the HTTP API is now as easy as:

wget http://download.geofabrik.de/europe/monaco-latest.osm.pbf

docker run -t -v $(pwd):/data osrm/osrm-backend:v5.5.2 osrm-extract -p /opt/car.lua /data/monaco-latest.osm.pbf
docker run -t -v $(pwd):/data osrm/osrm-backend:v5.5.2 osrm-contract /data/monaco-latest.osrm
docker run -t -i -p 5000:5000 -v $(pwd):/data osrm/osrm-backend:v5.5.2 osrm-routed /data/monaco-latest.osrm

curl 'http://localhost:5000/route/v1/driving/7.436828612,43.739228054975506;7.417058944702148,43.73284046244549?steps=true'

Hooking up the user-facing frontend is an npm install away.

Rendered API Documentation

We now host up-to-date documentation for both the routing engine's HTTP API as well as for the C++ libosrm library doing the heavy lifting. This is in addition to the existing documentation for the NodeJS wrapper node-osrm.

Check out our re-worked Landing Page and Wiki:

Hopefully this makes it easier to set up OSRM and experiment with OpenStreetMap-based routing. As always I’m more than glad for feedback!
