I’m happy to release RoboSat v1.2.0! RoboSat is an open source, end-to-end pipeline for extracting features from aerial and satellite imagery, integrating seamlessly with OpenStreetMap for automated dataset creation.

This release is powered by major community contributions: a state-of-the-art Lovász pixel-wise segmentation loss, better metrics, a dataset creation handler for road extraction, and bug fixes. Thank you to our contributors!

v1.2.0 is also the first release shifting from a Mapbox-sponsored development process to a community-owned one (after I left Mapbox at the end of last year). New features, bug fixes, training, and prediction all happen on my personal GPU rig now.

Photo: my open machine learning GPU rig with 6 × GTX 1080 Ti. Yes, the LEDs are necessary - they make it go fast.

Here is an overview of what you will find in the v1.2.0 release:

The pre-built Docker images are now the recommended way of using robosat:
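For example, pulling an image and extracting a building dataset can look roughly like this; the image tags, flags, and paths below follow the robosat readme at the time of writing, so double-check them against the current readme before running:

```shell
# Pull the pre-built CPU image (there is also a latest-gpu tag for CUDA hosts).
docker pull mapbox/robosat:latest-cpu

# Extract building geometries from an OSM base map into GeoJSON.
# The paths under /data are placeholders - mount your own working directory.
docker run -it --rm -v $PWD:/data --ipc=host \
    mapbox/robosat:latest-cpu extract --type building \
    /data/map.osm.pbf /data/buildings.geojson
```

The same pattern works for the other `rs` subcommands (`cover`, `download`, `train`, `predict`, and so on): everything after the image name is passed to the robosat tools inside the container.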

Here is an example from a model trained on an automatically generated road dataset:

Here is an example of the new Lovász loss after only the first few epochs, on 80 cm / z17 aerial imagery of Bavaria, Germany:
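For readers curious what the Lovász loss actually computes: it directly optimizes a surrogate of the intersection-over-union (Jaccard) metric by sorting prediction errors and weighting them with the gradient of the Lovász extension of the Jaccard loss. Here is a minimal NumPy sketch of the binary Lovász hinge following Berman et al.; this is an illustration, not robosat's exact implementation:

```python
import numpy as np

def lovasz_grad(gt_sorted):
    # Gradient of the Lovasz extension of the Jaccard loss,
    # evaluated w.r.t. errors sorted in decreasing order.
    gts = gt_sorted.sum()
    intersection = gts - np.cumsum(gt_sorted)
    union = gts + np.cumsum(1.0 - gt_sorted)
    jaccard = 1.0 - intersection / union
    jaccard[1:] = jaccard[1:] - jaccard[:-1]
    return jaccard

def lovasz_hinge_flat(logits, labels):
    # Binary Lovasz hinge loss on flattened per-pixel logits and {0,1} labels.
    signs = 2.0 * labels - 1.0
    errors = 1.0 - logits * signs
    order = np.argsort(-errors)          # sort errors in decreasing order
    errors_sorted = errors[order]
    grad = lovasz_grad(labels[order])
    return np.dot(np.maximum(errors_sorted, 0.0), grad)
```

A confident, correct prediction (large positive logit on a foreground pixel, large negative on background) yields a loss of zero, while misclassified pixels are penalized in proportion to how much they hurt the IoU.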

And an example of where we fall short. For the following location, OpenStreetMap contains buildings robosat did not detect in the aerial imagery because they are construction sites. Should robosat flag construction sites as buildings, too? Should OpenStreetMap not map construction sites as buildings? Or is the imagery outdated? These questions define the boundaries of what we can detect.

Here is a preview of what I am currently playing around with for potential future releases:

These features all need to be benchmarked with respect to segmentation improvements, but also model complexity, training and prediction runtime, and similar trade-offs. Stay tuned!

As usual, hit me up with feedback, be it at OSM hack weekends, in tickets, on IRC, or on the OSMUS Slack. Also check out previous diary posts about robosat:


Comment from maning on 1 June 2019 at 11:44

Yay, new release! I’ll definitely give the Docker build a try!

Comment from Bence Mélykúti on 3 June 2019 at 07:59

This looks exciting! I’ve researched building footprint detection a little; there are commercial providers out there. Are there detectors that output polygonal building footprints? It’s a fair guess that most footprints will be polygonal, even rectangular, and I’d like my detector to know this a priori.

Comment from daniel-j-h on 3 June 2019 at 19:07

I implemented polygonization for the parking use-case we had.

It’s implemented in the rs features tool and can be used once you have the model’s predictions (the probabilities you can see above) and have converted them (potentially multiple sets, for ensembles) into masks with the rs masks tool.
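Concretely, the two steps can look roughly like this; the directory names are placeholders of mine, so check `rs masks --help` and `rs features --help` for the exact arguments:

```shell
# Turn per-tile probabilities (one or more directories, e.g. an ensemble)
# into binary masks.
./rs masks masks/ probs-model-a/ probs-model-b/

# Polygonize the masks into GeoJSON features using the parking handler.
./rs features --type parking masks/ features.geojson
```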

Check out the robosat readme and the linked diary posts above - they go into a bit more detail about how the pipeline works before and after the prediction stage.

Here is the robosat parking polygonization - buildings are more or less the same, and in fact I’m using the parking handler as a building handler. It gets a bit ugly if you want to handle edge cases such as (potentially nested) (multi-)polygons, but oh well.
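At its core, that polygonization traces contours in the binary mask and then simplifies them with Douglas-Peucker (robosat does this via OpenCV). Here is a minimal pure-Python sketch of the simplification step; the function names and the epsilon parameter are mine for illustration, not robosat’s API:

```python
import math

def _point_line_dist(p, a, b):
    # Perpendicular distance from point p to the line through a and b.
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    norm = math.hypot(dx, dy)
    if norm == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * px - dx * py + bx * ay - by * ax) / norm

def simplify(points, epsilon):
    # Douglas-Peucker: drop vertices closer than epsilon to the chord
    # between the endpoints; keep the farthest vertex and recurse.
    if len(points) < 3:
        return list(points)
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        d = _point_line_dist(points[i], points[0], points[-1])
        if d > dmax:
            dmax, idx = d, i
    if dmax <= epsilon:
        return [points[0], points[-1]]
    left = simplify(points[:idx + 1], epsilon)
    right = simplify(points[idx:], epsilon)
    return left[:-1] + right
```

Nearly-collinear runs of contour pixels collapse to their endpoints, while genuine corners (vertices farther than epsilon from the chord) survive - which is exactly the building-corner-preserving behavior you want before emitting polygons.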

The polygonization can definitely be improved.

Check out their work - it’s quite nice, but they also run into edge cases and design trade-offs.

Happy to guide you along if you want to work on this in robosat or have ideas.

Comment from Bence Mélykúti on 5 June 2019 at 07:44

Thank you for your very detailed answer! It’ll take time to work through it.

Comment from daniel-j-h on 8 June 2019 at 13:02

A follow-up post running robosat v1.2 on all of Bavaria’s 80 cm aerial imagery is here.

Comment from outofabluesky on 25 June 2019 at 18:35

Daniel, you’ve done a solid service taking this technology forward. I commend you.
