Visualizing OSM.org's Map Views

Posted by tyr_asd on 4 September 2016 in English. Last updated on 19 September 2016.

OpenStreetMap’s standard map layer is used by many people each day. OSM even provides a dump of the access logs, from which one can see which parts of the world are viewed how many times on any given day. Inspired by Lukas Martinelli’s work on Parsing and Visualizing OSM Access Logs, I’ve worked on a web app to visualize OSM’s tile access logs. In contrast to Lukas’ approach, I wanted to focus on an accurate representation of the data and to make something that works for the whole (Mercator-projected) world.

I’ve ended up with a viewer that takes an (uncompressed) log file from planet.osm.org and produces a two-dimensional histogram for each zoom level: for example, at zoom level 6 in the viewer, each pixel represents the number of accesses of the corresponding osm.org tile at zoom level 14. That’s 8 zoom levels further in – or, put another way, each 256×256px² osm.org tile is represented by a single pixel in the visualization.
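
To make the zoom-level bookkeeping concrete, here is a minimal sketch (the function is made up for illustration, it’s not taken from the app’s code):

// a histogram tile at viewer zoom z with tile coordinates (tx, ty) is a
// 256×256 px image; the pixel at (px, py) inside it stands for the
// osm.org tile at zoom z + 8, since 2^8 = 256 deeper-zoom tiles fit
// along each edge of such a tile
function pixelToOsmTile(z, tx, ty, px, py) {
    return { zoom: z + 8, x: tx * 256 + px, y: ty * 256 + py }
}
// e.g. at viewer zoom 6, each pixel counts the views of one zoom-14 osm.org tile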

The number of accesses of each tile is represented by a logarithmic color palette:

screenshot osm-tile-access-log-viewer
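
Purely as an illustration (the palette actually used in the tool may differ in its details), a logarithmic mapping from view count to a position on a color ramp could look like this:

// sketch: map a tile's view count to a 0..1 position on a color ramp,
// using a log scale so that both low- and high-traffic areas stay visible;
// maxCount is assumed to be the largest count found in the loaded data
function countToRampPosition(count, maxCount) {
    return Math.log(1 + count) / Math.log(1 + maxCount)
}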

You can play with the tool at http://osm-tile-access-log-viewer.raifer.tech; the source code can be found on GitHub and is openly licensed (MIT), of course.

With this, one can compare map views before and after a specific event, for example the recent earthquake in central Italy:

map of map views before and after the recent earthquake in central Italy

But one can also see some interesting artifacts in the data, for example the large number of tile views around null island or those (to me inexplicable) “comet tail” shaped patterns at some Russian cities. Do you have an idea where these artifacts stem from?

Making-of

warning: the rest of this post will be a bit more technical

What bugged me a bit at first was that my initial implementation was quite slow and made the web app unresponsive. On my machine, the first version took about 40 seconds for the initial processing step (between dropping the log file onto the page and the first displayed results), which is quite a lot! Meanwhile, those calculations were blocking the main UI thread and even causing this nasty browser popup to appear:

page unresponsive warning

So, what can we do about that? As always, optimizing this kind of stuff starts with some profiling and goes through multiple iterations of optimizing and refactoring with more profiling in between. In the end, I managed to cut the time down from 40 seconds to a mere 9 seconds in the current version:

  • 40 s – initial version
  • 24 s – low hanging fruit
  • 15 s – optimized parser (ArrayBuffer)
  • 13 s – default rbush accessors
  • 14 s – web worker to render tiles (1 worker)
  • 14 s – web worker to render tiles (4 workers)
  • 9 s – parsing in own (single) thread (4+1 workers)

Let’s go through each of these steps, but let’s start with a short overview of the code structure:

Code overview

The code of the visualization isn’t very intricate: it basically just parses the tile log files (which are txt files containing pairs of tile coordinates z/x/y and their respective view counts, see below), puts them into a spatial index (I’m using rbush) and finally grabs the data from the index whenever a tile needs to be rendered. (The rendering of the tiles itself is just some pixel-pushing onto a canvas, which is quick and wasn’t something I had to look into much here.)

Here’s what the access logs look like (this goes on for ~6,000,000 lines, or about 100 MB of data):

13/4316/2511 20
13/4316/2512 18
13/4316/2513 16
13/4316/2514 14
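
In outline, the pipeline looks roughly like the following sketch (the names are made up and the actual code differs in its details):

// parse the "z/x/y count" lines into one object per tile (hypothetical helper)
var items = parseLogFile(text)

// bulk-load everything into the spatial index
var tree = rbush()
tree.load(items)

// whenever a histogram tile needs to be rendered: query the index for the
// tile's coordinate range and push one pixel per result onto a canvas
var hits = tree.search(boundsOfRequestedTile)
drawTile(canvas, hits)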

low hanging fruit

flame graph of first profiling session

That’s the flame graph of the first profiling session. There are clearly two distinct processing steps: one relatively flat calculation taking about 25 seconds, and another, more recursive one which took about 10 seconds. The second portion is quickly identified as rbush building up its indexes (which is already pretty much fully optimized, I’d say). But what really struck me was that the other part, corresponding to the following few lines of code, took up much more CPU:

    data = data.split("\n")
    data = data.map(function(line) {
        return line.split(/[ \/]/).map(Number)
    })

Pretty basic string operations, or so it looks at first. Looking at the profiler again reveals that, of course, there’s a regular expression (/[ \/]/) in a hot loop, and converting strings to integers using the Number constructor isn’t the fastest thing one can do either.

Getting rid of those two results in the first performance win:

line = line.split(' ')
var coords = line[0].split('/')
return { x: +coords[1], y: +coords[2], zoom: +coords[0], count: +line[1] }

optimized parser (ArrayBuffer)

Now, we’re still opening a function scope for each line in the input data and are working with relatively costly string operations such as split and the + operator (to convert strings to numbers). Getting rid of that was quite fun and resulted in the biggest performance gain, after which parsing was 90% faster than at the beginning!

What I ended up doing was to implement a custom parser that works on the raw byte data (using ArrayBuffers), presuming that the log files are well structured. At its heart is a for loop that walks over all bytes and manually builds up the parsed data:

for (var i = 0; i < view.length; i++) {
    switch (view[i]) {
        default:
            currentInt = currentInt*10 + (view[i] - 48 /*'0'*/)
            break;
        case 10: // '\n'
            data.push({ x: currentCoords[1], y: currentCoords[2], zoom: currentCoords[0], count: currentInt })
            currentCoords = []
            currentInt = 0
            break;
        case 32: // ' '
        case 47: // '/'
            currentCoords.push(currentInt)
            currentInt = 0
            break;
    }
}

One interesting line to note is currentInt = currentInt*10 + (view[i] - 48 /*'0'*/): whenever we don’t see a separating character (newline, space or /), we assume that it must be a digit, whose value we get by subtracting the ASCII code of '0' from it.
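
For context, the view variable used above is just a byte view of the dropped file; reading it looks roughly like this (a sketch, with parse() and droppedFile standing in for the app’s actual names):

// read the dropped log file as raw bytes instead of as one huge string
var reader = new FileReader()
reader.onload = function() {
    var view = new Uint8Array(reader.result) // one array element per byte
    parse(view)                              // the loop shown above
}
reader.readAsArrayBuffer(droppedFile)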

default rbush accessors

The next optimization is a rather small one, but one I came across after the recent 2.0.0 release of rbush: apparently, it’s faster to access named attributes of a JavaScript object than elements of a JavaScript array. Changing the parsed data output to something that can be digested easily by rbush shaved a few more seconds off the preprocessing.
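
Concretely, this means emitting the parsed entries in the shape that rbush 2.x reads with its default accessors (a sketch; the exact item layout in the app may differ):

// point data, so the min and max coordinates coincide;
// zoom and count simply ride along as payload
var item = { minX: x, minY: y, maxX: x, maxY: y, zoom: zoom, count: count }
tree.insert(item)
// querying later uses the same property names:
tree.search({ minX: x0, minY: y0, maxX: x1, maxY: y1 })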

web workers!

Even after all those optimizations, the calculations (even though they are relatively quick by now) are still blocking the UI. That isn’t a big deal during the initial processing, but the main-thread implementation means that rendering the histogram tiles also blocks the browser. And even though each render is quick (typically a couple of milliseconds, up to a few tens of milliseconds), these small interruptions can add up significantly, especially when one pans around quickly or zooms in and out. It’s a bit hard to see in the gif below, but trust me: it feels quite laggy!

laggy zooming/panning

The only solution to this issue is to do the rendering in a separate web worker thread. The implementation is a matter of refactoring the data parsing plus rendering code into a web worker and making sure that the returned data is a transferable buffer object. With a single web worker, this is a bit slower than the non-threaded version, but not by much.
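
Sketched out (the file name, message shape and the ctx canvas context are placeholders, not the actual code), the pattern looks like this: the worker renders a tile into a pixel buffer and transfers the underlying ArrayBuffer back instead of copying it.

// main thread (sketch)
var worker = new Worker('render-worker.js')
worker.postMessage({ z: z, x: x, y: y })
worker.onmessage = function(e) {
    // e.data.pixels arrives as a transferred ArrayBuffer, not a copy
    var img = new ImageData(new Uint8ClampedArray(e.data.pixels), 256, 256)
    ctx.putImageData(img, 0, 0)
}

// render-worker.js (sketch)
onmessage = function(e) {
    var pixels = new Uint8ClampedArray(256 * 256 * 4)
    // ... look up the counts in the index and write RGBA values into pixels ...
    postMessage({ pixels: pixels.buffer }, [pixels.buffer]) // transfer list: don't copy
}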

multi threading, first try

If we’re running a web worker anyway, why not run multiple in parallel? That should make rendering of the tiles even faster, right? Well, not really, at least not with a naive approach: as every worker needs to have its own spatial index and there’s no way to effectively split the input data into distinct chunks that can be rendered independently later on, the total time with 4 workers is basically the same as with a single one (the overhead of having to duplicate the input data eats up any gains from building up the indexes in parallel).

multi threading, second try

Doing multi threading properly in this case means a somewhat larger refactoring, but the effort is worth it in the end: processing gets an additional ~30% faster, and map panning and zooming become even smoother.

Here, I’ve split the data parsing into a separate web worker which runs single-threaded (this could in principle also be parallelized, but it’s not worth the effort in my opinion, as this step is already quite quick at 2 seconds – though one could potentially shave off another second or so). The results of this parsing are then divided up into buckets of transferable ArrayBuffers (which are important whenever larger amounts of data are passed between web workers, as they avoid copying) and distributed among the rendering workers.
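
Schematically (again just a sketch with made-up names, not the actual code): the parsing worker packs its results into typed-array buckets and hands them back to the main thread with a transfer list, and the main thread forwards each bucket to one of the rendering workers, again transferring instead of copying.

// inside the (single) parsing worker
var buckets = splitIntoBuckets(parsedData, 4) // hypothetical helper, e.g. one Float64Array per worker
buckets.forEach(function(bucket, i) {
    postMessage({ worker: i, data: bucket.buffer }, [bucket.buffer]) // transferred, not copied
})

// on the main thread: forward each bucket to its rendering worker
parseWorker.onmessage = function(e) {
    renderWorkers[e.data.worker].postMessage(e.data, [e.data.data])
}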

That’s how deep I dared to explore this rabbit hole of code optimization this time. I hope you liked my adventure. ;)

Discussion

Comment from yvecai on 4 September 2016 at 20:35

Really impressive work!

About these “comet tails”, what about air traffic lanes?

Yves

Comment from Geonick on 4 September 2016 at 21:19

Cool visualization! You may also want to follow my Twitter bot “Trending Places” and look at its homepage. Some time ago I also wrote a blog post about early findings on OpenStreetMap standard map usage.

Stefan

Comment from baditaflorin on 6 September 2016 at 14:43

Nice work, and I love the technical part. More tech explanations :D But keep the same order: first the everyday explanation, and then the tech part.

Comment from imagico on 7 September 2016 at 08:41

The ‘comet tail’ effect - could it be that this has something to do with the order in which map frameworks request the tiles as you change the view?

Comment from joost schouppe on 7 September 2016 at 09:27

The comet tails: maybe a viewer popular in Russia that “flies” to your search query; similar to what Google Earth does.

Also: very very very cool.

Comment from tyr_asd on 7 September 2016 at 11:08

Now, you can click on the map to get the actual values and a preview of the respective tile.

Comment from d1g on 9 September 2016 at 20:02

Thank you tyr!

trolleway published a report using the NextGIS platform some time ago, but the Python/PostGIS scripts are available for anyone to use: https://github.com/trolleway/tile_logs_vis

comet tails

I don’t think that the “comet tails” are contributed by legitimate OSM users.

I haven’t seen an OSM service that uses smooth transitions on osm.org tiles…

If the user agent is not Google Earth, then I’d rather investigate the logs more seriously around such regions, but with user agents, times and IPs.

Comment from tyr_asd on 10 September 2016 at 10:34

Oh, I wasn’t aware of trolleway’s implementation. Thanks for pointing it out!

Comment from tyr_asd on 17 April 2017 at 07:40

PS: For those interested in the technical implementation details: recently I’ve switched out Vladimir Agafonkin’s rbush library for his kdbush implementation, which offers even faster indexing, shaving off another few seconds of startup time. Awesome stuff!
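
For reference, a minimal sketch of what the kdbush-based indexing could look like (variable names are illustrative and the exact usage in the app may differ):

// build a static index over the parsed points once …
var index = kdbush(points, function(p) { return p.x }, function(p) { return p.y })
// … and query it by bounding box when a tile is rendered;
// range() returns indices into the original points array
var hits = index.range(minX, minY, maxX, maxY).map(function(i) { return points[i] })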
