Posted by bo_hot on 16 December 2021 in English. Last updated on 17 December 2021.

Hi all,

Some of you may remember that earlier this year we conducted an experiment to compare traditional mapping with AI-assisted mapping. Below is our summary of findings, along with the full report for those who may be interested. We hope this experiment will be the start of a conversation about how we can ethically and responsibly introduce AI-augmented mapping workflows into HOT’s work in 2022.

Humanitarian OpenStreetMap Team: a comparison of traditional digitizing of building features in OpenStreetMap with machine learning-assisted building digitization

Key findings:

  • Although most participants were new to AI-assisted mapping, the majority of participants were open and likely to integrate it into their workflows.
  • For beginner mappers, AI-assisted mapping drastically increased their mapping speed, but had no significant effect on their quality.
  • For advanced mappers, after a small initial slow-down, AI-assisted mapping enabled more efficient mapping without impacting their quality.
  • Open models offer significant potential impact and value for humanitarian response
  • More data created through AI-assisted mapping may exacerbate the ‘validation bottleneck’

In the last 10 years, the use of AI/ML in the geospatial sector has boomed. Private sector, academic and nonprofit organizations alike have been investing significant thought, time and resources into exploring and testing the potential and possibility of how AI/ML can augment and amplify current GIS workflows.

Unfortunately for the open mapping community, a ‘go fast and break things’ approach has done exactly that, often coming at significant cost to the project and the community. As a result, open mapping communities are reluctant to allow unchecked AI/ML to roam free in the world that is OSM - created and crafted by countless hours of dedicated human mapping.

As the future approaches, so too does the intersection of mapmakers and AI-augmented mapmaking. With new AI models and datasets being generated daily, the pressure builds to find a middle ground where AI can assist, augment and amplify the dedicated map makers in an ethical and responsible way that protects the quality, integrity and value of the map. Our experiment set out to leverage collective intelligence to seek a point of convergence, rather than collision.

By understanding key concerns of the community and carefully integrating them into experiment design, we explored an agreed set of assumptions that could be objectively tested. Stakeholders, users, contributors, technologists and map makers came together with the joint intention of finding a path forward, collectively.

We learned that AI can assist and amplify the efforts of mappers to produce more map data. However, this comes with a condition: AI-assistance amplifies the speed of map data creation, but does not significantly improve data quality (nor does it worsen it).

Amplifying the efforts of an early-journey mapper who has yet to learn the importance of map data quality will obviously create more data, but at beginner levels of data quality. This increases the workload of human data validators, and as such AI assistance should be carefully integrated alongside data quality education.

For advanced mappers, we learned that new tools initially cost time, but not quality. Advanced mappers who have spent years refining their craft have to redirect well-formed pathways. However, advanced mappers understand the importance of data quality and therefore prioritise producing quality data even if it means taking more time. They are less likely to fall for the temptation of accepting lower-quality machine predictions for the sake of speed. For advanced mappers, mapping takes time and attention, two things they were not immediately willing to defer to the machine.

Through the creation of an open model for gap detection/completeness, 510, the data team of the Netherlands Red Cross, demonstrated that AI can be accessible to all, especially during times of disaster. This was practically demonstrated when an open model developed for this experiment was used in the response to a typhoon in the Philippines, predicting its impact on the local population.

Our acceptability survey showed us that people are open: open to trying out something new, open to adopting new ways of working, and open to experimenting, exploring and understanding how we can go slow and get it right. Together we benefited from collective and community intelligence, which lets our community test the perceptions and assumptions that have often been held but rarely examined, and carry the results forward. This allows all actors in the community to find an AI-assisted road forward, together.

For the full design, methodology and results you can read the full study report here >

The full NESTA Collective Intelligence Report can be found here >

A final appreciation to both NESTA for the grant to fund this study and to all the volunteers who participated in our study mapathons!

Location: 74560, Auvergne-Rhône-Alpes, Metropolitan France, 74560, France


Comment from rab on 16 December 2021 at 17:04

where can one review the map data that has been created?

Comment from bo_hot on 4 January 2022 at 10:16

@Rab, we had to shut down the OSM mirrors we were using to save costs, however, we will be spinning them back up in a month. When we do, we can share the links so you can access the data.



Comment from G1asshouse on 9 January 2022 at 15:40

As a user, the biggest frustration I have had with the AI-assisted mapping programs (projects? outputs?) is the inability (or difficulty) of reporting false detections, or detections based on old map data. I do not know whether this was an issue with your project. Another issue, though likely personality-dependent, is that I can begin to feel forced to map what the AI has identified instead of feeling free to map what I wish to map while being assisted by the AI. The second issue is far less important.

Comment from amapanda ᚛ᚐᚋᚐᚅᚇᚐ᚜ 🏳️‍🌈 on 5 February 2022 at 19:14

You compare people using regular iD to Facebook’s RapiD tool, and found that experienced mappers map slower with iD, and with less accuracy, whereas beginner mappers map faster & with better quality with RapiD.

Any reason you didn’t compare JOSM to RapiD?

Comment from bo_hot on 7 February 2022 at 16:30

Thanks for the question on JOSM vs RapiD.

For the findings, we found that overall, advanced mappers were faster and at about the same level of quality using RapiD when compared to iD, after some initial learning/adjustment.

For beginners, we found that they mapped much faster with RapiD, but with no significant impact on quality.

The main reason we didn’t use a RapiD/JOSM comparison was that most beginner mappers had not used JOSM before, which created barriers to completing the mapping tasks, and we really wanted to compare the effects of AI assistance on both advanced and beginner mappers.

However, you raise an interesting idea for future research: if a group wanted to focus only on advanced mappers, a comparison between traditional JOSM and JOSM with the MapWithAI plugin would make for an interesting exploration. Great suggestion!
