
Rules for Robot/A.I. Mappers

Posted by ImreSamu on 6 October 2016 in English.

Background:

Robot mappers: machine-learning and artificial intelligence (“robot”) techniques; see http://mike.teczno.com/notes/openstreetmap-at-a-crossroads.html

Maybe in the future we will need some ethical guidelines, like:

  • “Robot mappers must be designed to assist humanity” meaning human autonomy needs to be respected.
  • “Robot mappers must be transparent” meaning that humans should know and be able to understand how they work.
  • “Robot mappers must maximize efficiencies without destroying the dignity of people”.
  • “Robot mappers must be designed for intelligent privacy” meaning that they earn trust by guarding people’s information.
  • “Robot mappers must have algorithmic accountability so that humans can undo unintended harm”.
  • “Robot mappers must guard against bias” so that they do not discriminate against people.

Based on Satya Nadella’s A.I. laws.

P.S. I added this to the [OSM Wiki Talk:Automated Edits code of conduct](https://wiki.openstreetmap.org/wiki/Talk:Automated_Edits_code_of_conduct#Rules_for_Robot.2FA.I._Mappers).

Robot mappers?

Discussion

Comment from SomeoneElse on 6 October 2016 at 12:14

I suspect we’re some way away from worrying about “maximize efficiencies without destroying the dignity of people”. :)

The only “robot mapping” I’ve seen in OSM so far could easily have been labelled “rubbish mapping” - it seems to involve “try and do some basic edge detection and assume that whatever has been found is a road”.

A better approach would simply be to ask all mappers - robot, human, feline, canine, whatever - to “be honest”: if you’re a robot, say so, and say who your “handler” is (just like with import accounts). If you’re mapping on behalf of a corporation or an organisation, say that too.
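In OSM practice, that kind of honesty mostly lives in changeset tags. Here is a minimal sketch of what a self-declaring bot edit could look like against the OSM API 0.6; the bot name, handler account, and credentials are made up, and `bot=yes` is the changeset tag the Automated Edits code of conduct asks bots to set:

```python
import requests

# Minimal sketch: open an OSM API 0.6 changeset that declares itself as a
# bot edit and names its human handler. The bot name, handler account, and
# credentials below are placeholders.
API = "https://api.openstreetmap.org/api/0.6"

changeset_xml = """<osm>
  <changeset>
    <tag k="created_by" v="example-road-bot 0.1"/>
    <tag k="bot" v="yes"/>
    <tag k="comment" v="Automated edit: example road import (see wiki page)"/>
    <tag k="operator" v="handler: example_human_account"/>
  </changeset>
</osm>"""

resp = requests.put(
    f"{API}/changeset/create",
    data=changeset_xml,
    headers={"Content-Type": "text/xml"},
    auth=("example-road-bot", "secret"),  # placeholder; newer API deployments expect OAuth
)
resp.raise_for_status()
changeset_id = resp.text.strip()  # the API answers with the new changeset id
print(f"Opened changeset {changeset_id} as a declared bot")
```

Anyone browsing the edit history can then see at a glance that the changeset was automated and who is accountable for it.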

Comment from BushmanK on 6 October 2016 at 22:30

The second rule doesn’t actually fit any method based on neural networks, simply because neural networks are trained on certain datasets and can only make decisions based on that training, yet nobody can explain exactly which criteria they use to make those decisions. So, in the case of OSM, only one thing is really required and important: the ability to revert any specific changes if the result of those changes seems harmful to data integrity.
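That revertability requirement is easy to make concrete: the OSM API 0.6 publishes each changeset’s osmChange document, which records every create, modify, and delete, and is the raw material any revert tool works from. A minimal sketch (the changeset id is a placeholder):

```python
import requests
import xml.etree.ElementTree as ET

# Minimal sketch of "keep it revertable": download a changeset's osmChange
# document and list every object it touched - the starting point for any
# revert tool. The changeset id used below is made up.
API = "https://api.openstreetmap.org/api/0.6"

def affected_objects(changeset_id):
    """Yield (action, element type, id, version) for each edited object."""
    resp = requests.get(f"{API}/changeset/{changeset_id}/download")
    resp.raise_for_status()
    root = ET.fromstring(resp.content)   # <osmChange> root
    for action in root:                  # <create>, <modify> or <delete>
        for element in action:           # <node>, <way> or <relation>
            yield action.tag, element.tag, element.get("id"), element.get("version")

for record in affected_objects(42):      # 42: placeholder changeset id
    print(record)
```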
