Moral Lab

A training facility for artificial intelligence (AI)

ethics of technology
challenge

What kind of ethical code would people like to see programmed into artificial intelligence and algorithms?

designers

  • Kornelia Dimitrova
  • Bernhard Lenger

  • partners

  • Dr. Bart Wernaart (Fontys Hogescholen)


    Algorithms and AI make our lives more convenient while at the same time influencing us in ways that often go unnoticed. Assistants such as Siri, Alexa or Google Assistant have brought the processing power of algorithms directly into our homes. But is there such a thing as ethical decision-making when it comes to AI, and if so, what should it look like? To what extent should AI influence the way we think, act and live? And what moral values should our personal helpers apply? The Moral Lab is a spatial research installation designed to collect data about the kind of ethical decision-making individuals would like to see programmed into artificial intelligence and algorithms.


    The MAIN dilemmas

    Upon entering the installation, participants encountered MAIN (Moral Artificial Intelligence Network), a character created to represent the collective of algorithms in the world. During a ten-minute private audio interaction, MAIN presented participants with five ethical dilemmas (in the domains of investment, education, recruitment, health advice and CCTV facial recognition) and asked them for their views. At the end of the session, MAIN summarised the lessons it had learned from the participant.


    Responses and results

    The relationship between artificial intelligence and human beings, and in particular the ethical implications of that relationship, was central to the research project. Visitors responded with fascination and eagerness, but also with disgust and fear. Reactions ranged from an enthusiastic ‘amazing that I am finally asked how I think machines should work’ to the more apprehensive ‘I don’t think machines should be able or allowed to make such decisions for humans.’

    Reach

    The Moral Lab was developed in collaboration with Dr. Bart Wernaart of Fontys Hogescholen, Eindhoven. During Dutch Design Week 2019, over 700 visitors participated in the research project. Furthermore, The Moral Lab and social designer Bernhard Lenger headlined the VPRO television program De Toekomstbouwers.