The conscience of artificial intelligence
Who is liable when an automated vehicle causes an accident? The question shot through the mind of Professor Christoph Lütge as he drove a highly automated car on the A9 from Munich to Ingolstadt for the first time in 2016, and he couldn't stop thinking about it.
The 49-year-old professor of business ethics at the Technical University of Munich (TUM) has spent the past nine years researching how competition promotes corporate social and ethical responsibility. Before his test drive on the A9, he had only a casual acquaintance with artificial intelligence (AI). It quickly became clear to him that AI raises a number of ethical questions: who is liable if something goes wrong? "We must face up to these challenges, whether AI is used for medical diagnosis, fighting crime, or driving cars," says Lütge. "In other words: we need to address the ethical issues surrounding artificial intelligence."
Ethics of autonomous driving
Autonomous driving is a particularly topical and difficult field, because it very quickly reaches the point where human lives are at stake. What should the algorithm do, for example, if the brakes fail and the fully loaded vehicle can either collide with a concrete barrier or drive into a group of pedestrians? "These are typical dilemmas that are explored by social scientists," says Lütge. A data recorder in the vehicle will indicate whether the autonomous driving functions were switched on at the time of a crash, which in turn raises questions of data protection. Despite all the challenges, the scientist is convinced that autonomous vehicles will make traffic safer. They will be better than humans: beyond never getting tired or losing focus, their sensors perceive more of the environment, and they can react more appropriately, braking harder and evading obstacles more skilfully. Even in normal road traffic situations, they will ultimately outperform people.
Lots of AI competence at the Munich location
So there are many exciting questions for Lütge to chew on at his new Institute for Ethics in Artificial Intelligence at TUM, which is, after all, one of the first research institutes to get started in this field, and Facebook is naturally interested in its scientific results. Skeptics of AI will also find a hearing at the new institute: "We want to bring together all the important players to jointly develop ethical guidelines for specific AI applications. The prerequisite for this is for representatives from the worlds of business, politics, and civil society to engage in dialog with each other," says Lütge.
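The brake-failure dilemma described above can be made concrete with a toy decision rule. This is purely an illustrative sketch, not anything proposed in the article: the expected-harm weighting, the probabilities, the severity scores, and the action names are all hypothetical assumptions.

```python
# Illustrative sketch only. A naive "minimize expected harm" rule:
# each candidate action is scored as probability_of_harm * severity,
# and the lowest-scoring action is chosen. All values are made up.

def choose_action(options):
    """Return the action name with the minimal expected-harm score.

    options: dict mapping action name -> (probability_of_harm, severity).
    """
    return min(options, key=lambda a: options[a][0] * options[a][1])

# Hypothetical brake-failure scenario:
scenario = {
    "hit_concrete_barrier": (0.9, 3),     # likely harm to occupants
    "steer_into_pedestrians": (0.95, 10), # severe harm to bystanders
    "emergency_swerve": (0.5, 4),         # uncertain outcome
}

print(choose_action(scenario))  # -> emergency_swerve
```

The sketch also shows why such dilemmas belong to ethicists rather than engineers alone: reducing each outcome to a single harm number is exactly the kind of value judgment that is ethically contested.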
For his research on ethics in AI, the scientist wants to investigate the ethical salience of the new algorithms: "Technicians can program anything," says Lütge. "But when it comes to predicting the consequences of software decisions, you need the input of social scientists." That is why he wants to form interdisciplinary teams, with each tandem consisting of one employee from the technical sciences and one representative from the humanities, law, or social sciences.
EU High-Level Expert Group: Ethics Guidelines for Trustworthy AI (April 2019)
In setting forth its ethics guidelines, the EU's High-Level Expert Group on Artificial Intelligence aimed to create a framework for achieving trustworthy AI. The framework addresses the concerns and fears of members of the public and is intended to serve as a basis for promoting the competitiveness of the EU across the board.
1. Respect for human autonomy
AI systems should not unjustifiably subordinate, coerce, or deceive humans.
Feb 07, 2020 at 10:06