Time: 10:45 – 12:00
Artificial Intelligence, like all technology, enhances human powers to do good or bad. But unlike earlier technologies, this one affects the power at the very center of our self-conception: the ability to choose what to do. We want to make better choices, but we also want those choices to remain ours. Accordingly, we face two distinct ethical challenges. The first is to ensure that AI is not too human: that the judgments the machines enhance aren’t the ones that express our unreason and prejudice. The second is to make sure that AI is human enough: that we preserve the ability to understand why the machines are doing what they do, and remain in a position to exert some form of control. In this panel, we will expand on this tension and consider questions such as: “With what moral framework do people evaluate AI decisions?”, “How does, and how should, this differ from a normative ethic we attempt to apply?”, “How can we go from high-level normative principles to an ‘AI code of ethics’?”, and “How can we support the rigorous design of certified AI systems?”.
Moderator: Zoltán Szabó