“What should the self-driving car do?” The MIT Media Lab’s Moral Machine website poses this question in one of a series of games inviting players to make an ethical choice between two suggested scenarios.
Click right and the driverless car with brake failure rams straight into a pedestrian, killing her. Click left and the car swerves and hits a concrete barrier, killing the girl in the car.
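The dilemma can be sketched in a few lines of code. This is purely illustrative (the class, data and `naive_policy` function are invented for this article, not taken from any real vehicle or from MIT's system), but it shows why the choice is hard: a crude rule like "minimise deaths" cannot distinguish the two options at all.

```python
from dataclasses import dataclass, field

@dataclass
class Outcome:
    action: str                       # what the car does
    fatalities: list = field(default_factory=list)  # who dies as a result

# The two choices offered by the scenario above
dilemma = [
    Outcome(action="continue straight", fatalities=["pedestrian"]),
    Outcome(action="swerve into barrier", fatalities=["passenger"]),
]

def naive_policy(options):
    """A deliberately simplistic rule: pick the outcome with the fewest
    deaths. Both options here kill exactly one person, so the rule
    resolves the tie arbitrarily (min() keeps the first option) --
    which is precisely the point of the thought experiment."""
    return min(options, key=lambda o: len(o.fatalities))

print(naive_policy(dilemma).action)  # "continue straight", by tie-break only
```

Any real policy would need more than a body count: it would have to encode whose life counts for what, which is exactly the ethical question the rest of this article turns on.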
It’s a grimly absorbing way of passing the time, yet such thought experiments have serious implications for almost everybody living in developed societies over the next decade as Artificial Intelligence (AI) takes an increasingly important role in our everyday lives. Robotic intelligence will be entrusted with ever more complex tasks in the years to come, from driving us to work or selling us insurance to teaching kids algebra and developing cures for cancer.
Smarter than humans
Machines are becoming so smart so fast that many researchers predict they will reach human-level intelligence – called artificial general intelligence (AGI) – by the 2030s. Within our lifetimes, they may become smarter than us, achieving what is known as artificial superintelligence – or the singularity. Such a trajectory holds breathtaking possibilities for benefiting humankind – potentially even eradicating poverty and disease – yet also opens a Pandora’s box of philosophical, moral and legal dilemmas.
How should AI be programmed to make correct moral judgements in life-or-death situations? How can racial, gender and cultural biases be prevented from creeping into AI systems? And a question from science fiction is today being asked in dead earnest: how do we know that machines won’t turn against us once they’re smarter than we are? The physicist Stephen Hawking has warned that artificial intelligence “could spell the end of the human race”.
Amid such concerns, another question arises: who is keeping an official eye on AI? The short answer is pretty much nobody. “So far,” says the World Economic Forum in a report, “AI development has occurred in the absence of almost any regulatory environment.” Governments apply standards for the end-products of AI technologies – requiring autonomous vehicles to meet the safety standards applied to conventional cars, for example – but have virtually no rules in place regulating the development of the AI itself.
There are isolated cases of official AI commissions – such as a German body that last year produced ethical guidelines for driverless vehicles – but we have yet to see an overarching national AI regulatory framework, let alone an international one.
Setting ethical standards
Amid both a lack of official oversight and growing recognition of the need for AI policymaking, tech giants and academic institutions have stepped into the breach – taking the first steps towards charting ethics and safety standards for robotic intelligence. Microsoft, Google, IBM, Facebook and Amazon joined forces with universities, non-governmental organisations and other corporations to launch the Partnership on Artificial Intelligence to Benefit People and Society in 2016 with the goal of developing “best practices in the research, development, testing and fielding of AI technologies”. The Future of Life Institute, an organisation devoted to protecting humanity from existential threats, has won Hawking’s backing for its “Asilomar AI Principles” – a set of guidelines for research to ensure AI is beneficial to humanity.
In 2017, the non-profit Knight Foundation spearheaded a $27 million Ethics and Governance of Artificial Intelligence Fund, tapping the MIT Media Lab and Harvard University’s Berkman Klein Center for Internet & Society to lead the initiative. Months later, Google’s AI unit DeepMind established the DeepMind Ethics & Society research unit, composed of employees and outside fellows such as philosophy professor Nick Bostrom – founding director of Oxford University’s Future of Humanity Institute – with the aim of developing ethical solutions to the societal impacts of AI technologies.
Such initiatives raise the obvious question of whether we can trust profit-seeking (and famously secretive) corporations such as Facebook and Google to police technologies under their own development. Beyond this lie deeper questions about the very feasibility of establishing universal ethical standards for AI. Autonomous systems in charge of split-second decisions involving human life – such as who lives and dies in a car crash – need to be programmed to carry out decisions based on human ethical values. But whose values are we speaking of? Across cultures, religions and even communities there are varying perceptions of moral imperatives and hierarchy.
MIT’s Moral Machine seeks to create a “crowd-sourced picture of human opinion on how machines should make decisions when faced with moral dilemmas”. Yet it is far from clear that such an approach can lead to globally acceptable or even philosophically cogent norms. Feedback will often be governed by emotional responses, cognitive biases and cultural inputs – all elements that researchers agree should not go into the decision-making processes of AI systems.
Life under robot law
Moreover, individual humans are themselves a bundle of moral contradictions, with ethical compromise the daily norm rather than the exception. Yet the “fuzziness” that is part and parcel of human interaction is simply not an option in establishing AI ethical standards. And therein lies the rub. We face the paradox that robots must be held to higher standards than humans are. But in the task of creating moral machines it is we – imperfect and contradictory humans – who face the urgent responsibility of teaching robots a set of universally sanctioned moral values that do not yet exist.
And suppose that day eventually comes. Are we certain that we will be willing to shed our moral fuzziness in favour of the absolute moral judgements of ethically pure robots – smart enough to monitor every breath we take? Perhaps we should consider not only whether machines may one day turn evil, but also whether they may become too good. Will life under the benign gaze of super-smart angelic robots not become a little stifling? Are we willing to abandon the delicious pleasure of some level of transgression?