Artificial Intelligence · 2018-10-01

Can AI understand morality and ethics?

This article was first published on our sister site, The Internet Of All Things.

By: Richard van Hooijdonk

Artificial intelligence knows whether your smile is real or fake. It can predict your online behavior and sell that data to hungry marketers. One day, artificial intelligence could even be smarter than you – and this is just the tip of the iceberg. The AI market is set to reach $190 billion by 2025, and this technology is being applied almost everywhere. From Facebook’s newsfeed to Tesla’s vehicles, from courts to hospital rooms, people rely on AI to make important decisions. And as machines start to control our lives, it has become apparent that we did not really think this through.

What if one day humans are not the smartest species on Earth? What if Skynet becomes a reality, and no Terminator can save us? What if Elon Musk’s prediction that AI is “an immortal dictator” comes true? Super intelligent machines could, for example, take control of a nuclear launch system and rain missiles on cities. Or how about private, proprietary algorithms that could accidentally reflect existing social biases and deny jobs, education, and justice to people? One way to solve these problems is to make machines more like humans, and teach them ethics.

As Rosalind Picard, the director of the Affective Computing Group at MIT, says, “The greater the freedom of a machine, the more it will need moral standards.” Although this seems like a great idea, it creates more questions than answers. Ethics aren’t a simple line of code, but a complex system of values that even humans can’t fully agree on. So why should we teach AI this imperfect system, and what would the morality of artificial intelligence even look like?


Moral dilemmas

Experts agree that immoral AI is a far bigger threat to people than AI guided by human morality. An ethics-driven machine will act within the boundaries we set, but AI without any value system could make disastrous decisions. “If the car’s designers fail to specify a set of ethical values that could act as decision guides, the AI system may come up with a solution that causes more harm,” write Jane Zavalishina and Dr Vyacheslav Polonski.

This is true for AI in other sectors as well, but the question remains: how do we convey complex values in lines of code? And even if that’s figured out, we need to decide whose values we’ll install in these machines. People are guided by multiple, often competing moral systems, and they fundamentally disagree with one another about which is right. Which one should guide AI, and on which issues?

Another concern is that AI could be unintentionally corrupted by algorithms that amplify racial and gender biases. Scandals like the unintentional racism of Apple’s face recognition technology and Twitter’s bots illustrate this issue. And despite all these challenges, people aren’t allowed to explore and understand the inner workings of such algorithms. The proprietary rights of private companies seem to matter more than the protection of citizens. Yet rather than dwelling on these complicated questions, some scientists see the issue in simpler terms.

Asked how a driverless car should react if it has to choose between hitting two kids and an approaching motorbike, Jaguar’s Amy Rimmer says: “I don’t have to answer that question to pass a driving test… So why would we dictate that the car has to have an answer to these unlikely scenarios…?” But not all scientists share her opinion; instead, they’re working to find ways to teach ethics to AI.

For the rest of the article, click here.
