
It’s Time to Give Autonomous Cars an Ethics Lesson

Learn how to train self-driving vehicles to cope with situations like unavoidable accidents and bullying

Published: August 2, 2018 · Updated: August 21, 2023

Imagine an autonomous car driving down a busy city street. Suddenly, it spots children crossing the road. There’s oncoming traffic to the left, a building to the right and a truck driving just behind.

What should the autonomous vehicle do? Keep going and hit the kids? Or save them by turning into a wall, even if that means hurting the passengers? This mental exercise illustrates the ethical dilemma of self-driving cars. How should they behave in unavoidable crash scenarios? Whom should they save first? And who will be responsible for the damage?

Intellias has put together an overview of autonomous vehicle ethics. Read on to find out how a self-driving car learns ethical principles and what the ethical issues of driverless cars are.


Ethics of autonomous vehicles helps the computer make more accurate decisions

Although self-driving vehicles are generally safer than vehicles driven by humans, there’s still a small risk of them getting into accidents. In fact, researchers say that even advanced autonomous vehicles will get into unavoidable accidents. Today, an autonomous car tries to hand control back to the driver if something goes wrong. But that rarely works in practice, since a human can’t react in the short time before an unavoidable crash. So the autonomous vehicle has to take the lead and act on its own, and it doesn’t act the way humans do.

In stressful situations when a person would panic and react intuitively, an autonomous car can react rationally. It can see possible paths, calculate outcomes, and then select the best one based on all that analysis. An autonomous vehicle can predict crash trajectory alternatives, for example, and choose the one with the least damage. But if there’s no evident best path, how will self-driving vehicles know which path to pick?
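To make this concrete, here’s a minimal sketch in Python of the “evaluate every path, pick the least damaging one” idea. The paths and damage scores are invented for illustration; a real planner would derive them from perception data and physics models.

```python
# A minimal sketch, assuming the planner has already produced candidate paths
# and an abstract damage score for each (higher is worse). Both the paths and
# the scores are hypothetical placeholders.

candidate_paths = {
    "continue straight": 80.0,
    "brake hard in lane": 35.0,
    "swerve toward the shoulder": 20.0,
}

# Pick the path with the lowest damage score.
best_path = min(candidate_paths, key=candidate_paths.get)
print(f"Selected: {best_path} (damage score {candidate_paths[best_path]})")
```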

The decision-making process of autonomous vehicles is complex and includes analyzing data from various sources.

Decision-making process in autonomous cars


Here’s where autonomous vehicle ethics helps. Rules of ethics allow self-driving cars to make better decisions on the road. Thanks to machine ethics, autonomous vehicles can become aware of some moral principles. This means they can judge how to behave in different situations. For instance, the ethics behind self-driving cars can allow them to break the law if it means saving somebody’s life.
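As a toy illustration of how “saving a life can outrank obeying the law” might look in code, here’s a hedged sketch of lexicographically ordered rules. The rule names and scenario fields are hypothetical, not part of any real vehicle’s software.

```python
# Violations listed from most to least severe; a lower rank is more severe.
SEVERITY = {"endangers_life": 0, "breaks_traffic_law": 1}
NO_VIOLATION = 2

def worst_violation(action: dict) -> int:
    """Severity rank of the most serious rule this action violates."""
    ranks = [rank for name, rank in SEVERITY.items() if action.get(name)]
    return min(ranks, default=NO_VIOLATION)

def choose(actions: dict) -> str:
    """Pick the action whose worst violation is the least severe."""
    return max(actions, key=lambda name: worst_violation(actions[name]))

options = {
    "stay in lane": {"endangers_life": True},
    "swerve across a solid line": {"breaks_traffic_law": True},
}
print(choose(options))  # -> "swerve across a solid line"
```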

Learn how Intellias developed data-driven protocols for advanced driver assistance systems


With autonomous vehicle ethics come social and ethical dilemmas

In the era of the digital revolution and self-learning systems, human-machine interaction raises new ethical questions. Automated and connected driving is the most recent innovation where this interaction is to be found across the board.

Alexander Dobrindt, Federal Minister of Transport and Digital Infrastructure of Germany

For a car to make decisions on the road, the objects around it should be assigned values. Take a traffic cone, for example. It’s clearly less significant than a human being, which means that a car should hit a traffic cone instead of a pedestrian. But here’s the scary part: What about values among humans? Are the lives of the car’s passengers as important as the lives of other drivers? Whom will the car choose to save in a crash? Pedestrians or passengers?

Different scenarios of a self-driving car’s behavior on the road


a. A car chooses to save many pedestrians over one.

b. A car potentially kills its passengers but saves a pedestrian.

c. A car saves a lot of pedestrians by hitting the wall and potentially killing its passengers.
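To show what “assigning values to objects” could look like in code, here’s a hedged sketch built around scenario (a) from the figure. The numeric weights are placeholders; choosing them is exactly the ethical dilemma this article is about.

```python
# Hypothetical value table; the numbers are invented for illustration only.
OBJECT_VALUE = {
    "traffic_cone": 1,
    "parked_car": 50,
    "pedestrian": 10_000,
    "passenger": 10_000,  # equal to a pedestrian here; that is itself a policy choice
}

def harm(objects_hit: list[str]) -> int:
    """Total value of everything a given maneuver would hit."""
    return sum(OBJECT_VALUE[o] for o in objects_hit)

# Scenario (a): swerving hits one pedestrian, continuing hits five.
maneuvers = {
    "swerve": ["pedestrian"],
    "continue straight": ["pedestrian"] * 5,
}
least_harmful = min(maneuvers, key=lambda m: harm(maneuvers[m]))
print(least_harmful)  # -> "swerve"
```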

No doubt it’s possible to assign different values to people too. For instance, a car can prefer to save its own passengers’ lives over others’. In fact, this is what Mercedes-Benz initially said its self-driving vehicles would do, before walking the statement back in the face of public backlash.

Alternatively, a self-driving car can aim to save the young over the elderly. Either way, a vehicle sacrifices a certain group of people while saving another. And this sounds a lot like discrimination. Applied carelessly, a vehicle’s moral system only multiplies the ethical dilemmas in self-driving cars, which can lead to low public trust and increasing safety issues.

A car may make one decision, and that decision might lower the total number of deaths on the road, but it might lower those primarily [for] people aged 18–65 but actually increase the number of deaths that we have for 65-year-olds.

Nicholas Evans, Assistant Professor of Philosophy at University of Massachusetts Lowell

The main ethical dilemma of self-driving cars: it’s just another line of code

No matter how complicated a crash is, self-driving vehicles will have to decide which way to go and whom to save. The problem is that autonomous cars won’t do this entirely on their own. Basically, it’s the developers who create the vehicle’s moral system. A self-driving car’s behavior in a crash depends on its ethical crashing algorithms. In an accident, a vehicle behaves as it’s programmed to.

At this point it’s clear that a self-driving vehicle’s dependence on lines of code may put customers off. Honestly, how can people trust autonomous cars whose logic is hidden? And with an autonomous car’s behavior depending on its software, the problem of responsibility arises. Who will be to blame for a vehicle’s bad decision? Who will take responsibility if a car performing crash optimization kills someone?

But don’t worry. The good news is that autonomous cars of the future will probably have either mandatory or personal ethics settings. In other words, consumers will be able to choose their own ethical preferences for their cars. This could also help solve the responsibility problem: by selecting personal settings, the owner assumes responsibility for the car’s behavior. Presumably, drivers who choose a selfish ethics algorithm will bear more blame than those who go with the manufacturer’s default settings.
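Here’s a speculative sketch of what such a personal ethics setting might look like as configuration. The profile names and weights are hypothetical; no production vehicle exposes a setting like this today.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EthicsProfile:
    name: str
    passenger_weight: float   # how strongly the car protects its occupants
    pedestrian_weight: float  # versus people outside the vehicle

# Hypothetical profiles an owner might choose from.
PROFILES = {
    "manufacturer_default": EthicsProfile("manufacturer_default", 1.0, 1.0),
    "self_protective": EthicsProfile("self_protective", 2.0, 1.0),
}

# The owner's selection is stored with the vehicle, which also records
# who accepted responsibility for the resulting behavior.
owner_choice = PROFILES["self_protective"]
print(f"Active ethics profile: {owner_choice.name}")
```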

Find out how Intellias developed over-the-air map updates for connected cars


The golden rule for autonomous car ethics doesn’t exist


So far, there are no unified ethical standards or certifications for autonomous cars. The large-scale Moral Machine study conducted by MIT showed how hard it is to identify universal ethical values: the moral choices participants made varied widely, even within individual regions. That’s why it’s hard to create a universal ethics of self-driving cars that won’t be controversial.

In the search for universal autonomous car ethics, one study has shown that people support the idea of utilitarian autonomous cars, meaning vehicles that aim to save as many lives as possible. Yet the majority of participants want others to drive autonomous cars and don’t want to own one themselves. The reason is obvious: saving as many lives as possible also means sacrificing the driver’s life if needed. And people want to feel absolutely safe when riding in an autonomous car.

In June 2017, the German Ministry of Transport and Digital Infrastructure developed the world’s first ethical guidelines for autonomous vehicles. According to these guidelines, the protection of human life comes first. And though the guidelines say decisions can’t be standardized, they justify programming that reduces the number of personal injuries.

Moreover, the guidelines say that a car can’t distinguish among individuals based on their age, gender, or physical or mental constitution. Beyond that, the guidelines state that autonomous vehicle technology should prevent accidents before they happen, which means excluding all problematic situations on the road beforehand, including those that cause dilemmas.
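One way to honor that non-discrimination rule in software is to strip protected attributes before the decision logic ever sees them. The feature names below are hypothetical; this is only a sketch of the principle, not a real perception pipeline.

```python
# Attributes the guidelines forbid the decision logic to use.
PROTECTED = {"age", "gender", "physical_constitution", "mental_constitution"}

def sanitized(person_features: dict) -> dict:
    """Drop forbidden attributes before any harm estimate is computed."""
    return {k: v for k, v in person_features.items() if k not in PROTECTED}

detected = {"age": 8, "gender": "f", "distance_m": 12.0, "is_human": True}
print(sanitized(detected))  # -> {'distance_m': 12.0, 'is_human': True}
```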

The sooner driverless cars are adopted, the more lives will be saved. But taking seriously the psychological as well as technological challenges of autonomous vehicles will be necessary in freeing us from the tedious, wasteful and dangerous system of driving that we have put up with for more than a century.

Iyad Rahwan, Azim Shariff, and Jean-François Bonnefon, researchers in the field of autonomous car ethics

How can ethical principles turn into software algorithms for autonomous cars?

Some might wonder whether it’s possible to code ethical behavior into autonomous cars. It is. To create a vehicle’s ethical crashing algorithm, software developers can use a hybrid approach that combines the strengths of rule-based (rational), AI, and natural-language approaches. Put simply, the vehicle is given a set of rules for how to behave in various situations, or a goal it should achieve.

Alternative trajectories for an autonomous car


a. Severe one-vehicle crash

b. Moderate two-vehicle crash

c. Low probability: no crash; high probability: severe two-vehicle crash
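One way to compare such alternatives is by expected severity: the probability of each outcome times its severity, summed over all outcomes. Here’s a worked sketch with invented numbers standing in for the figure’s three trajectories.

```python
# Each trajectory maps to a list of (probability, severity) outcomes.
# All probabilities and severity scores are invented for illustration.
trajectories = {
    "a: severe one-vehicle crash": [(1.0, 90)],    # certain, severe
    "b: moderate two-vehicle crash": [(1.0, 50)],  # certain, moderate
    "c: gamble": [(0.2, 0), (0.8, 95)],  # small chance of no crash at all,
                                         # large chance of a severe crash
}

def expected_severity(outcomes: list) -> float:
    """Sum of probability * severity over all possible outcomes."""
    return sum(p * severity for p, severity in outcomes)

for name, outcomes in trajectories.items():
    print(f"{name}: expected severity {expected_severity(outcomes):.1f}")
# b (50.0) beats c (76.0) and a (90.0) here, even though only c
# offers any chance of avoiding a crash entirely.
```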

For instance, the vehicle can be told to try to save children first. Ideally, such a rational system would be created by lawyers, sociologists, and philosophers as well as software engineers. The problem is that a purely rational system won’t get an autonomous vehicle very far: it will run into trouble as soon as it encounters an unexpected situation that’s not on the list. And since software developers can’t account for every possible road accident, autonomous vehicle ethics needs to be more than a set of strict rules.

This is where AI for autonomous vehicles can help. Artificial intelligence methods allow self-driving cars to act beyond their programmed guidelines. Thanks to machine learning techniques, cars will be able to make the right decisions on the road even in unexpected situations. AI systems will be able to train through simulated crash scenarios and observation of human actions.

AI systems can also learn from feedback on their ethical responses. Though only narrow (weak) AI is currently on the market, machine learning methods are already being applied successfully in autonomous cars. Deep learning in particular is on the rise.

Combined with a rules-based system, machine learning techniques will help autonomous cars to choose the most ethical path. Beyond that, self-driving vehicles should learn how to warn drivers of the choice they’re going to make. Autonomous cars should be able to explain what they’re doing using natural language.
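Putting the pieces together, here’s a hedged sketch of that hybrid approach: explicit rules for known situations, a learned model as a fallback, and a plain-language explanation of the choice. The rule table and the stand-in “model” are hypothetical placeholders, not a real ML system.

```python
# Explicit rules for situations the developers anticipated.
RULES = {
    "child_on_road": "brake hard",
    "obstacle_in_lane": "change lane",
}

def learned_policy(situation: str) -> str:
    """Stand-in for a trained model that generalizes beyond the rule table."""
    return "slow down and increase following distance"

def decide_and_explain(situation: str) -> str:
    """Choose an action, then state the choice in natural language."""
    if situation in RULES:
        action, source = RULES[situation], "an explicit rule"
    else:
        action, source = learned_policy(situation), "the learned model"
    return f"I will {action} because of '{situation}' (decided by {source})."

print(decide_and_explain("child_on_road"))
print(decide_and_explain("debris_after_storm"))  # not in the rule table
```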

Self-driving vehicles can become victims themselves


According to a survey by Goodyear, 41% of participants say they would feel uncomfortable driving alongside an autonomous car.

Research has also found that people worry that autonomous vehicles will lack common sense on the road. Some people even say they will take advantage of self-driving cars and “bully” them.

Sadly, the bullying threat is real. Fearing aggressive behavior toward autonomous vehicles, Volvo has announced that it will not mark its driverless cars, precisely to protect them. Mercedes-Benz has also warned about bullies and is considering making its autonomous cars drive more aggressively.

But here’s the problem: some self-driving cars may not interpret an attack against them correctly. Unlike human drivers, autonomous vehicles usually lack context and an understanding of social norms. So it’s important to prepare them for all kinds of complicated situations on the road, not just for accidents. Luckily, machine ethics helps cars exhibit sensible behavior in such situations.


Modern self-driving cars have many safety features that keep passengers safe, but these aren’t enough. A vehicle has to be prepared for all kinds of road situations. It has to know how to behave in an unavoidable accident; in other words, it has to handle crash optimization, not just crash avoidance. It should also be able to act in unpredictable situations like conflicts or bullying. Luckily, the ethics of autonomous vehicles covers all these cases, helping self-driving vehicles behave wisely on the road.


No doubt every autonomous vehicle needs an ethics lesson. Contact us if you would like our AI and machine learning experts to create an ethical system for your self-driving car.
