
III. Ethical and legal issues related to AI

1. Origins of the laws of robotics

Very quickly, humans came to see robots as a potential danger. The hypothesis of a robot rebelling against humans was raised by Stanley Kubrick in his 1968 film 2001: A Space Odyssey, but at the time this was just science fiction.

Isaac Asimov formulated the three laws of robotics in order to calm the fears of the ordinary person towards the "machine". He presented them for the first time in his 1942 short story Runaround (published in French as Cercle vicieux):

First law A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second law A robot must obey the orders given by human beings, except where such orders would conflict with the first law.
Third law A robot must protect its own existence as long as such protection does not conflict with the first or second law.

These three laws were later featured in Alex Proyas's 2004 film I, Robot.

Another example of robots becoming autonomous and rebelling against humans appears in Lars Lundström's 2012 TV show Real Humans.

The television series Person of Interest also evokes the dangers of a machine that becomes autonomous.

Thus, although they come from science fiction, these three laws are still recognized by the scientific community.
Asimov was well aware of the limits of his three laws, all the more so because he had designed them so that his stories could exploit their weaknesses. Quite quickly, as early as 1950, he pointed out certain limits, especially concerning the first law, which protects the human as an individual and not as a species. Logically, it would be better to sacrifice an individual if doing so could save a larger group, if not the whole of humanity. In spite of this early realization, it was not until 1985 that he incorporated this idea by modifying the laws. He added to the three laws a zeroth law, which placed the interest of humanity above that of a single individual:

Law zero A robot may not harm humanity or, by inaction, allow humanity to come to harm.

If Asimov was proud of his laws of robotics, it was because for him they represented a moral and deontological code, established not only for robots and their creators, but also, and especially, for all scientists.

2. The legislative aspect

a) Need for ethical reflection and legislation

Because of the risks mentioned above, it is important to legislate with respect to AI.

Military robotics is booming: armed drones are becoming more widespread and ever more autonomous. The three laws of robotics are there to remind scientists of their responsibilities. Killer military robots cannot, by definition, conform to the first law. In this context, ethical reflection is necessary to prevent robotics from developing blindly. In this sense, Asimov's three laws remain relevant.

Every day new innovations are born, raising new questions. This is the case with the increasing automation of aircraft cockpits.

"Transforming the plane into a drone and bringing it back to the ground in case of both pilots' inability (terrorist act, for example) is technically imaginable, but poses many problems. Will we be sure that it is the airline that takes control from the ground? Which airport does it land on? Is it possible in all regions of the world? "



The European Parliament has drawn up a report to address the development of AI.
According to this report, automation will have a largely positive impact on society, but this impact needs to be monitored, particularly with regard to the evolution of the labor market and the distribution of wealth. Military uses must also be watched, particularly those of nations outside the European Union. The report raised the need to put a global agreement in place as soon as possible.

It is necessary to standardize and regulate not only to avoid possible excesses and prepare the ground for the future, but also to enable the development of these technologies today. The autonomous vehicle is the perfect example. "We cannot ask manufacturers to have 28 procedures and 28 versions of their software, one for each country in the Union, with the car restarting at the border," said Roberto Viola, Director General of DG Connect.

b) The outline of the report of the European Parliament

The question of civil and legal liability in the event of an accident:

- The idea of a legal status of "electronic person" is an avenue for possible future systems with highly developed autonomy and learning capacity.
- A robot or an AI remains a tool. The responsibility lies with the manufacturer and the person who uses it.

At present, between 80% and 93% of road accidents are due to human error. Automation is therefore highly desirable, but it will imply increased manufacturer responsibility, especially regarding software.

The impact on employment:

Job destruction is to be expected, but it should be offset by jobs created in return. There may, however, be an imbalance between destruction and creation, and a very unequal distribution of wealth.
A tax on robots was proposed to offset job losses.
