Vincenzo Tiani has written an excellent summary in Wired Italia of the recently published “Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations”, by AI4People, a task force of European experts.
With the help of DeepL (also AI), you can find the English translation below:
The technology is just around the corner, but to avoid apocalyptic scenarios, a group of European experts has outlined the ethical principles to follow.
Europe is preparing for artificial intelligence (AI). And you can’t talk about it without addressing the ethical question. Since artificial intelligence will be able to make decisions independently of people, it is fundamental that ethical rules, in addition to regulatory ones, are shared by all, to lay the foundations for a positive and undistorted development that will last in the years to come. This point was confirmed a few days ago by Giovanni Buttarelli, the European Data Protection Supervisor.
For this reason, Atomium – European Institute for Science, Media and Democracy launched the AI4People project, a task force of experts from all over Europe who have published a document with a series of principles and recommendations to lay the foundations for a positive, inclusive artificial intelligence at the service of people. The work was led by the Italian Luciano Floridi, professor of philosophy at Oxford. The team of thirteen university professors includes three other Italians: Monica Beltrametti (Naver Corporation), Ugo Pagallo (University of Trento) and Francesca Rossi (IBM Research and University of Padua).
Four principles plus one
Before making recommendations for the development of a healthy AI, the experts identified a set of principles. The first four were borrowed from bioethics; the fifth is new.
The first is beneficence. Artificial intelligence must be developed in order to promote human dignity in a way that is sustainable for the planet.
The second is non-maleficence [“do no harm”]. An uncontrolled development of AI can also lead to negative effects, such as the violation of privacy and of human security itself. These effects are to be prevented both when they are deliberate (see the invasive control policies of some governments) and when they are accidental. Artificial intelligence itself “should work against the risks that may arise from technological development”.
The third principle is that of autonomy. People must have the right to make decisions about how they are treated. It is therefore appropriate to find a balance between decisions taken by people and those taken by machines. To give an example: “The pilot must always be able to disengage the autopilot”, and therefore humans must be able to override any AI decision.
The fourth is justice. “The development of AI should promote justice and aim to eliminate all kinds of discrimination.” This means eliminating past inequalities, but also not creating new ones and preventing the emergence of a two-speed world. The benefits of artificial intelligence must be shared as widely as possible.
The last principle, the new one, is that of explicability. Since few people are designing our future, it is important to understand how they are doing it and on the basis of what rules. What is behind the decisions of an algorithm? Who is responsible for those decisions? These processes must be known to everyone, not only to experts, to create the trust that will be the basis of the relationship between man and machine.
The paper concludes with twenty recommendations for those working on the development of AI, be they private individuals, governments or institutions. At the heart of the paper is the need to work to “ensure people’s trust in artificial intelligence, serve the interests of the community and strengthen shared social responsibility”, an effort that must continue over time.
First of all, it is important to “evaluate the capacity of the courts to remedy the errors made by AI systems” and to choose “which decisions cannot be delegated to the machine”. From the legislative point of view, it will be important to maintain a dialogue between law and ethics in order to create a regulatory framework that can anticipate the future development of the technology.
To address the problem of the transparency of algorithms, the courts themselves will need the tools to investigate these matters. In addition, mechanisms will be needed to identify AI biases, which can lead to unlawful inequalities in the treatment of people, as could happen in the insurance field. Finally, we need to set up a European observatory and a European agency to oversee the development of AI-based services and software.
To create the best possible solutions, institutions and companies will have to finance and encourage the development of technologies that serve the common good and respect the environment, taking into account the legal, social and ethical aspects and verifying, through surveys among citizens, what consequences could derive from their use.
Finally, the experts advise thinking about internal codes of conduct for professions, such as medicine and law, that will work with data and AI. At the same time, top management will have to start looking into the ethical aspects of their decisions. Anyone dealing with AI will have to assess more deeply the social, legal and ethical implications that its use involves, and this knowledge should be part of their curriculum.