  • Essay / The ethical issues of artificial intelligence

AI has captured the fascination of society since the Ancient Greeks; Greek mythology depicts an automated human-like machine named Talos defending the island of Crete. However, the ethical questions raised by artificial intelligence only began to be seriously addressed in the 1940s, with the release of Isaac Asimov's short story "Runaround." In it, the main character recites the "Three Laws of Robotics":

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These rules are rather ambiguous. B. Hibbard, in his article "Ethical Artificial Intelligence", presents a situation that conflicts with them: "an AI police officer watching a hitman point a gun at a victim" may have to shoot the hitman to save the victim's life, which violates the First Law stated above. A framework for defining how such an artificial intelligence should behave ethically (and even make certain moral improvements) is therefore necessary. The other factors discussed in this essay (mainly with the help of "The Ethics of Artificial Intelligence" by N. Bostrom and E. Yudkowsky) are transparency to inspection and the predictability of artificial intelligence.

Transparency to inspection

Engineers must, when developing artificial intelligence, make it transparent to inspection. For an artificial intelligence to be transparent to inspection, a programmer must be able to understand, at least in broad terms, how its algorithm decides on its actions. Bostrom and Yudkowsky's article gives an example of why this matters, using a machine that recommends approval of mortgage applications: if the machine discriminated against applicants of a certain type and was not transparent to inspection, there would be no way of knowing why or how it did so.

Furthermore, A. Theodorou et al., in the paper "Why is my robot behaving this way?", identify three needs that call for transparency to inspection: allowing an assessment of reliability; reporting unexpected behaviour; and exposing decision making. The paper goes further by describing what a transparent system should specify, including its type, its purpose, and the people who use it, while emphasising that for different roles and users the system should present information about its decision making in a form those users can read. Although the paper does not address artificial intelligence as a separate topic, the principles of a transparent system transfer readily to engineers developing AI.

Therefore, when developing new technologies such as AI and machine learning, the engineers and programmers involved should not lose sight of why and how the AI makes its decisions, and should strive to give the AI a framework that protects, or at least informs, the user when unexpected behaviour occurs.
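To make the mortgage example concrete, here is a minimal sketch (in Python, with entirely hypothetical feature names, weights, and thresholds) of a recommendation that is transparent to inspection: alongside its decision, it records the reasons behind that decision, so a reviewer could spot, for instance, a factor that proxies for a protected attribute.

```python
# Minimal sketch of a decision that is "transparent to inspection".
# Feature names, weights and thresholds are hypothetical, purely for illustration.

def approve_mortgage(applicant: dict) -> tuple[bool, list[str]]:
    """Recommend approval and return the reasons behind the recommendation."""
    reasons = []
    score = 0

    if applicant["credit_score"] >= 650:
        score += 2
        reasons.append("credit_score >= 650 (+2)")
    else:
        reasons.append("credit_score < 650 (+0)")

    if applicant["debt_to_income"] <= 0.35:
        score += 1
        reasons.append("debt_to_income <= 0.35 (+1)")
    else:
        reasons.append("debt_to_income > 0.35 (+0)")

    approved = score >= 2
    reasons.append(f"total score {score} -> {'approve' if approved else 'decline'}")
    return approved, reasons


decision, audit_trail = approve_mortgage({"credit_score": 700, "debt_to_income": 0.4})
print(decision)           # True
for line in audit_trail:  # an inspector can check every factor that was used
    print(line)
```

A statistical model trained on historical data would be far harder to audit than this hand-written rule, which is precisely the concern Bostrom and Yudkowsky raise: if the decision criteria cannot be surfaced in some readable form, discrimination can go undetected.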
Predictability of AI

Although AI has been shown to outperform humans at specific tasks (e.g. Deep Blue's defeat of Kasparov in the world chess championship), most current artificial intelligences are not general. However, as technology advances and more complex artificial intelligences are designed, their predictability comes into play. Bostrom and Yudkowsky argue that managing a general artificial intelligence that performs tasks across many contexts is complex; identifying safety issues and predicting the behaviour of such an intelligence is considered difficult. This highlights the need for an AI to act safely in unfamiliar situations, to extrapolate the consequences of those situations, and essentially to think ethically, just as a human engineer would.

Hibbard's paper suggests that, while determining the AI's responses, testing should be carried out in a simulated environment using a "decision support system" that explores the learning intentions of the artificial intelligence in that environment, with the simulations run without human interference. However, Hibbard also promotes a "stochastic" process, using a random probability distribution, which would serve to reduce the predictability of specific actions (the probability distribution could still be analysed statistically); this would act as a defence against other artificial intelligences, or against people, seeking to manipulate the artificial intelligence being built.

Overall, the predictability of an artificial intelligence is an important factor in its design, especially when a general AI is intended to perform large-scale tasks in very different situations. However, while an AI that is obscure about how it performs its actions is undesirable, engineers should also consider the other side: an AI should retain some unpredictability that would, at the very least, deter the manipulation of its actions for malicious ends.

AI Thinks Ethically

Perhaps the most important aspect of AI ethics is the framework within which an artificial intelligence would think ethically and consider the consequences of its actions: in essence, how to capture human values and recognise how they will develop over time. This is especially true for a superintelligence, where the question of ethics could mean the difference between prosperity and destruction. Bostrom and Yudkowsky argue that for such a system to think ethically, it would have to be responsive to changes in ethics over time and decide which of them are a sign of progress, giving the example of comparing ancient Greece's acceptance of slavery with modern society. Here, the authors fear the creation of an ethically "stable" system that would be resistant to changing human values, and yet they do not want a system whose ethics are determined at random. They argue that understanding how to create a system that behaves ethically would require "understanding the structure of ethical questions" in a way that accounts for ethical advances that have not even been conceived yet.

Hibbard suggests a statistical solution to this problem as a way to give an AI some semblance of ethical behaviour; this is the main argument of his article. For example, he highlights the fact that people around the world hold different human values, which makes the ethical framework of an artificial intelligence complex to specify. He argues that to solve this problem, human values must not