Roboethics: A beginning of conversation about the Artificial Intelligence and Ethics

The purpose of this series of articles is to understand and reflect on something that is gradually becoming more common in the life of society. The use of robots, for example, once present only in the world of science fiction, has today left the cinema screen to become reality. These technologies are applied in different areas of human life and, in many situations, raise complex questions, owing to the ambivalences inherent in the technologies themselves and to their potential to harm human beings, as well as the other forms of life that surround us. From this new field of discussion, a new area of ethics applied to robotics is thus inaugurated in order to reflect on these problems.

The film “Bicentennial Man” (also available in other languages) tells the story of a robot, “Andrew”, who is purchased to perform domestic services and gradually humanizes himself. His presence in the family generates tensions and discomforts, surprises and expectations. In this process of humanizing the humanoid, characteristics emerge that reveal emotions, creativity, sensitivity, the desire to change his rigid face so as to better convey his feelings, the longing for freedom, sexual desires and urges, and the wish to love, to grow old and to die. The process of humanizing the automaton thus touches on fundamental, experiential questions that only human beings can raise. In the film, this process of humanization produces three reactions: the strangeness of a human being living with a humanoid, seen in the family's internal reaction; the fear that the creature will surpass the human and become uncontrollable, manifested in its seller, who wants to destroy it; and the belief in the development of such technologies, embodied in its owner, who refuses to destroy it, sees it as a companion, believes in the machine and wants to perfect it.

Already at the beginning of the film, when Andrew is switched on, his instruction manual presents the so-called principles of Isaac Asimov (1920-1992), the three laws of robotics: 1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) a robot must obey the orders given to it by human beings, except where such orders would conflict with the first law; and 3) a robot must protect its own existence as long as such protection does not conflict with the first or second law. Thus, even in a fictional way, the work already poses a series of questions that will be explored later.

Fr. Rogério Gomes, CSsR.
