Semantic or just syntactic ability? Why in the age of AI we still need (more) humanism

An article by Prof. A. Pizzichini on the Blog of the Alphonsian Academy

“Although AI [artificial intelligence] can simulate some aspects of human reasoning and perform certain tasks with incredible speed and efficiency, its computational capabilities represent only a fraction of the broader possibilities of the human mind. For example, it cannot currently replicate moral discernment and the capacity to form authentic relationships.” (AN, n. 32)

Thus the recent Note Antiqua et nova, jointly signed by the Dicasteries for the Doctrine of the Faith and for Culture and Education (here). A concise, but arguably effective, way to highlight one of the differences between human intelligence and AI is to say that the former has “typically human capacities for semantic understanding and creative production” (ibid., n. 22), while the latter is confined to the formal or syntactic sphere, unable to do anything other than logical-mathematical calculation. This argument usefully wards off overly hasty equivalences, even if it can clash with the experience each of us can have with the latest generation of chatbots, which are proving ever more capable at “intelligent” tasks (i.e., tasks that presuppose abilities presumably linked to understanding), at times outperforming human competitors (here).

Is the claim that AI is a sophisticated “calculator,” then, just a simplistic metaphor, one that lulls us into a superiority complex soon to be crushed by the evidence? Now, the fact that AI is “syntactic” in nature does not mean that it acts rigidly through pre-set rules, which would make its outputs predictable and repetitive. That expectation perhaps rests on a certain dualistic conception of the distinction between the signifier and the meaning of a word: on one side there would be the graphic or spoken sign of the word (the signifier), purely conventional in nature; on the other, the content, the idea expressed by the term (the meaning), accessible only to a mind capable of grasping it.

In reality, the meaning of a term somehow shapes the use of its signifier. The various signifiers are indeed conventional, but their use is not arbitrary: it is determined precisely by the meanings of words. This means that in a given language there is a network of relationships between words, for example noun-adjective associations or capital-state pairings, identifiable through their concrete recurrences across texts. In other words, if one manages to reconstruct these relationships in some way, it becomes possible to carry out certain “semantic” operations even without possessing consciousness or thought. In an automated context, then, the closest thing to grasping the meaning of a term is identifying the network of relationships in which the term is embedded. A transformer, for instance (the algorithm that is the “engine” of today’s advanced chatbots), assigns numerical scores to words, which makes it possible to reconstruct their context and their relationships with other terms (a technique known as word embedding, here). This is by no means an easy operation, since the context of a word or sentence can be very broad and complex, even hierarchically structured. Once the context problem is solved, the probability calculation and the construction of the sentence follow (here).
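To make the idea concrete, here is a minimal sketch in Python of the two steps just described: word embeddings, in which a relation like the capital-state pairing shows up as numerical proximity, and a probability calculation over those same numbers that “chooses” the next word. The four-dimensional vectors are invented for illustration (real models learn vectors of hundreds of dimensions from text statistics), and this is of course not a real transformer; the point is only that nothing in the process understands anything, since it is arithmetic from start to finish.

```python
# Toy word embeddings: each word is nothing but a list of numbers.
# These 4-dimensional vectors are invented for the example.
import numpy as np

embeddings = {
    "rome":   np.array([0.9, 0.1, 0.8, 0.1]),
    "italy":  np.array([0.8, 0.2, 0.9, 0.2]),
    "paris":  np.array([0.9, 0.1, 0.1, 0.9]),
    "france": np.array([0.8, 0.2, 0.2, 0.8]),
    "banana": np.array([0.1, 0.9, 0.3, 0.2]),
}

def cosine(u, v):
    # A purely formal measure of proximity between two word vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# The capital-state relation appears as geometric closeness:
print(cosine(embeddings["rome"], embeddings["italy"]))   # high (~0.99)
print(cosine(embeddings["rome"], embeddings["banana"]))  # low  (~0.37)

def softmax(scores):
    # Turn raw scores into a probability distribution.
    e = np.exp(scores - scores.max())
    return e / e.sum()

# "Sentence construction" reduces to picking the most probable next word:
context = embeddings["paris"]
candidates = [w for w in embeddings if w != "paris"]
scores = np.array([context @ embeddings[w] for w in candidates])
for word, p in zip(candidates, softmax(scores)):
    print(f"{word}: {p:.2f}")  # "france" wins, by arithmetic alone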

These techniques allow algorithms, for example, to produce “intelligent” summaries of a passage or to grasp the main concepts and ideas of a text, giving the impression of genuine understanding rather than mere “syntactic” manipulation. But this is not the case: the operations the algorithm performs are syntactic in the sense that it works through formal, extrinsic calculations and associations, and the effectiveness of this process depends heavily on training and subsequent fine-tuning, precisely in order to determine the correct network of relationships among the words of a language. In a recent text of ours (here), we referred to these capabilities of AI by qualifying it as an “objective mind” or “structure of thought.” Of course, if we imagine that we too are nothing more than computers with biological hardware, our brain doing nothing more than processing information like any data processor, then all these distinctions look like so much Byzantine hair-splitting.
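The role of training can be illustrated with the same toy apparatus. The sketch below uses an invented update rule, not the backpropagation that real systems use over billions of examples, but the principle is the same: repeated exposure to a word pair “writes” their relationship into the numbers, and nothing more.

```python
# A crude sketch of what "training" amounts to in this picture: nudging
# the numbers so that word pairs observed together score higher.
# The update rule is invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["rome", "italy", "paris", "france", "banana"]
emb = {w: rng.normal(size=4) for w in vocab}
emb = {w: v / np.linalg.norm(v) for w, v in emb.items()}  # unit vectors

def similarity(a, b):
    return float(emb[a] @ emb[b])  # cosine, since vectors are unit-length

print("before training:", round(similarity("rome", "italy"), 2))

lr = 0.1
for _ in range(50):
    # "rome ... italy" keeps co-occurring in the corpus:
    # pull the two vectors toward each other.
    emb["rome"] += lr * emb["italy"]
    emb["italy"] += lr * emb["rome"]
    # Renormalize so scores stay comparable.
    emb["rome"] /= np.linalg.norm(emb["rome"])
    emb["italy"] /= np.linalg.norm(emb["italy"])

print("after training: ", round(similarity("rome", "italy"), 2))   # near 1
print("unrelated pair: ", round(similarity("rome", "banana"), 2))  # still low
```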

Indeed, it is quite possible to be fooled by the amazing performances of AI, especially if one loses the sense of the person and of truth. And this brings us to the terrain of anthropology, and here we mean lived anthropology even before theoretical anthropology. What makes the difference between a structure of thought and thought itself is intentionality, an openness to truth (cf. AN, nn. 21-23): proof of this are the so-called “hallucinations” of chatbots, which are not malfunctions but an expression of how they function (here), aiming at optimal adaptation to a given situation on the basis of a specific objective.
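The point can be seen even in the toy setting used above. Given a context that corresponds to nothing the system has “seen,” the machinery still returns a fluent, confident distribution, because no step of the calculation checks for truth; it only serves the objective of producing the best-scoring continuation. Again, a sketch with invented vectors:

```python
# Why "hallucinations" express the functioning rather than a breakdown:
# the pipeline always yields a confident probability distribution,
# whether or not the context matches anything in its training.
import numpy as np

def softmax(scores):
    e = np.exp(scores - scores.max())
    return e / e.sum()

rng = np.random.default_rng(1)
vocab = ["rome", "italy", "paris", "france", "banana"]
emb = {w: rng.normal(size=4) for w in vocab}

nonsense_context = rng.normal(size=4)  # a "question" never seen in training
scores = np.array([nonsense_context @ emb[w] for w in vocab])
print(dict(zip(vocab, softmax(scores).round(2))))  # fluent and confident anyway
```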

Before getting lost in abstract theoretical distinctions, therefore, it is of fundamental importance not to lose contact with our “humanistic” tradition, with the sense of “who we are.” This is the ultimate task of education, instruction, and training in general (all concepts covered by the English term education): to awaken and cultivate the sense of the person and of truth, whose loss is the real danger to be wary of, the one that can make AI a deadly weapon that humanity turns against itself instead of a formidable instrument for the humanization of the world.