If we go extinct, it will not be ChatGPT's fault

A reflection on the real and supposed risks of artificial intelligence.

Gabriele Costa | Professor of Computer Science, Scuola IMT Alti Studi Lucca
[Image: https://www.flickr.com/photos/152824664@N07/30212411048/]

The debate on the introduction of AI and the impact we should expect on our society is extremely lively, but not always perfectly lucid and rational. As is often the case, the risks tend to frighten us and to overshadow the potential opportunities. AI is almost certainly about to enter our daily lives in force, in various forms, and this will bring about major changes. Whether these changes turn out to be positive or negative will depend above all on our understanding of this technology and on the development of a constructive debate. The aim of this article is to contribute to the ongoing discussion by introducing a new perspective on the risks and potential of AI, including the possible emergence of a Skynet.

AI and employment

Leaving aside for a moment our possible extinction, recently predicted by a group of experts in a public letter, the main issue under debate concerns the impact of AI on the world of work. While it is true that we have long been accustomed to the idea that certain jobs are destined to disappear as humans are replaced by automated tools, many assume that this fate only concerns physical, heavy and, all in all, unpleasant jobs: those, in short, that hardly anyone will miss. It is therefore worrying to discover that among the jobs threatened with extinction are also those of the journalist and the lawyer (assuming one can miss a divorce lawyer). Some people persist in thinking that this cannot possibly happen, by virtue of the unparalleled (!) faculties of the human intellect. But this kind of consolatory reasoning (wishful thinking: 'things will turn out well anyway'), which leads us to believe that the activities most closely tied to the intellect are safe from technological progress, is typical of human beings and should, once again, prompt us to reflect deeply on our position in the universe.

From the point of view of our species, the history of science (and, more recently, of technology) teaches one great moral: we are not special (except in our ability to do enormous damage). The situation is vaguely reminiscent of the races between horses and early locomotives in some western films, in which the horse sometimes managed to beat the locomotive. Yet even if in some cases the animal could outrun the first rudimentary steam engines, no one today would pit a poor horse against a Frecciarossa high-speed train. From this point of view, we are probably in a position very similar to that of the horse in the Wild West.

Comparing intelligences

Some might argue that, as noted above, the competition between locomotive and horse was merely 'muscular' in nature, pitting the steam engine against the biological one, whereas AI challenges our intellect and attempts to dethrone us in the ranking of the 'smartest'. True, but even here it is worth reflecting more carefully. The first thing to consider is that we are basically unable to define the term 'intelligence' to everyone's satisfaction. A particularly rigorous and useful definition, for example, is the one given in game theory, where an intelligent agent is one that is always able to identify the optimal strategy for achieving its goals. Many, however, do not consider this definition adequate to describe the intelligence of human beings, which is more closely related to creativity, i.e. the ability to imagine things that do not yet exist. Not to mention that, if we accepted this definition, we humans would immediately come across as rather unintelligent, since we rarely manage to identify optimal strategies. Unfortunately, there is no satisfactory description of intelligence in the terms we like best. 'Genius' remains something vague and elusive, a mysterious object that, as Alan Turing put it, lies inside an onion with many layers and that, all things considered, might as well contain nothing. Indeed, in Linnaeus' classification our species is called 'Homo sapiens', not 'Homo intelligens'. If we limit ourselves to 'sapiens' (knowing), however, the challenge was lost years ago, since no human being can really compete with a (by now trivial) search engine at storing and retrieving knowledge.
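To make the game-theoretic definition mentioned above a little more concrete, here is a minimal sketch in standard decision-theoretic notation (the symbols A, U and s are illustrative and not taken from the article): an agent is 'intelligent' in this sense if, in every situation s, it selects the action that maximises its expected utility,

$$a^{*}(s) \;=\; \underset{a \in A}{\arg\max}\; \mathbb{E}\big[\, U(\mathrm{outcome}(s, a)) \,\big].$$

By this criterion, as noted above, human beings would fail the test rather often.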

Thomas Edison's famous motto, 'genius is 1 per cent inspiration and 99 per cent perspiration', reminds us that mental activity, even the most refined, is an exercise of a highly physical nature (99 per cent perspiration!), and indeed our brain consumes a significant portion of our energy (around 20 per cent). In this sense, AI can make a significant contribution to our society by supporting human beings in those intellectual activities that most resemble physical labour. In scientific research, for instance, typically considered a highly intellectual activity, such tasks are very frequent (and often delegated to young students): defining experiments, collecting data, reviewing the literature, preparing tables and graphs, and much more.

What awaits us?

So, what should we expect from the spread and development of AI in the coming years? Will we go the way of horses, 'condemned' to graze while trains whiz past us? Will we be permanently removed from the planetary scene, Terminator-style? This too calls for a few considerations. The first is that the development of AI is not easily contained. This technology represents an enormous economic advantage, and in a competitive landscape giving it up entirely would be like rolling out a red carpet for one's competitors (whether rival companies or nations). In this sense we should indeed expect a race to develop AI.

This rapid evolution, however, will probably not end with an AI actively engaged in the extermination of mankind (as feared by 42 per cent of the CEOs recently surveyed at a Yale summit). The main reason is the lack of motivation. The rational agents we are used to are biological, shaped by evolution to pursue goals such as survival or the control of certain resources. The lion that chases you across the savannah does so to defend itself or to eat. This type of behaviour is far removed from the way a computer operates. The risk, in these cases, has more to do with the flaws of the AI and with the actions of humans who may try to exploit those flaws: if an AI controls the launch of nuclear missiles, the greatest risk is not that it becomes self-aware, but that it is compromised or manipulated by enemy agents. The reason why some believe that an AI could intentionally attack humanity is very simple: they have seen it in the movies! This too is the result of a flaw in our reasoning, the 'availability heuristic' ('if I can easily recall something happening, I assume it is quite probable'). For the same reason, my wife fears being devoured by a great white shark when she goes swimming in San Terenzo.

A stimulus to rethink society?

Another important aspect is the impact on the world of work. The loss of jobs, including some of those we consider quality jobs, is a dramatic prospect, but it is especially so in a society 'founded' on work (Article 1 of the Italian Constitution famously declares the Republic 'founded on labour'). Work, like intelligence, is a concept that is difficult to define and typically very vague. In general, we consider 'work' an activity (full- or part-time) that involves 'formal' remuneration (stealing is not typically considered work). Indeed, many activities are treated as work only when this factor is present: take sport, for instance, where remuneration roughly marks the boundary between professionalism and amateurism. When we use expressions such as 'new jobs' or 'emerging jobs' we are typically referring to a change in the level of remuneration attached to an activity: who would have thought you could make money playing video games on TikTok? As Bertrand Russell already noted in 'In Praise of Idleness' (1935), this attitude rests on purely conventional moral principles that may change over time. In this sense, AI could be a key factor in prompting us to rethink our society: think of a constitution that begins with 'Italy is a democratic republic founded on happiness', or 'on being human'. Such a change, however, assuming it happens at all, will take time.

In collaboration

During this interval, the coexistence of AI and human intelligence will continue to be discussed. On this front there are several reasons for optimism. The first and most important is that intelligence will almost certainly never be so abundant that we can afford to do without anyone's contribution, including that of humans. It is reasonable to expect that, in many contexts, people will be supported by AI-based tools. The main difference from other tools that have entered our everyday lives is that we will no longer speak of 'using' the tool, but rather of 'collaborating' with it. A second factor concerns motivation. As mentioned earlier, the long path of evolution has equipped us with a wide range of stimuli and drives. Without this kind of drive, AIs are unable to take any initiative beyond what they have been configured to do. Consequently, if an AI ever actively engages in destroying mankind, it will almost certainly be because a human asked it to (which, admittedly, is not particularly reassuring).

AI as Green Lantern

So what will our coexistence with AI ultimately look like? Although it is not easy to predict precisely, we can get a fairly good idea through a simile. In his recent book 'Homo Deus: A Brief History of Tomorrow', historian Yuval Noah Harari describes technological progress in terms of the expansion of human capabilities. Seen through the eyes of those who lived in the past, the introduction of new technologies resembles the acquisition of new 'powers', not unlike those of superheroes. To a woman of the 1800s, for example, a television set might look like a crystal ball, and to a man of the same era a mobile phone call might seem like telepathic communication. In their eyes we might appear as a kind of Doctor Strange.

If we do the same exercise in the opposite direction, and imagine how the men and women who will use AI on a daily basis a few years from now might appear to us, which superhero might they resemble? A rather evocative hypothesis is Green Lantern. The reason is intuitive: this comic book superhero possesses a ring that can instantly materialise imagined objects. The ring's creations are not perfectly identical to real objects, and their behaviour depends above all on Green Lantern's powers of imagination. In this sense, an AI that creates new content, guided by the users' ability to imagine and describe what is to be done, is not too different from the power ring. Moreover, as in the comics and the films, the ring is not a unique object in the possession of a lone superhero, but a genuine technology at the disposal of a large group of individuals (the Green Lantern Corps). If we strip the story of personal drama, intergalactic wars and supervillains, what remains is a class of specialised operators who use the technology to perform extremely complex actions and to do the work of a hundred people each. Similarly, a single journalist will be able to do the work of an entire newsroom, and a software designer will be able to produce code as if leading a whole team of programmers. There will clearly be implications of various kinds, and ethical issues, but almost certainly, as for Hal Jordan and company, giving up the ring will not be a viable option.
