Will artificial intelligence be the 'lie detector' of the future?

AI recognises lies through language analysis better than humans.

We are not good at spotting liars. Without context, even the experts - police officers, judges, lawyers, psychologists, people who deal professionally with situations where establishing the truth of facts and statements is crucial - get it right in just over half of cases when put to the test. Much like tossing a coin.

This is also why researchers have long been working on tools and technologies to improve our ability to detect lies. Research on lie detection has a long history. The famous polygraph, the 'lie detector' as it is usually portrayed in the media, monitors physiological parameters such as heart rate and respiration to measure the stress that a person is assumed to feel while telling lies.

Other attempts to detect signs of lying have focused on monitoring facial expressions by means of electrodes, or on eye movements (eye-tracking), which are thought to have distinct characteristics in liars compared with truth tellers. Another place to look for clues to lying is language: the starting hypothesis, confirmed by various studies, is that the speech of a liar contains distinctive signals compared with that of a truth teller. The problem, however, is how to recognise these clues and signals. As in many other fields, a strand of research is trying to understand whether artificial intelligence, and in particular tools similar to ChatGPT that work precisely on language processing, can help with this task.

A recent study by a group of researchers from Scuola IMT and the University of Padua, published in the journal Scientific Reports, presents the results of just such an attempt. The researchers first tried to identify the textual indicators that best distinguish a truth from a lie. This is known as stylometric analysis, a method of quantitatively analysing a text to identify 'the style', that is, the distinctive characteristics with which it was written. Originally done by hand, and now carried out with computational and artificial intelligence techniques that can analyse huge amounts of text in a short time, this analysis was used to measure four indicators that the scientific literature considers theoretical markers of lying in speech. One is the so-called cognitive load: those who lie have to make a greater effort to construct their false account, so the sentences they produce are usually simpler and less articulated than those of someone recounting a true event. Another is an indicator called verifiability of details. Here the hypothesis, verified in several studies, is that a false account contains fewer details, which shows up as fewer concrete elements, such as place names and times, than a truthful account.
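
To make the idea concrete, here is a minimal sketch of what stylometric proxies for these two indicators could look like. The feature definitions below are illustrative assumptions, not the measures used in the study: cognitive load is approximated by sentence and word length, and verifiability of details by a crude count of numbers and capitalised words.

```python
import re

def stylometric_features(text: str) -> dict:
    """Illustrative proxies for two deception indicators (not the study's features).

    - cognitive load: approximated by average sentence length and word length
      (simpler, shorter sentences -> lower values);
    - verifiability of details: approximated by counting tokens that look like
      concrete, checkable elements (numbers, times, capitalised names).
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)

    avg_sentence_len = len(words) / max(len(sentences), 1)
    avg_word_len = sum(len(w) for w in words) / max(len(words), 1)

    # Crude stand-ins for "verifiable details": numbers or times, plus
    # capitalised words (possible names of people, places, institutions).
    numbers = len(re.findall(r"\b\d+(?::\d+)?\b", text))
    names = len(re.findall(r"\b[A-Z][a-z]+\b", text))

    return {
        "avg_sentence_length": avg_sentence_len,
        "avg_word_length": avg_word_len,
        "verifiable_detail_count": numbers + names,
    }

# Example: a vague statement versus one anchored to checkable details.
print(stylometric_features("I was out. I came home late. Nothing happened."))
print(stylometric_features("I left the office on Via Roma at 18:30 and met Anna at the station."))
```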

The researchers then analysed three datasets: the first of true or false personal opinions; the second of autobiographical memories, i.e. accounts of past events; the third of intentions for the future, real or invented. "The purpose of this first analysis was to understand whether the indicators of lying differ across contexts, i.e. whether one indicator might 'work' to recognise lies about an autobiographical memory, but perhaps not lies about a future intention," explains Riccardo Loconte, PhD student in Neuroscience at the IMT School and author of the research.
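
One simple way to ask that question of the stylometric indicators is to compare truthful and deceptive statements within each dataset separately. The sketch below is a minimal illustration under assumed file names and column labels ('statement', 'deceptive'), reusing the stylometric_features function from the previous sketch; it is not the analysis pipeline used in the study.

```python
import pandas as pd
from scipy import stats

# Hypothetical layout: one CSV per dataset with a 'statement' column and a
# binary 'deceptive' label. File names and columns are illustrative only.
datasets = {
    "opinions": "opinions.csv",
    "memories": "memories.csv",
    "intentions": "intentions.csv",
}

for name, path in datasets.items():
    df = pd.read_csv(path)
    # Expand the stylometric_features dict (defined above) into columns.
    feats = df["statement"].apply(stylometric_features).apply(pd.Series)
    df = pd.concat([df, feats], axis=1)

    # Does this indicator separate truthful from deceptive statements
    # in this particular context?
    for indicator in ["avg_sentence_length", "verifiable_detail_count"]:
        truthful = df.loc[df["deceptive"] == 0, indicator]
        deceptive = df.loc[df["deceptive"] == 1, indicator]
        t, p = stats.ttest_ind(truthful, deceptive, equal_var=False)
        print(f"{name:12s} {indicator:25s} t={t:+.2f} p={p:.3f}")
```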

As a second step, the researchers trained an artificial intelligence algorithm (FLAN-T5) to recognise lies on each dataset separately, by having it study which stories were true and which were false. The result is that the model learns, and rather well: it can recognise lies with an accuracy of around 80 per cent, much better than people, even experts, who get it right on average only 50 per cent of the time. The next step was to see whether the artificial intelligence could find a general rule for lying in language, i.e. a linguistic signal that characterises all lies. Here the results were less encouraging: the artificial intelligence learns, but cannot generalise.
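
The study names FLAN-T5 as the model; the sketch below shows how such a model might be fine-tuned for this task with the Hugging Face transformers library. The checkpoint, prompt wording, toy data and hyperparameters are assumptions for illustration, not the authors' training setup.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# FLAN-T5 is the model family named in the study; everything below
# (checkpoint, prompt, data format, hyperparameters) is an illustrative guess.
model_name = "google/flan-t5-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

def encode(statements, labels):
    # Frame lie detection as text-to-text: given a statement, the model
    # must generate the word "deceptive" or "truthful".
    inputs = tokenizer(
        ["Is this statement truthful or deceptive? " + s for s in statements],
        padding=True, truncation=True, max_length=256, return_tensors="pt",
    )
    targets = tokenizer(
        ["deceptive" if y else "truthful" for y in labels],
        padding=True, return_tensors="pt",
    )["input_ids"]
    targets[targets == tokenizer.pad_token_id] = -100  # ignore padding in the loss
    inputs["labels"] = targets
    return inputs

# Toy examples standing in for one of the study's datasets.
train_texts = ["I spent last summer volunteering at a hospital abroad.",
               "I have never met him before in my life."]
train_labels = [1, 0]  # 1 = deceptive, 0 = truthful

batch = encode(train_texts, train_labels)
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

model.train()
for epoch in range(3):
    optimizer.zero_grad()
    loss = model(**batch).loss  # cross-entropy on the generated label word
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")

# Prediction: generate the label word for a new statement.
model.eval()
with torch.no_grad():
    test = tokenizer("Is this statement truthful or deceptive? I was at home all evening.",
                     return_tensors="pt")
    output = model.generate(**test, max_new_tokens=3)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Fine-tuning on one dataset and then evaluating on another (say, training on opinions and testing on memories) is the kind of cross-context check that separates a genuinely general rule from cues specific to a single type of lie.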

This could mean that no matter how hard you look for it, there is no universal rule in language that characterises all lies. Every type of lie - and probably every liar - is different. To try to make the algorithm's predictions accurate and generalisable, it will be necessary to train the artificial intelligence on different contexts and types of lies.

The provisional conclusion is that artificial intelligence is undoubtedly better than humans at spotting liars, but at the moment it is not reliable enough to be useful in real-world settings: for example, to expose a witness who lies in court. "These are nevertheless interesting results, although not immediately applicable in real-life contexts," Loconte notes. "In court, for example, where it is not yet possible to use and test tools such as the old lie detectors, an analysis of the witness's language, perhaps combined with an analysis of facial expressions conducted on video recordings, could bring a significant improvement in the ability to recognise lies." For now, this remains a futuristic scenario.
