Research at the University of California, San Diego suggests that modern AI systems such as GPT-4 can convince humans they are interacting with another person, raising questions about the implications of AI advancement.

In 1637, René Descartes questioned whether a machine could think. In 1950, Alan Turing reframed this query, proposing the "Turing test": could a machine convince a human interrogator that it is human? Recently, researchers at the University of California, San Diego explored this question with new artificial intelligence systems.

Their study, currently available as a preprint on arXiv, had human participants hold five-minute conversations with either an AI system such as GPT-4 or another human. The results indicated that participants were no better than chance at distinguishing AI from human respondents, suggesting that modern AI can pass itself off as human.

The experiment also included ELIZA, an early chatbot from the 1960s, which convinced only 22% of participants that it was human. By contrast, GPT-4 succeeded roughly 50% of the time.

The researchers noted that while GPT-4's performance can be seen as a milestone, it does not necessarily prove intelligence. Participants based their judgments more on linguistic style and socio-emotional cues than on traditional signs of intelligence.

Cameron Jones, a co-author of the study, warned that in real-world settings the rate of deception could be even higher, because people are not always aware that they might be talking to an AI. The study points to significant implications for the use of AI in society, from customer service automation to potential misuse in fraud.


Ivan Massow is Senior Editor at AI WEEK. A lifelong entrepreneur, Ivan has worked at Cambridge University's Judge Business School and the Whittle Lab, nurturing talent and transforming innovative technologies into successful ventures.
