A recent discussion about ChatGPT allegedly mimicking a famous actress's voice has sparked concerns about artificial intelligence (AI) becoming increasingly human-like. In a recent conversation, columnists Amanda Ripley, Josh Tyrangiel, and Bina Venkataraman shared their experiences with AI, highlighting the fine line between treating AI as human and recognizing its limitations.
Amanda Ripley recalled treating ChatGPT as human, using polite phrases like "please" and "thank you." Josh Tyrangiel said that he started out politely but later adopted a more direct and demanding approach. Ripley compared the evolving interaction to a long-term relationship, shifting from courteous to more functional and less formal.
Bina Venkataraman suggested this behavior is rooted in human nature, noting that people often anthropomorphize non-human entities. She highlighted that while we may project advanced capabilities onto AI, these technologies are still far from becoming truly conscious or sentient beings.
Tyrangiel also pointed to the "hallucination" phenomenon in AI, describing it as, at bottom, a software flaw in which the system produces output that fails to meet user expectations. He emphasized that our tendency to view AI as authoritative can make these glitches seem more significant than they are, reflecting a misplaced trust in the technology.
The full discussion is available on The Post’s “Impromptu” podcast feed.