Researchers from Cambridge University caution against the potential misuse of AI chatbots based on deceased individuals, urging safeguards to protect emotional well-being and privacy.

Researchers from Cambridge University have issued warnings about AI chatbots modeled on deceased individuals, highlighting ethical and emotional concerns. These "deadbots" imitate the deceased by analyzing their digital footprints, such as messages and social media activity. The technology, driven by the digital afterlife industry (DAI), is intended to help mourners by recreating their loved ones' speech patterns and personal quirks.

However, the researchers outline scenarios in which this technology could be misused, including deadbots promoting products or disturbingly insisting to users that the deceased still exists. In their hypothetical examples, a grandmother's AI recommends commercial products over personal recipes, and a recreated mother convinces her child that she is still alive.

Given the technology's spread and the criticism it has drawn, including pushback involving major AI companies such as OpenAI, the researchers argue that shutting down such services is unnecessary, but that implementing safeguards is crucial. They recommend measures such as ways to retire deadbots, clear disclaimers about their functions and limitations, access restrictions for minors, and consent from both data donors and users, all aimed at protecting emotional well-being and privacy.

Ivan Massow, Senior Editor at AI WEEK. A lifelong entrepreneur, Ivan has worked at Cambridge University's Judge Business School and the Whittle Lab, nurturing talent and transforming innovative technologies into successful ventures.
