Microsoft’s ‘Recall’ AI Feature Raises Privacy Concerns
Microsoft’s introduction of the Recall feature in its AI-powered Copilot+ PCs has raised concerns among privacy advocates over potential exploitation risks. Despite Microsoft’s assurances of local operation and data protection, experts warn of increased exposure to cyberattacks and of misuse in a range of contexts.
On May 20, 2024, Microsoft introduced a new feature named Recall, part of its AI-powered Copilot+ PCs. Recall takes periodic screenshots of a user’s screen to create what the company refers to as a “photographic memory,” intended to aid users by making historical screen content, such as documents, images, and websites, searchable via AI-driven semantic indexing.
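To make the described mechanism concrete, the sketch below illustrates the general pattern the article outlines: periodic capture, local-only storage, and semantic search over text extracted from screenshots. It is not Microsoft’s implementation; the LocalRecallIndex class is hypothetical, and the toy bag-of-words “embedding” stands in for the OCR and on-device neural models a real system would use.

```python
import math
import time
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Snapshot:
    timestamp: float
    text: str                       # text extracted from a screenshot (e.g., via OCR)
    vector: Counter = field(default_factory=Counter)

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real system would use a neural encoder."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class LocalRecallIndex:
    """Keeps all captured data in-process; nothing is sent over the network."""

    def __init__(self) -> None:
        self.snapshots: list[Snapshot] = []

    def capture(self, screen_text: str) -> None:
        # Called periodically: store the extracted text and its index vector.
        self.snapshots.append(Snapshot(time.time(), screen_text, embed(screen_text)))

    def delete(self, index: int) -> None:
        # User-initiated deletion of a single snapshot.
        del self.snapshots[index]

    def search(self, query: str, top_k: int = 3) -> list[tuple[float, Snapshot]]:
        # Rank stored snapshots by semantic similarity to the query.
        qv = embed(query)
        ranked = sorted(((cosine(qv, s.vector), s) for s in self.snapshots),
                        key=lambda pair: pair[0], reverse=True)
        return [(score, snap) for score, snap in ranked[:top_k] if score > 0]

index = LocalRecallIndex()
index.capture("Quarterly budget spreadsheet open in Excel")
index.capture("Travel booking site showing flights to Lisbon")
for score, snap in index.search("flight booking"):
    print(f"{score:.2f}  {snap.text}")
```

The delete method mirrors the user-facing controls described below; in a production system of this kind, one would expect the stored index to be encrypted at rest and the embeddings to come from an on-device model rather than word counts.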
Privacy advocates, however, have voiced concerns over the potential for exploitation. Despite Microsoft’s assurances that the tool operates locally on the device and does not capture private web sessions, experts worry that it unintentionally increases users’ exposure to cyberattacks. Microsoft lets users control which screens are captured and delete screenshots individually or in bulk.
The UK’s Information Commissioner’s Office (ICO) is investigating whether sufficient privacy protections are in place. Experts such as Jen Golbeck of the University of Maryland warn that the stored data could be misused by cybercriminals or employers, or exploited in personal contexts such as abusive relationships.
Microsoft maintains that user data remains on the device and is protected, with CEO Satya Nadella emphasizing that all processing occurs locally. Despite these reassurances, some experts and privacy advocates argue that the feature may present more risks than benefits.