U.S. Treasury Secretary Janet Yellen highlights the opportunities and risks of artificial intelligence in the financial sector during a conference on financial stability, urging continuous monitoring and collaboration to mitigate potential dangers.
Janet Yellen Addresses AI Impacts at Financial Stability Conference
On May 31, 2024, U.S. Treasury Secretary Janet Yellen delivered remarks at a conference on financial stability held at the U.S. Treasury Department in Washington, D.C., followed by events at the Brookings Institution. In her address, Yellen emphasized that while artificial intelligence (AI) presents significant opportunities for the financial sector, it also poses considerable risks.
Yellen pointed out AI’s current applications, such as forecasting, portfolio management, fraud detection, and enhanced customer service, noting its potential to make financial services more accessible and affordable through advancements in natural language processing and generative AI. She mentioned AI tools like OpenAI’s ChatGPT and Google’s Gemini, which have demonstrated impressive capabilities in generating text, images, and videos.
However, Yellen underscored the risks, including the “complexity and opacity” of AI models that function as “black boxes,” making their internal processes difficult for regulators and firms to understand. She cited additional concerns such as inadequate risk management frameworks, market concentration risks, and the potential for biased AI outputs that could affect financial decisions.
Yellen called for continuous monitoring of AI’s impact on financial stability, highlighting that regulators are using scenario analysis to anticipate future vulnerabilities. She noted ongoing efforts by the Financial Stability Oversight Council (FSOC) to track AI’s usage in financial markets and emphasized the need for both public and private sectors to collaborate on mitigating AI-related risks.
Separately, Yellen acknowledged that U.S. agencies, including the IRS and the Treasury Department, are already leveraging AI for tasks such as detecting tax fraud and financial crimes.
Meta’s New AI Data Policy Raises Privacy Concerns
Meta, headed by CEO Mark Zuckerberg, has announced changes to its data collection policy affecting users of Facebook and Instagram. Starting June 26, the company will use users’ data, including photos and posts, to train its AI systems. This decision applies even to individuals who do not use Meta’s services but are featured in content shared by others.
Meta’s updated privacy policy is intended to improve its generative AI models, such as Llama 3, which compete with ChatGPT-style systems. Although the company has promised not to use the content of private messages, the move has sparked privacy concerns. Users can formally object to their data being used in this manner via a form on Instagram’s website, invoking their rights under GDPR provisions.
Meta justifies this policy under “legitimate interests,” a legal basis within the GDPR that permits data processing without explicit user consent when the processing is deemed necessary and its impact is weighed against users’ privacy rights. Users unsatisfied with Meta’s response to their objections can file complaints with regulatory bodies or pursue legal action.
These policy changes reflect Meta’s broader strategy to enhance its AI tools, revealing ongoing tensions between technological advancements and user privacy.