AI Week
Innovation

Biden Administration Issues Executive Order to Set Standards for AI Safety and Security

By Isaiah Zaid · June 11, 2024

The Executive Order by the Biden Administration focuses on establishing new standards for artificial intelligence (AI) safety and security, aiming to protect consumer privacy, advance equity and civil rights, and promote innovation. Federal agencies like the FTC are working to address AI bias and discrimination, while companies are urged to implement safeguards to avoid liability.

On October 30, 2023, the Biden Administration issued an Executive Order focused on establishing new standards for artificial intelligence (AI) safety and security. This directive aims to protect consumer privacy, advance equity and civil rights, and promote innovation. The Executive Order directs various agencies, like the Department of Homeland Security and the Department of Energy, to work on mitigating AI-related threats to critical infrastructure. Although it didn’t task the Federal Trade Commission (FTC) with specific actions, it indicated that the FTC should use its existing authority to address irresponsible AI usage, particularly concerning bias and discrimination.

AI systems can produce biased or discriminatory outcomes, whether because of the data used to train the underlying algorithms or because the models are misused for unintended purposes. Companies whose AI systems result in biased treatment may face enforcement under Section 5 of the FTC Act for unfair practices, even in the absence of deception. For instance, the FTC reached a $3.3 million settlement with Passport Automotive over discriminatory lending practices affecting African American and Latino customers.

Federal agencies such as the FTC, the Department of Justice, the Equal Employment Opportunity Commission, and the Consumer Financial Protection Bureau have demonstrated their commitment to addressing AI bias through a Joint Statement on enforcement efforts against discrimination in automated systems, issued on April 25, 2023. The FTC collaborates with these agencies and with state attorneys general to address AI issues effectively.

Companies deploying AI systems are urged to monitor and vet their contractors and vendors in order to avoid liability; continuous diligence in this area is a crucial factor in the FTC's enforcement decisions. Clear and conspicuous disclaimers can also play a role, but they cannot absolve a company of liability for consumer harm.

The FTC’s enforcement actions may seek both injunctive relief and monetary penalties. Recently, the FTC has mandated the deletion of data and algorithms derived from data obtained through deceptive practices. Cases against Everalbum and WW International highlight this approach.

To limit liability, companies are advised to implement a range of safeguards such as conducting pre-release harm assessments, ensuring transparency with consumers, evaluating vendors’ capabilities, training employees, and performing continuous monitoring of AI systems. The FTC does not need to prove intent to prosecute under the unfairness theory but will consider the reasonableness of a company’s actions.

The landscape of AI regulation is swiftly evolving, with potential for more comprehensive federal guidelines inspired by frameworks like the European Union’s AI Act. Some states, including California, are already working on proactive AI regulations. While waiting for broader federal legislation, the FTC continues to leverage its authority under Section 5 to regulate AI practices.

Isaiah Zaid

As the Innovation Editor at AI WEEK, Isaiah keeps readers informed about the latest AI advancements across industries. His expertise in emerging trends and keen eye for groundbreaking ideas make him a valuable resource for anyone interested in the future of AI innovation.
