Education and Training

Unauthorized Use of Brazilian Children’s Images in AI Training Raises Privacy Concerns

By Kai Lainey | June 11, 2024 | 2 Mins Read

A recent report by Human Rights Watch exposes the inclusion of over 170 images of Brazilian children in an AI training dataset, raising concerns over privacy violations and deepfake generation.

The Human Rights Watch report revealed the unauthorized use of over 170 images and personal details of Brazilian children in AI training datasets. These images, scraped from the web, were included in LAION-5B, a dataset used by various AI models, including Stability AI’s Stable Diffusion. LAION-5B draws its content from Common Crawl, a web-scraping repository.

The images were collected from sources such as mommy blogs and YouTube videos, often posted with an expectation of privacy. LAION-5B, created by the German nonprofit LAION, contains more than 5.85 billion image-caption pairs.

Hye Jung Han, a children’s rights and technology researcher at Human Rights Watch, asserts that this practice violates children’s privacy and exposes them to significant risks. Deepfakes generated from such data further exacerbate these threats, potentially facilitating malicious use. In response, LAION has begun removing illegal content from the dataset, working with organizations like the Internet Watch Foundation.

YouTube and other platforms say such scraping violates their terms of service and have committed to taking action against the practice. The issue highlights concerns over AI’s ability to generate realistic deepfakes and the broader risks to data privacy, especially for children.
