Education and Training

Musashino University Researchers Develop AIME for Clearer Machine Learning Explanations

By News Room | May 23, 2024

Associate Professor Takafumi Nakanishi from Musashino University introduces Approximate Inverse Model Explanations (AIME) as a more intuitive approach to explaining AI and ML models, offering simpler insights compared to traditional methods like LIME and SHAP.


Tokyo, April 23, 2024—Advancements in machine learning (ML) and artificial intelligence (AI) have revolutionised sectors such as autonomous driving, medical diagnostics, and finance. However, the complexity of these technologies has led to a growing demand for more transparent and comprehensible explanations of how they generate predictions.

Traditionally, methods like Local Interpretable Model-Agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP) have been employed to elucidate these “black box” models. These techniques work by creating simple, approximate models to highlight which features most influence the machine’s decisions. While helpful, these forward-calculating methods can sometimes fall short in providing fully intuitive explanations.
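The "forward-calculating" approach the article describes can be illustrated with a minimal LIME-style sketch: perturb an instance, query the black-box model, and fit a proximity-weighted linear surrogate whose coefficients serve as local feature importances. This is a simplified illustration of the general technique, not the actual LIME library; the function name and parameters are chosen here for clarity.

```python
import numpy as np

def local_surrogate_importance(predict_fn, x0, n_samples=500, scale=0.1, seed=0):
    """Fit a weighted linear surrogate around x0 (a LIME-like forward method).

    predict_fn: maps an (n, d) array of inputs to an (n,) array of outputs.
    Returns the surrogate's coefficients as local feature importances.
    """
    rng = np.random.default_rng(seed)
    d = x0.shape[0]
    # Perturb the instance and query the black-box model on the samples.
    X = x0 + rng.normal(scale=scale, size=(n_samples, d))
    y = predict_fn(X)
    # Weight each perturbation by its proximity to x0 (Gaussian kernel).
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * scale**2 * d))
    # Weighted least squares: scale rows by sqrt(w), then solve.
    Xc = np.column_stack([np.ones(n_samples), X])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(sw[:, None] * Xc, sw * y, rcond=None)
    return coef[1:]  # drop the intercept
```

On a model that is exactly linear near x0, the recovered coefficients match the model's own weights, which is the sense in which such surrogates "highlight which features most influence the machine's decisions".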

In response to these challenges, Associate Professor Takafumi Nakanishi from the Department of Data Science at Musashino University, Japan, has introduced a novel approach termed Approximate Inverse Model Explanations (AIME). This new technique aims to offer a more intuitive interpretation of AI and ML models by “reverse-calculating” their decision-making processes. The study detailing AIME was published in September 2023 in the journal IEEE Access.

AIME distinguishes itself by estimating and constructing an inverse operator for an ML or AI model. This operator assesses the importance of both local and global features in determining the model’s outputs. Additionally, the approach uses representative similarity distribution plots to show how particular predictions relate to other instances, offering a clearer insight into the dataset’s structure.
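The full construction of AIME's inverse operator is given in the IEEE Access paper; as a rough, hedged illustration of the "reverse-calculating" idea, one can estimate a linear map from model outputs back to inputs with a Moore-Penrose pseudoinverse and read a global feature-importance profile from its rows. The function below is a hypothetical sketch under a linear approximation, not the authors' exact formulation.

```python
import numpy as np

def approximate_inverse_importance(X, Y):
    """Estimate a linear inverse map A with X ≈ Y @ A, mapping model
    outputs back towards the inputs that produced them.

    X: (n, d) input features; Y: (n, k) model outputs (e.g. class scores).
    Returns a (k, d) matrix: row j gives, for output j, the relative
    strength with which each input feature is reconstructed.
    """
    # Least-squares inverse map via the Moore-Penrose pseudoinverse.
    A = np.linalg.pinv(Y) @ X  # shape (k, d)
    # Normalise each row so the importances for one output sum to 1.
    return np.abs(A) / np.abs(A).sum(axis=1, keepdims=True)
```

In this simplified setting, a feature that drives an output strongly is reconstructed strongly by the inverse map, so it receives a large share of that output's importance row.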

The research found that explanations derived from AIME were simpler and more intuitive compared to those obtained from LIME and SHAP. AIME was tested across various datasets, including tabular data and handwritten numeric and textual information, demonstrating its versatility. Furthermore, the method showed robustness in managing multicollinearity—where independent variables in a dataset are highly correlated—which is a common issue in data analysis.

Dr. Nakanishi highlighted the potential of AIME in several practical applications: “AIME is particularly relevant for explaining AI-generated art. Additionally, it could play a crucial role in post-accident analyses for self-driving cars, similar to how flight recorders are used in aviation.”

This innovative development is seen as a step towards bridging the gap between human understanding and AI functionalities, fostering greater trust and reliance on these advanced technologies.

About Associate Professor Takafumi Nakanishi

Takafumi Nakanishi is an Associate Professor in the Department of Data Science at Musashino University. He received his Ph.D. in engineering from the University of Tsukuba in 2006. Before joining Musashino University, he served as an associate professor and chief researcher at the International University of Japan’s Global Communication Center from 2014 to 2018. His research interests span explainable AI, data mining, big data analysis systems, integrated databases, emotional information processing, and media content analysis.

For more insight into his work, Dr. Nakanishi can be contacted through Musashino University.

© 2025 AI Week. All Rights Reserved.
