Education and Training

New AIME Method for Explainable AI Developed by Musashino University's Takafumi Nakanishi

By Kai Lainey | May 26, 2024

Associate Professor Takafumi Nakanishi of Musashino University introduces AIME, a groundbreaking method for explainable AI. AIME reverse-calculates AI decisions, offering simpler and more intuitive explanations than existing XAI methods.

Researchers at Musashino University Develop New Method for Explainable AI

Tokyo, April 23, 2024 — Machine learning (ML) and artificial intelligence (AI) technologies are widely used across various industries, including automated driving, medical diagnostics, and finance. However, understanding how these advanced models make predictions remains challenging due to their complexity.

In response to this issue, Takafumi Nakanishi, an Associate Professor in the Department of Data Science at Musashino University, Japan, has introduced a new method called Approximate Inverse Model Explanations (AIME). This approach aims to provide more intuitive explanations of AI and ML decision-making processes. Unlike existing interpretable ML algorithms and explainable AI (XAI) models, such as Local Interpretable Model-Agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP), which rely on forward calculations, AIME reverse-calculates AI decisions.
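To make the contrast concrete, the sketch below illustrates the inverse idea on a toy linear "black box": rather than fitting a forward surrogate around a prediction as LIME does, it estimates a linear operator mapping model outputs back to inputs via a least-squares pseudoinverse. This is a minimal illustration under assumed names and a simplified setup, not Nakanishi's published formulation.

```python
import numpy as np

# Illustrative only: a toy linear "black box" and a pseudoinverse-based
# inverse operator. AIME's published construction is more general; the
# setup and variable names here are assumptions for the sketch.

rng = np.random.default_rng(0)

n_features, n_outputs, n_samples = 5, 3, 200
W = rng.normal(size=(n_outputs, n_features))      # toy model weights
X = rng.normal(size=(n_samples, n_features))      # inputs
Y = X @ W.T + 0.01 * rng.normal(size=(n_samples, n_outputs))  # outputs

# Forward view (LIME-style): fit a surrogate g(x) ~ y around one point.
# Inverse view (AIME-style): estimate A with x ~ A y, i.e. solve
# X ~ Y A^T in the least-squares sense.
A = X.T @ Y @ np.linalg.pinv(Y.T @ Y)             # shape (n_features, n_outputs)

# The magnitudes in column j of A read as a global importance profile:
# which input features move together with output j.
print(np.abs(A).round(2))
```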

Nakanishi’s research, published in Volume 11 of IEEE Access in September 2023, explains that AIME constructs an inverse operator for an ML or AI model. This inverse operator estimates both the local and global importance of features for the model’s outputs. Additionally, AIME introduces a representative similarity distribution plot, which uses special representative estimation instances to explain how a particular prediction relates to other instances. This visualization provides insight into the complexity of the dataset distribution that the AI model is handling.
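As a rough illustration of that plot, the snippet below (continuing the toy setup above) builds a representative estimation instance for one output by applying the inverse operator to a one-hot output vector, then scores its cosine similarity against every instance in the dataset; a histogram of those scores is the distribution being visualized. The one-hot construction and the choice of cosine similarity are assumptions for this sketch, not the paper's exact definitions.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def similarity_distribution(A, X, output_index):
    """Similarity of a representative instance for one output to all data."""
    one_hot = np.zeros(A.shape[1])
    one_hot[output_index] = 1.0
    representative = A @ one_hot      # representative estimation instance
    return np.array([cosine_similarity(representative, x) for x in X])

# Continuing the sketch above:
# sims = similarity_distribution(A, X, output_index=0)
# A histogram of `sims` is the "similarity distribution plot".
```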

The study reveals that explanations derived from AIME are simpler and more intuitive than those provided by existing XAI methods like LIME and SHAP. AIME proved effective across various datasets, including tabular data, handwritten digits, and text. Furthermore, the similarity distribution plot offered an objective way to visualize model complexity, making the results easier to grasp for those without deep technical knowledge.

Experiments conducted as part of Nakanishi’s study also indicated that AIME is more robust in handling multicollinearity—an issue where multiple features in a dataset are highly correlated. This robustness is particularly relevant in fields such as AI-generated art and the analysis of self-driving car records. As Nakanishi notes, “Self-driving cars will soon have recorders like those in airplanes, which could be analyzed by AIME to ascertain the cause of accidents through post-accident analysis.”
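The pseudoinverse view also hints at why an inverse formulation can tolerate multicollinearity: for perfectly correlated features, the minimum-norm least-squares solution stays finite and splits weight evenly, where a naive matrix inverse fails outright. The toy below demonstrates only that numerical point; it is not a reproduction of the paper's experiments.

```python
import numpy as np

# Multicollinearity toy: a duplicated feature makes the normal
# equations singular, so an ordinary inverse fails, while the
# Moore-Penrose pseudoinverse returns the stable minimum-norm answer.

rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
X = np.column_stack([x1, x1])   # two perfectly correlated features
y = 3.0 * x1                    # target depends on the shared signal

w = np.linalg.pinv(X) @ y
print(w.round(3))               # ~[1.5, 1.5]: weight split evenly
```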

The development of AIME could play a crucial role in bridging the gap between humans and AI by providing clearer and more understandable explanations of AI decisions, thereby fostering deeper trust in these technologies.

About Takafumi Nakanishi
Takafumi Nakanishi is an Associate Professor in the Department of Data Science at Musashino University. He received his Ph.D. in engineering from the Graduate School of Systems and Information Engineering at the University of Tsukuba, Japan. His research interests include XAI, data mining, big data analysis systems, integrated databases, emotional information processing, and media content analysis.

For media inquiries, Takafumi Nakanishi can be contacted at +81 90-2239-9471 or [email protected].
