Publications

Perceptions of Explainable AI: How Presentation is Content

Presented at "BRIO - Bias, Risk, Opacity in AI: Design, Verification, and Development of Trustworthy AI", 2025, Milan, Italy.
This work argues that explainable AI should put user experience and human-centered design first when making AI systems understandable to people. It proposes a research framework that spans multiple disciplines, from law and psychology to software engineering and philosophy, and addresses concerns about regulation, human perception, trust, and the ultimate goals of AI transparency.

Explainable AI in Time-Sensitive Scenarios: Prefetched Offline Explanation Model

Published at Discovery Science (DS 2024). Lecture Notes in Computer Science, vol 15244, Springer.
As AI models increasingly shape real-world outcomes rather than merely predict them, this work introduces POEM (Prefetched Offline Explanation Model), an algorithm that quickly explains how an AI system makes decisions about images. By prefetching explanations offline, POEM can serve visual exemplars, counter-exemplars, and highlighted image areas on demand in time-sensitive situations, outperforming previous methods in both speed and explanation quality.
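
The "prefetched offline" idea is to pay the cost of building explanations ahead of time so that answers are near-instant when a decision must be explained. The sketch below only illustrates that general precompute-then-look-up pattern; the names (PrefetchedExplainer, explain_slow) are hypothetical and the paper's actual algorithm differs.

```python
# Illustrative sketch of a prefetch-offline / retrieve-online pattern.
# Not the POEM implementation; all names here are hypothetical.
import numpy as np

class PrefetchedExplainer:
    def __init__(self, explain_slow):
        # explain_slow: an expensive explanation function, e.g. one that builds
        # exemplars, counter-exemplars, and a saliency map for a single image.
        self.explain_slow = explain_slow
        self.keys = []          # feature vectors of reference instances
        self.explanations = []  # precomputed explanations, aligned with keys

    def prefetch(self, reference_features):
        """Offline phase: compute and cache explanations up front."""
        for x in reference_features:
            self.keys.append(np.asarray(x, dtype=float))
            self.explanations.append(self.explain_slow(x))
        self._key_matrix = np.vstack(self.keys)

    def explain(self, x):
        """Online phase: return the cached explanation of the nearest reference."""
        x = np.asarray(x, dtype=float)
        dists = np.linalg.norm(self._key_matrix - x, axis=1)
        return self.explanations[int(np.argmin(dists))]
```

The trade-off is the usual one: explanation latency is moved from query time into a preprocessing phase, which is what time-sensitive scenarios require.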