Publications

Perceptions of Explainable AI: How Presentation is Content

BRIO - Bias, Risk, Opacity in AI: Design, Verification, and Development of Trustworthy AI, 2025, Milan, Italy.
Doctoral Consortium of the European Conference on Artificial Intelligence (ECAI), 2025, Bologna, Italy.
This research argues that explainable AI should treat user experience and human-centered design as central to making AI systems understandable to people. The work proposes a research framework that spans multiple disciplines, from law and psychology to software engineering and philosophy, addressing concerns about regulation, human perception, trust, and the ultimate goals of AI transparency.

Explainable AI in Time-Sensitive Scenarios: Prefetched Offline Explanation Model

Discovery Science 2024. Lecture Notes in Computer Science, vol. 15244.
As AI models increasingly shape real-world outcomes rather than merely predict them, this work introduces POEM, an algorithm that precomputes explanations offline so that an image classifier's decisions can be explained quickly when needed. The system generates example images, counter-examples, and highlighted regions to help people understand the model's reasoning in time-sensitive situations, outperforming previous methods in both speed and explanation quality.
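
For intuition only, the snippet below sketches the general "prefetch offline, serve online" pattern that the title describes: explanations are computed ahead of time and retrieved instantly at decision time. It is a minimal illustration with hypothetical names, not the actual POEM implementation.

```python
# Illustrative sketch of a prefetched-explanation pattern (hypothetical names,
# not the POEM codebase): compute expensive explanations offline, serve them fast online.
from typing import Any, Callable, Dict, Hashable


class PrefetchedExplainer:
    """Precomputes explanations offline so they can be returned instantly at runtime."""

    def __init__(self, explain_fn: Callable[[Any], Dict[str, Any]]):
        self.explain_fn = explain_fn                     # slow, expensive explanation routine
        self.cache: Dict[Hashable, Dict[str, Any]] = {}  # key -> precomputed explanation

    def prefetch(self, instances: Dict[Hashable, Any]) -> None:
        """Offline phase: compute and store explanations ahead of time."""
        for key, instance in instances.items():
            self.cache[key] = self.explain_fn(instance)

    def explain(self, key: Hashable, instance: Any) -> Dict[str, Any]:
        """Online phase: return the prefetched explanation if available,
        otherwise fall back to computing it on the spot."""
        return self.cache.get(key) or self.explain_fn(instance)


if __name__ == "__main__":
    # Toy stand-in for an expensive explainer that would normally return
    # example images, counter-examples, and a highlighted-region map.
    slow_explainer = lambda x: {"examples": [x], "counter_examples": [], "highlights": None}

    explainer = PrefetchedExplainer(slow_explainer)
    explainer.prefetch({"img_001": [0.1, 0.2, 0.3]})        # done offline, ahead of time
    print(explainer.explain("img_001", [0.1, 0.2, 0.3]))    # instant lookup at decision time
```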