February 13, 2026
Haadia Amjad Receives Best Paper Award at EXPLAINABILITY 2025
The EXPLAINABILITY 2025 conference, held in Barcelona on 26–30 October 2025, brought together international researchers dedicated to advancing the theory, methods, and practical deployment of explainable systems in AI, machine learning, data science, and software engineering. The conference addressed issues of transparency, interpretability, fairness, accountability, and trust in complex models. Its Best Paper Award was given to three researchers whose work advanced practical and applied aspects of state-of-the-art XAI methods. Haadia Amjad was among them and was also invited to submit an extension of her work to an IARIA journal.
In her paper, Haadia used concept-based explainable AI (CXAI) to identify and analyze confusion patterns in deep neural networks for multi-label image classification, a task in which overlapping labels and contextual noise often lead to ambiguous or incorrect predictions. To understand why models become confused, she examined concept relevance and importance, showing that low concept distinctiveness and reliance on environmental (non-target) concepts were key contributors to misclassification. By applying CRP and CRAFT to real-world datasets, the study demonstrated how CXAI can diagnose learning weaknesses and dataset-induced biases that are not visible through performance metrics alone, helping to improve the interpretability and trustworthiness of AI systems.
Since the explainable AI research community is still growing, only a few conferences have been dedicated to the field thus far. “Winning the ‘Best Paper Award’ at a conference specifically about my field of research is a huge honor. It motivates me to aim even higher in the future,” says Haadia. What she found particularly interesting about the conference was not only its technical program but also the field's specific target domains and open issues. She said, “It was interesting to see domain-specific problems and issues with existing methods being solved together in one place. This kind of environment supports the continued learning of field researchers and creates a platform for the fruitful exchange of knowledge.”
About Haadia Amjad
Haadia Amjad is a member of the SECAI Graduate School and conducts research in the Explainable Artificial Intelligence Group (XAI-GE) at the Chair of Fundamentals of Engineering. The group is committed to advancing XAI methods and applying them across various domains, including surgical skill assessment, autonomous driving, and mechanical anomaly detection. Under the supervision of Professor Ronald Tetzlaff, the group collaborates with leading institutions to keep its research at the cutting edge. Haadia's participation in the conference highlights the group's commitment to translating the most recent research into practical solutions that improve the transparency and effectiveness of AI systems.