Even-if Explanations: Formal Foundations, Priorities and Complexity

Alfano G.; Greco S.; Mandaglio D.; Parisi F.; Shahbazian R.; Trubitsyna I.
2025-01-01

Abstract

Explainable AI has received significant attention in recent years. Machine learning models often operate as black boxes, lacking explainability and transparency while supporting decision-making processes. Local post-hoc explainability queries attempt to answer why individual inputs are classified in a certain way by a given model. While there has been important work on counterfactual explanations, less attention has been devoted to semifactual ones. In this paper, we focus on local post-hoc explainability queries based on semifactual ‘even-if’ reasoning, study their computational complexity across different classes of models, and show that both linear and tree-based models are strictly more interpretable than neural networks. We then introduce a preference-based framework that enables users to personalize both semifactual and counterfactual explanations according to their preferences, enhancing interpretability and user-centricity. Finally, we analyze the complexity of several interpretability problems in the proposed preference-based framework and provide algorithms for the polynomial-time cases.
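To make the two notions concrete, below is a minimal illustrative sketch in Python (not taken from the paper; the toy linear model, its weights, and all function names are hypothetical). Under one common reading, a counterfactual explanation is a smallest set of feature changes that flips the model's decision, while a semifactual ‘even-if’ explanation is a largest set of changes under which the decision still holds.

# Illustrative sketch (hypothetical, not from the paper): counterfactual
# vs. semifactual queries on a toy linear classifier over binary features.
from itertools import combinations

WEIGHTS = [2.0, -1.0, 0.5, 1.5]   # hypothetical feature weights
THRESHOLD = 1.0

def predict(x):
    # Classify as 1 iff the weighted sum reaches the threshold.
    return int(sum(w * xi for w, xi in zip(WEIGHTS, x)) >= THRESHOLD)

def flip(x, features):
    # Return a copy of x with the given binary features flipped.
    y = list(x)
    for i in features:
        y[i] = 1 - y[i]
    return y

def minimal_counterfactual(x):
    # Smallest set of feature flips that CHANGES the model's output.
    for k in range(1, len(x) + 1):
        for subset in combinations(range(len(x)), k):
            if predict(flip(x, subset)) != predict(x):
                return subset
    return None

def maximal_semifactual(x):
    # Largest set of feature flips that PRESERVES the output:
    # "even if all these features were different, the decision holds".
    for k in range(len(x), 0, -1):
        for subset in combinations(range(len(x)), k):
            if predict(flip(x, subset)) == predict(x):
                return subset
    return ()

x = [1, 0, 0, 1]
print("prediction:", predict(x))                      # 1
print("counterfactual:", minimal_counterfactual(x))   # (0, 1)
print("semifactual:", maximal_semifactual(x))         # (0, 1, 2)

The brute-force search above is exponential; the paper's complexity results concern exactly when such queries are tractable for linear, tree-based, and neural models. Its preference-based framework can be read as replacing the implicit ‘fewest changes’ and ‘most changes’ criteria used here with user-specified preferences over explanations.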
2025
ISBN: 978-1-57735-897-8
ISBN: 1-57735-897-X
Alfano, G., Greco, S., Mandaglio, D., Parisi, F., Shahbazian, R., Trubitsyna, I. (2025). Even-if Explanations: Formal Foundations, Priorities and Complexity. In Proceedings of the AAAI Conference on Artificial Intelligence (pp. 15347-15355). Menlo Park : Association for the Advancement of Artificial Intelligence [10.1609/aaai.v39i15.33684].
File in this record:
33684-Article Text-37752-1-2-20250410.pdf (open access)
Type: Versione Editoriale (publisher's version)
Size: 193.37 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/10447/696263
Citations
  • Scopus: 3