Intrinsic Subgraph Generation for Interpretable Graph Based Visual Question Answering
P. Tilli and N. Vu. Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 9204--9223. Torino, Italia, ELRA and ICCL, (May 2024)
Abstract
The large success of deep learning based methods in Visual Question Answering (VQA) has concurrently increased the demand for explainable methods. Most methods in Explainable Artificial Intelligence (XAI) focus on generating post-hoc explanations rather than taking an intrinsic approach, the latter characterizing an interpretable model. In this work, we introduce an interpretable approach for graph-based VQA and demonstrate competitive performance on the GQA dataset. This approach bridges the gap between interpretability and performance. Our model is designed to intrinsically produce a subgraph during the question-answering process as its explanation, providing insight into the decision making. To evaluate the quality of these generated subgraphs, we compare them against established post-hoc explainability methods for graph neural networks, and perform a human evaluation. Moreover, we present quantitative metrics that correlate with the evaluations of human assessors, acting as automatic metrics for the generated explanatory subgraphs. Our code will be made publicly available at link removed due to anonymity period.
@inproceedings{tilli-vu-2024-intrinsic,
abstract = {The large success of deep learning based methods in Visual Question Answering (VQA) has concurrently increased the demand for explainable methods. Most methods in Explainable Artificial Intelligence (XAI) focus on generating post-hoc explanations rather than taking an intrinsic approach, the latter characterizing an interpretable model. In this work, we introduce an interpretable approach for graph-based VQA and demonstrate competitive performance on the GQA dataset. This approach bridges the gap between interpretability and performance. Our model is designed to intrinsically produce a subgraph during the question-answering process as its explanation, providing insight into the decision making. To evaluate the quality of these generated subgraphs, we compare them against established post-hoc explainability methods for graph neural networks, and perform a human evaluation. Moreover, we present quantitative metrics that correlate with the evaluations of human assessors, acting as automatic metrics for the generated explanatory subgraphs. Our code will be made publicly available at link removed due to anonymity period.},
address = {Torino, Italia},
author = {Tilli, Pascal and Vu, Ngoc Thang},
booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
editor = {Calzolari, Nicoletta and Kan, Min-Yen and Hoste, Veronique and Lenci, Alessandro and Sakti, Sakriani and Xue, Nianwen},
keywords = {EXC2075 PN6-5 PN6-5(II) selected},
month = may,
pages = {9204--9223},
publisher = {ELRA and ICCL},
title = {Intrinsic Subgraph Generation for Interpretable Graph Based Visual Question Answering},
url = {https://aclanthology.org/2024.lrec-main.806},
year = 2024
}