ReLiNet: Stable and Explainable Multistep Prediction with Recurrent Linear Parameter Varying Networks
A. Baier, D. Aspandi, and S. Staab. Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, IJCAI 2023, 19th-25th August 2023, Macao, SAR, China, pages 3461--3469. International Joint Conferences on Artificial Intelligence Organization, August 2023. Main Track.
DOI: 10.24963/IJCAI.2023/385
Abstract
Multistep prediction models are essential for the simulation and model-predictive control of dynamical systems. Verifying the safety of such models is a multi-faceted problem requiring both system-theoretic guarantees as well as establishing trust with human users. In this work, we propose a novel approach, ReLiNet (Recurrent Linear Parameter Varying Network), to ensure safety for multistep prediction of dynamical systems. Our approach simplifies a recurrent neural network to a switched linear system that is constrained to guarantee exponential stability, which acts as a surrogate for safety from a system-theoretic perspective. Furthermore, ReLiNet's computation can be reduced to a single linear model for each time step, resulting in predictions that are explainable by definition, thereby establishing trust from a human-centric perspective. Our quantitative experiments show that ReLiNet achieves prediction accuracy comparable to that of state-of-the-art recurrent neural networks, while achieving more faithful and robust explanations compared to the model-agnostic explanation method LIME.
%0 Conference Paper
%1 Baier2023
%A Baier, Alexandra
%A Aspandi, Decky
%A Staab, Steffen
%B Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, IJCAI 2023, 19th-25th August 2023, Macao, SAR, China
%D 2023
%I International Joint Conferences on Artificial Intelligence Organization
%K ac-inmotion from:alexbaier myown
%P 3461--3469
%R 10.24963/IJCAI.2023/385
%T ReLiNet: Stable and Explainable Multistep Prediction with Recurrent Linear Parameter Varying Networks
%U https://doi.org/10.24963/ijcai.2023/385
%X Multistep prediction models are essential for the simulation and model-predictive control of dynamical systems. Verifying the safety of such models is a multi-faceted problem requiring both system-theoretic guarantees as well as establishing trust with human users. In this work, we propose a novel approach, ReLiNet (Recurrent Linear Parameter Varying Network), to ensure safety for multistep prediction of dynamical systems. Our approach simplifies a recurrent neural network to a switched linear system that is constrained to guarantee exponential stability, which acts as a surrogate for safety from a system-theoretic perspective. Furthermore, ReLiNet's computation can be reduced to a single linear model for each time step, resulting in predictions that are explainable by definition, thereby establishing trust from a human-centric perspective. Our quantitative experiments show that ReLiNet achieves prediction accuracy comparable to that of state-of-the-art recurrent neural networks, while achieving more faithful and robust explanations compared to the model-agnostic explanation method LIME.
@inproceedings{Baier2023,
abstract = {Multistep prediction models are essential for the simulation and model-predictive control of dynamical systems. Verifying the safety of such models is a multi-faceted problem requiring both system-theoretic guarantees as well as establishing trust with human users. In this work, we propose a novel approach, ReLiNet (Recurrent Linear Parameter Varying Network), to ensure safety for multistep prediction of dynamical systems. Our approach simplifies a recurrent neural network to a switched linear system that is constrained to guarantee exponential stability, which acts as a surrogate for safety from a system-theoretic perspective. Furthermore, ReLiNet's computation can be reduced to a single linear model for each time step, resulting in predictions that are explainable by definition, thereby establishing trust from a human-centric perspective. Our quantitative experiments show that ReLiNet achieves prediction accuracy comparable to that of state-of-the-art recurrent neural networks, while achieving more faithful and robust explanations compared to the model-agnostic explanation method LIME.},
added-at = {2023-06-13T10:22:08.000+0200},
author = {Baier, Alexandra and Aspandi, Decky and Staab, Steffen},
biburl = {https://puma.ub.uni-stuttgart.de/bibtex/2ed8737e44fb8d97aa07ec64c776a2a81/analyticcomp},
booktitle = {Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence, {IJCAI} 2023, 19th-25th August 2023, Macao, SAR, China},
doi = {10.24963/IJCAI.2023/385},
interhash = {2e996c3803dd3030b150002c1cabf6a1},
intrahash = {ed8737e44fb8d97aa07ec64c776a2a81},
keywords = {ac-inmotion from:alexbaier myown},
month = aug,
note = {Main Track},
pages = {3461--3469},
publisher = {International Joint Conferences on Artificial Intelligence Organization},
timestamp = {2024-03-15T11:35:07.000+0100},
title = {ReLiNet: Stable and Explainable Multistep Prediction with Recurrent Linear Parameter Varying Networks},
url = {https://doi.org/10.24963/ijcai.2023/385},
year = 2023
}