Hitchhiker's Guide for Explainability in Autoscaling
F. Klinaku, S. Speth, M. Zilch, and S. Becker. Companion of the 2023 ACM/SPEC International Conference on Performance Engineering (ICPE '23 Companion), pp. 277–282. New York, NY, USA, Association for Computing Machinery, April 15, 2023
DOI: 10.1145/3578245.3584728
Abstract
Cloud-native applications force increasingly powerful and complex autoscalers to guarantee the applications' quality of service. For software engineers with operational tasks, understanding the autoscalers' behavior and applying appropriate reconfigurations is challenging due to their internal mechanisms, inherent distribution, and decentralized decision-making. Hence, engineers seek appropriate explanations. However, engineers' expectations of feedback and explanations from autoscalers are unclear. In this paper, through a workshop with a representative sample of engineers responsible for operating an autoscaler, we elicit requirements for explainability in autoscaling. Based on the requirements, we propose an evaluation scheme for evaluating explainability as a non-functional property of the autoscaling process and guide software engineers in choosing the best-fitting autoscaler for their scenario. The evaluation scheme is based on a Goal Question Metric approach and contains three goals, nine questions to assess explainability, and metrics to answer these questions. The evaluation scheme should help engineers choose a suitable and explainable autoscaler or guide them in building their own.
@inproceedings{Klinaku2023,
abstract = {Cloud-native applications force increasingly powerful and complex autoscalers to guarantee the applications' quality of service. For software engineers with operational tasks, understanding the autoscalers' behavior and applying appropriate reconfigurations is challenging due to their internal mechanisms, inherent distribution, and decentralized decision-making. Hence, engineers seek appropriate explanations. However, engineers' expectations of feedback and explanations from autoscalers are unclear. In this paper, through a workshop with a representative sample of engineers responsible for operating an autoscaler, we elicit requirements for explainability in autoscaling. Based on the requirements, we propose an evaluation scheme for evaluating explainability as a non-functional property of the autoscaling process and guide software engineers in choosing the best-fitting autoscaler for their scenario. The evaluation scheme is based on a Goal Question Metric approach and contains three goals, nine questions to assess explainability, and metrics to answer these questions. The evaluation scheme should help engineers choose a suitable and explainable autoscaler or guide them in building their own.},
added-at = {2025-01-13T13:33:26.000+0100},
address = {New York, NY, USA},
author = {Klinaku, Floriment and Speth, Sandro and Zilch, Markus and Becker, Steffen},
biburl = {https://puma.ub.uni-stuttgart.de/bibtex/29196fbbfb4275c103f46d3bb625ace20/klinaku},
booktitle = {Companion of the 2023 ACM/SPEC International Conference on Performance Engineering},
day = 15,
doi = {10.1145/3578245.3584728},
interhash = {862837d453b2262b4407b56f666e41aa},
intrahash = {9196fbbfb4275c103f46d3bb625ace20},
isbn = {9798400700729},
keywords = {autoscaling cloud-computing elasticity explainability},
location = {Coimbra, Portugal},
month = apr,
pages = {277--282},
publisher = {Association for Computing Machinery},
series = {ICPE '23 Companion},
timestamp = {2025-01-13T13:33:26.000+0100},
title = {Hitchhiker's Guide for Explainability in Autoscaling},
url = {https://doi.org/10.1145/3578245.3584728},
year = 2023
}