Article in a conference proceedings,

Artifact Evaluation: Is It a Real Incentive?

2017 IEEE 13th International Conference on e-Science (e-Science), pages 488-489 (October 2017)
DOI: 10.1109/eScience.2017.79
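For convenience, here is a minimal BibTeX sketch assembled only from the fields shown above; the citation key is a placeholder, and the author field is omitted because this entry does not list the authors.

@inproceedings{artifact-evaluation-2017,
  title     = {Artifact Evaluation: Is It a Real Incentive?},
  booktitle = {2017 IEEE 13th International Conference on e-Science (e-Science)},
  pages     = {488--489},
  month     = oct,
  year      = {2017},
  doi       = {10.1109/eScience.2017.79}
}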

Abstract

It is well accepted that we learn hard lessons when implementing and re-evaluating systems, yet it is also acknowledged that science faces a crisis in reproducibility. Experimental computer science is far from immune, although reproducibility should be easier for CS than for other sciences, given its emphasis on experimental artifacts such as source code, data sets, workflows, and parameters. The data management community pioneered methods at ACM SIGMOD 2007 and 2008 to encourage and incentivize authors to improve their software development and experimental practices. Now, after 10 years, the broader CS community has started to adopt Artifact Evaluation (AE) to review artifacts along with papers. In this paper, we examine how AE has incentivized authors, and whether the process is having a measurable impact. Our answer can help guide CS, and more broadly, other computationally oriented sciences, in encouraging peer review of software artifacts and developing additional community practices for incentives.
