Abstract
The statistical test power of conventional experimental designs, such as the central composite design (CCD), has been studied extensively. However, these standard designs are not applicable in all situations. When the experimental effort or the available design space is restricted, optimal experimental designs offer an alternative, but their statistical test power properties are not well documented. Statistical power indicates how effectively underlying effects are detected. This study quantifies the statistical test power of A-, D-, and V-optimal designs using Monte Carlo simulations. The findings demonstrate that optimal designs are a viable alternative to standard designs, showing superior power for detecting effects compared to a CCD. However, reducing the number of runs has a clear negative impact on test power, which must be considered carefully in testing applications. Differences between the optimal designs can also be identified with respect to effect detection: they identify main, interaction, and quadratic effects with varying effectiveness and, thus, with different power. A qualitative relationship between optimality criteria and test power was observed, in that improved optimality criteria indicate higher power. However, there is insufficient evidence that improved optimality will yield acceptable test power.
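As an illustration of the Monte Carlo approach described above, the sketch below estimates the per-term test power of a quadratic response-surface model on a given design matrix. The design (a face-centred CCD for two factors), the assumed effect sizes, the noise level, and the significance level are illustrative assumptions, not values taken from the study.

```python
# Minimal sketch: Monte Carlo estimate of per-coefficient test power for a
# given design. All numerical settings below are hypothetical assumptions.
import numpy as np
from scipy import stats

def monte_carlo_power(X, beta, sigma=1.0, alpha=0.05, n_sim=5000, seed=None):
    """Estimate the power of two-sided t-tests on each model coefficient.

    X     : (n_runs, n_params) model matrix of the design (incl. intercept)
    beta  : assumed true coefficients (effect sizes)
    sigma : standard deviation of the normal noise
    alpha : significance level of the test
    n_sim : number of simulated experiments
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    rejections = np.zeros(p)
    for _ in range(n_sim):
        y = X @ beta + rng.normal(0.0, sigma, size=n)   # simulate responses
        b_hat = XtX_inv @ X.T @ y                       # OLS estimate
        resid = y - X @ b_hat
        s2 = resid @ resid / (n - p)                    # error variance estimate
        se = np.sqrt(s2 * np.diag(XtX_inv))             # standard errors
        p_vals = 2 * stats.t.sf(np.abs(b_hat / se), df=n - p)
        rejections += p_vals < alpha
    return rejections / n_sim                           # empirical power per term

# Example: hypothetical 2-factor face-centred CCD (4 factorial, 4 axial,
# 1 centre run) with intercept, main, interaction, and quadratic terms.
pts = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                [-1, 0], [1, 0], [0, -1], [0, 1], [0, 0]], dtype=float)
x1, x2 = pts[:, 0], pts[:, 1]
X = np.column_stack([np.ones(len(pts)), x1, x2, x1 * x2, x1**2, x2**2])
beta = np.array([0.0, 1.0, 1.0, 0.5, 0.5, 0.5])  # assumed effect sizes
print(monte_carlo_power(X, beta, sigma=1.0, seed=0))
```

The same routine can be applied to an A-, D-, or V-optimal design by swapping in its model matrix, which allows the empirical power of the different designs to be compared under identical assumptions.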