Abstract

Approximate computing algorithms cover a wide range of applications, and the boundaries to neighboring domains are sometimes blurred: variable-precision computing, where the precision of the computations can be adapted online to the needs of the application [1, 2], as well as probabilistic and stochastic computing [3], which incorporate stochastic processes and probability distributions into the target computations. The central idea of purely algorithm-based approximate computing is to transform algorithms, without necessarily requiring approximate hardware, to trade off accuracy against energy. Early termination of algorithms that exhibit incremental refinement [4] reduces the number of iterations at the cost of accuracy. Loop perforation [5] approximates iteratively computed results by identifying and skipping loop iterations that contribute only insignificantly to the solution. Another group of approximate algorithms is represented by neural networks, which can be trained to mimic certain algorithms and to compute approximate results [6]. Today, approximate computing is predominantly proposed for applications in multimedia and signal processing with a certain degree of inherent error tolerance. However, to fully utilize the benefits of these architectures, the scope of applications has to be extended significantly to other compute-intensive tasks, for instance in science and engineering. Such an extension requires that the allowed error or the required minimum precision of the application is either known beforehand or reliably determined online to deliver trustworthy and useful results. Errors outside the allowed range have to be reliably detected and tackled by appropriate fault-tolerance measures.
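The loop-perforation idea mentioned above can be illustrated with a minimal sketch (not taken from the cited work): a summation loop executes only every k-th iteration and rescales the partial result, trading accuracy for a proportional reduction in iteration count. The function name and perforation scheme here are illustrative assumptions, not an API from the referenced papers.

```python
def perforated_sum(values, perforation=1):
    """Sum `values`, executing only every `perforation`-th iteration.

    perforation=1 reproduces the exact loop; larger values skip
    iterations and rescale the partial result to compensate, which
    is the basic accuracy/effort trade-off of loop perforation.
    """
    total = 0.0
    executed = 0
    for i in range(0, len(values), perforation):
        total += values[i]
        executed += 1
    # Rescale the partial sum so it estimates the full-loop result.
    return total * (len(values) / executed)

# Example: perforation=4 runs ~4x fewer iterations with a small error.
values = [x * 0.001 for x in range(10000)]
exact = perforated_sum(values, perforation=1)
approx = perforated_sum(values, perforation=4)
rel_error = abs(exact - approx) / exact
```

Whether such an error is acceptable depends on the application; as the abstract notes, the allowed error must be known or determined online before perforation can be applied safely.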
