Results: 2
(searched for: doi:10.1145/3297858.3304019)
Subho S. Banerjee, Saurabh Jha, Zbigniew Kalbarczyk, Ravishankar K. Iyer
Proceedings of the 26th ACM International Conference on Architectural Support for Programming Languages and Operating Systems; https://doi.org/10.1145/3445814.3446739

Abstract:
Hardware performance counters (HPCs) that measure low-level architectural and microarchitectural events provide dynamic contextual information about the state of the system. However, HPC measurements are error-prone due to nondeterminism (e.g., undercounting due to event multiplexing, or OS interrupt-handling behavior). In this paper, we present BayesPerf, a system for quantifying uncertainty in HPC measurements by using a domain-driven Bayesian model that captures microarchitectural relationships between HPCs to jointly infer their values as probability distributions. We provide the design and implementation of an accelerator that allows for low-latency and low-power inference of the BayesPerf model for x86 and ppc64 CPUs. BayesPerf reduces the average error in HPC measurements from 40.1% to 7.6% when events are being multiplexed. The value of BayesPerf in real-time decision-making is illustrated with a simple example of scheduling PCIe transfers.
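To illustrate the core idea in this abstract — inferring a counter's value as a probability distribution rather than a single extrapolated number — here is a minimal sketch. It is not BayesPerf's actual model (which jointly models many related HPCs): it treats one multiplexed counter as a latent true count observed through scaled, noisy measurement windows and fuses them with a conjugate Gaussian update. The function name and parameters are illustrative, not from the paper.

```python
# Sketch (NOT BayesPerf's model): a multiplexed HPC is only active for a
# fraction f of the measurement interval, so each window observes
# y_k ~ N(f * x, obs_var) for latent true count x. A conjugate Gaussian
# update yields a posterior distribution over x instead of the usual
# point estimate obtained by dividing the raw count by f.

def fuse_multiplexed(prior_mean, prior_var, observations, obs_var, fraction):
    """Posterior (mean, variance) of x given y_k ~ N(fraction * x, obs_var)."""
    precision = 1.0 / prior_var        # accumulated posterior precision
    weighted = prior_mean / prior_var  # accumulated precision-weighted mean
    for y in observations:
        precision += fraction ** 2 / obs_var
        weighted += fraction * y / obs_var
    post_var = 1.0 / precision
    return weighted * post_var, post_var

# Counter scheduled onto the PMU for 25% of the interval, sampled twice,
# with a vague prior: the posterior centers on the full-interval count (~100)
# and its variance quantifies the uncertainty left by multiplexing.
mean, var = fuse_multiplexed(0.0, 1e12, [25.0, 25.0], 1.0, 0.25)
```

The posterior variance is what a point-estimate extrapolation discards; downstream decisions (e.g., the PCIe-scheduling example above) can weigh it.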
Xiangyu Zhang, Ramin Bashizade, Yicheng Wang, Alvin R. Lebeck
Proceedings of the 26th ACM International Conference on Architectural Support for Programming Languages and Operating Systems; https://doi.org/10.1145/3445814.3446697

Abstract:
Statistical machine learning often uses probabilistic models and algorithms, such as Markov Chain Monte Carlo (MCMC), to solve a wide range of problems. Probabilistic computations, often considered too slow on conventional processors, can be accelerated with specialized hardware by exploiting parallelism and optimizing the design using various approximation techniques. Current methodologies for evaluating correctness of probabilistic accelerators are often incomplete, mostly focusing only on end-point result quality ("accuracy"). It is important for hardware designers and domain experts to look beyond end-point "accuracy" and be aware of how hardware optimizations impact statistical properties. This work takes a first step toward defining metrics and a methodology for quantitatively evaluating correctness of probabilistic accelerators. We propose three pillars of statistical robustness: 1) sampling quality, 2) convergence diagnostic, and 3) goodness of fit. We apply our framework to a representative MCMC accelerator and surface design issues that cannot be exposed using only application end-point result quality. We demonstrate the benefits of this framework to guide design space exploration in a case study showing that statistical robustness comparable to floating-point software can be achieved with limited precision, avoiding floating-point hardware overheads.
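One of the three pillars named in this abstract, the convergence diagnostic, can be sketched with the standard Gelman-Rubin statistic: run several chains, compare between-chain and within-chain variance, and flag non-convergence when the ratio is far from 1. This is a generic illustration of the pillar under common MCMC practice, not the paper's specific methodology.

```python
# Gelman-Rubin R-hat: compares between-chain variance (B) to within-chain
# variance (W) across m chains of length n. Values near 1 indicate the
# chains have mixed; large values flag non-convergence that end-point
# "accuracy" alone might miss.

def gelman_rubin(chains):
    m = len(chains)
    n = len(chains[0])
    means = [sum(c) / n for c in chains]
    grand = sum(means) / m
    # Between-chain variance of the per-chain means.
    B = n / (m - 1) * sum((mu - grand) ** 2 for mu in means)
    # Average within-chain sample variance.
    W = sum(sum((x - mu) ** 2 for x in c) / (n - 1)
            for c, mu in zip(chains, means)) / m
    var_hat = (n - 1) / n * W + B / n  # pooled variance estimate
    return (var_hat / W) ** 0.5

# Two well-mixed chains: R-hat ~ 1. Two chains stuck in different modes
# (e.g., an over-aggressive hardware approximation): R-hat >> 1.
mixed = [[0.0, 1.0] * 5, [1.0, 0.0] * 5]
stuck = [[0.0, 0.1] * 5, [1.0, 1.1] * 5]
```

A hardware-oriented evaluation would compute such diagnostics on the accelerator's sample streams and compare against a floating-point software baseline.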