Exploiting processor heterogeneity for energy efficient context inference on mobile phones

Abstract
In recent years we have seen the emergence of context-aware mobile sensing apps that employ machine learning algorithms on real-time sensor data to infer user behaviors and contexts. These apps are typically optimized for power and performance on the app processors of mobile platforms. However, modern mobile platforms are sophisticated systems-on-chip (SoCs) in which the main app processors are complemented by multiple co-processors. Recently, chip vendors have undertaken nascent efforts to make these previously hidden co-processors, such as digital signal processors (DSPs), programmable. In this paper, we explore the energy and performance implications of off-loading the computation associated with machine learning algorithms in context-aware apps to DSPs embedded in mobile SoCs. Our results show a 17% reduction in the energy usage of a TI OMAP4-based mobile platform from off-loading context classification computation to the DSP core, with indiscernible latency overhead. We also describe the design of a run-time system service for energy-efficient context inference on Android devices, which takes parameters from the app to instantiate the classification model and schedules its execution on the DSP or the app processor as specified by the app.
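To make the service model concrete, the following is a minimal sketch (plain Java, not the authors' code) of the interface the abstract describes: the app supplies parameters from which the classification model is instantiated and names the processor on which it wants the execution scheduled. All class, field, and method names (ContextInferenceSketch, ClassifierSpec, schedule) and the DSP-availability fallback are illustrative assumptions, not the paper's actual API.

    // Hypothetical sketch of the run-time service described in the abstract.
    public final class ContextInferenceSketch {

        // Execution targets named in the abstract: the app processor or the DSP co-processor.
        enum ExecutionTarget { APP_PROCESSOR, DSP }

        // Parameters an app might pass so the service can instantiate a classification model.
        // Field names are illustrative assumptions.
        static final class ClassifierSpec {
            final String modelType;       // e.g. "decision-tree"
            final double[] modelParams;   // trained model parameters supplied by the app
            final int samplingRateHz;     // sensor sampling rate for the inference pipeline
            final ExecutionTarget target; // where the app asks the classification to run

            ClassifierSpec(String modelType, double[] modelParams,
                           int samplingRateHz, ExecutionTarget target) {
                this.modelType = modelType;
                this.modelParams = modelParams;
                this.samplingRateHz = samplingRateHz;
                this.target = target;
            }
        }

        // Decides where to run the model: honor the app's request, but fall back to the
        // app processor when the DSP runtime is unavailable (an assumption, not from the paper).
        static ExecutionTarget schedule(ClassifierSpec spec, boolean dspAvailable) {
            if (spec.target == ExecutionTarget.DSP && dspAvailable) {
                return ExecutionTarget.DSP;
            }
            return ExecutionTarget.APP_PROCESSOR;
        }

        public static void main(String[] args) {
            ClassifierSpec spec = new ClassifierSpec(
                    "decision-tree", new double[] {0.42, 1.7}, 50, ExecutionTarget.DSP);
            System.out.println("Scheduled on: " + schedule(spec, true));   // DSP
            System.out.println("Scheduled on: " + schedule(spec, false));  // APP_PROCESSOR
        }
    }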
Funding Information
  • National Science Foundation (0910706, 0905580, 1029030)
