Power/Performance Modeling and Optimization: Using and Characterizing Machine Learning Applications
2018-10-17T16:28:47Z (GMT)
Energy and power are the main design constraints for modern high-performance computing systems. Indeed, energy efficiency plays a critical role in improving performance and saving energy, both for state-of-the-art general-purpose hardware platforms, such as FinFET-based multi-core systems, and for widely adopted applications such as deep learning applications, in particular convolutional neural networks. To achieve higher energy efficiency, power and performance models are key enablers of various predictive management algorithms and optimization techniques. To build accurate models, one needs to consider not only technology-related effects, including process variation, temperature effect inversion, and aging, but also application-related effects, such as the interaction of applications with the software and hardware layers.

In this thesis, we study these effects and propose to combine machine learning techniques with domain knowledge to learn performance, power, and energy models for high-performance computing systems. For technology-aware multi-core system design, we learn accurate performance and power models for FinFET-based multi-core systems that account for various technology effects. Applying these models, we propose efficient power- and performance-management algorithms for multi-core systems that 1) increase performance under iso-power constraints; 2) reduce power while maintaining the same performance; and 3) mitigate aging effects with negligible power overhead at the same performance. For application-aware computing system design, we propose a hierarchical framework based on sparse polynomial regression to predict the serving power, runtime, and energy consumption of deep learning applications, including convolutional neural networks. Extensive experimental results confirm the effectiveness of our proposed models, algorithms, and framework.
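To make the modeling idea concrete, the sketch below shows what a sparse polynomial regression for runtime prediction might look like in miniature: degree-2 polynomial feature expansion followed by a coordinate-descent Lasso, implemented in plain NumPy. This is an illustrative toy, not the thesis's actual hierarchical framework; the synthetic "layer parameter" features, the regularization strength, and all function names here are assumptions made for the example.

```python
import numpy as np

def polynomial_features(X, degree=2):
    """Expand X (n, d) into monomials up to degree 2: 1, x_i, x_i*x_j."""
    n, d = X.shape
    cols = [np.ones(n)]
    cols.extend(X[:, i] for i in range(d))
    for i in range(d):
        for j in range(i, d):
            cols.append(X[:, i] * X[:, j])
    return np.column_stack(cols)

def lasso_cd(Phi, y, lam=0.05, iters=200):
    """Coordinate-descent Lasso: minimize (1/2n)||y - Phi w||^2 + lam*||w||_1.
    The L1 penalty drives most polynomial coefficients exactly to zero,
    which is what makes the learned model 'sparse'."""
    n, p = Phi.shape
    w = np.zeros(p)
    col_sq = (Phi ** 2).sum(axis=0) / n
    for _ in range(iters):
        for j in range(p):
            if col_sq[j] == 0:
                continue
            # residual with feature j's current contribution added back
            r = y - Phi @ w + Phi[:, j] * w[j]
            rho = Phi[:, j] @ r / n
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return w

# Synthetic data: a "runtime" that depends sparsely on a few hypothetical,
# rescaled layer parameters (the interpretation of the columns is invented).
rng = np.random.default_rng(0)
X = rng.uniform(0.5, 2.0, size=(200, 4))
y = 3.0 * X[:, 0] + 1.5 * X[:, 0] * X[:, 1] + 0.1 * rng.standard_normal(200)

Phi = polynomial_features(X)
w = lasso_cd(Phi, y)
nonzero = int((np.abs(w) > 1e-3).sum())
print("nonzero terms:", nonzero, "of", w.size)
print("max abs prediction error:", float(np.abs(Phi @ w - y).max()))
```

The recovered model keeps only a handful of the 15 candidate polynomial terms, mirroring how a sparse regression can expose which application parameters actually drive power or runtime.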