

Talks and Poster Presentations (with Proceedings-Entry):

M. Shafique, R. Hafiz, M. Javed, S. Abbas, L. Sekanina, Z. Vasicek, V. Mrazek:
"Adaptive and Energy-Efficient Architectures for Machine Learning: Challenges, Opportunities, and Research Roadmap";
Talk: 2017 IEEE Computer Society Annual Symposium on VLSI (ISVLSI'17); 2017-07-03 - 2017-07-05; in: "Proceedings of the 2017 IEEE Computer Society Annual Symposium on VLSI (ISVLSI'17)", IEEE, (2017), ISSN: 2159-3477; 617 - 632.



English abstract:
Gigantic rates of data production in the era of Big Data, the Internet of Things (IoT) / Internet of Everything (IoE), and Cyber-Physical Systems (CPS) pose incessantly escalating demands for massive data processing, storage, and transmission, while continuously interacting with the physical world under unpredictable, harsh, and energy-/power-constrained scenarios. Therefore, such systems need to provide not only high performance within a tight power/energy envelope, but also need to be intelligent/cognitive, self-learning, and robust. As a result, a surge of artificial intelligence research (e.g., deep learning and other machine learning techniques) has emerged in numerous communities. This paper discusses the challenges and opportunities in building energy-efficient and adaptive architectures for machine learning. In particular, we focus on brain-inspired emerging computing paradigms, such as approximate computing, that can further reduce the energy requirements of the system. First, we walk through an approximate-computing-based methodology for developing energy-efficient accelerators, specifically for convolutional Deep Neural Networks (DNNs). We show that an in-depth analysis of the datapaths of a DNN allows a better selection of approximate computing modules for energy-efficient accelerators. Further, we show that a multi-objective evolutionary algorithm can be used to develop an adaptive machine learning system in hardware. Finally, we summarize the challenges and the associated research roadmap that can aid in developing energy-efficient and adaptable hardware accelerators for machine learning.
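
To make the abstract's idea of a multi-objective search over approximate modules concrete, the following Python sketch shows a simple Pareto-based evolutionary loop that assigns an approximate multiplier variant to each layer of a hypothetical accelerator datapath. It is an illustration only, not the authors' tool flow: the multiplier library, its energy/error figures, the number of layers, and the additive error model are all made-up assumptions.

    # Minimal sketch: Pareto-based evolutionary selection of approximate
    # multipliers per DNN layer, trading energy against error.
    # All numbers below are hypothetical placeholders, not measured data.
    import random

    # Hypothetical library of multiplier variants:
    # (name, relative energy per operation, mean relative error)
    MULTIPLIER_LIBRARY = [
        ("exact",    1.00, 0.000),
        ("approx_a", 0.78, 0.004),
        ("approx_b", 0.61, 0.012),
        ("approx_c", 0.47, 0.031),
        ("approx_d", 0.35, 0.070),
    ]

    NUM_LAYERS = 6      # hypothetical conv layers; each gets one multiplier type
    POP_SIZE = 40
    GENERATIONS = 50

    def evaluate(config):
        """Return (energy, error) for a per-layer multiplier assignment."""
        energy = sum(MULTIPLIER_LIBRARY[g][1] for g in config)
        # Toy error model: per-layer errors accumulate additively.
        error = sum(MULTIPLIER_LIBRARY[g][2] for g in config)
        return energy, error

    def dominates(a, b):
        """True if objective vector a Pareto-dominates b (both minimized)."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(population):
        """Keep only configurations not dominated by any other."""
        scored = [(cfg, evaluate(cfg)) for cfg in population]
        return [cfg for cfg, obj in scored
                if not any(dominates(other, obj) for _, other in scored)]

    def mutate(config):
        """Swap the multiplier type of one randomly chosen layer."""
        child = list(config)
        child[random.randrange(NUM_LAYERS)] = random.randrange(len(MULTIPLIER_LIBRARY))
        return tuple(child)

    def evolve():
        random.seed(0)
        population = [tuple(random.randrange(len(MULTIPLIER_LIBRARY))
                            for _ in range(NUM_LAYERS)) for _ in range(POP_SIZE)]
        for _ in range(GENERATIONS):
            parents = pareto_front(population)
            offspring = [mutate(random.choice(parents)) for _ in range(POP_SIZE)]
            population = list(set(parents + offspring))
        return sorted(pareto_front(population), key=evaluate)

    if __name__ == "__main__":
        for cfg in evolve():
            energy, error = evaluate(cfg)
            print(f"layers={cfg} energy={energy:.2f} error={error:.3f}")

Running the script prints the surviving Pareto front, i.e. the per-layer multiplier assignments for which no other found configuration is both cheaper in energy and more accurate, which is the kind of energy/quality trade-off curve such a methodology produces.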

Keywords:
machine learning, approximate computing, deep learning, neural networks, energy efficiency, performance, low power, accelerators, architecture, memory, FPGA, CGRA, adaptive, roadmap


"Official" electronic version of the publication (accessed through its Digital Object Identifier - DOI)
http://dx.doi.org/10.1109/ISVLSI.2017.124