Thursday, 6.12.2012, 12:30
Evolving technology constraints, especially energy constraints that push toward heterogeneous multicores, together with the increasing number of transient faults and permanent defects, may make the design of defect-tolerant accelerators for heterogeneous multicores a major micro-architecture research issue.
Most custom circuits are highly defect-sensitive: a single faulty transistor can wreck such a circuit. Neural networks (NNs), by contrast, are inherently error-tolerant algorithms. Moreover, the emergence of high-performance applications implementing recognition, mining, and synthesis (RMS) tasks, for which competitive NN-based algorithms exist, drastically expands the potential application scope of a hardware NN accelerator.
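The error tolerance mentioned above can be illustrated with a small numerical experiment (a hypothetical sketch, not taken from the talk): zeroing out any single weight of a random feed-forward network, a crude model of a permanent stuck-at defect, only perturbs the output rather than breaking the computation outright.

```python
import numpy as np

# Illustrative sketch (not from the talk): a tiny feed-forward network
# degrades gracefully when individual weights are knocked out, unlike a
# custom circuit where one faulty transistor can break the whole output.
rng = np.random.default_rng(0)

# Random 2-layer network: 8 inputs -> 16 hidden units -> 4 outputs.
W1 = rng.normal(0.0, 0.5, (16, 8))
W2 = rng.normal(0.0, 0.5, (4, 16))
x = rng.normal(0.0, 1.0, 8)

def forward(w1, w2, x):
    return w2 @ np.tanh(w1 @ x)

baseline = forward(W1, W2, x)

# Knock out each first-layer weight in turn (simulated stuck-at-zero
# defect) and record how far the output drifts from the fault-free run.
deviations = []
for i in range(W1.shape[0]):
    for j in range(W1.shape[1]):
        faulty = W1.copy()
        faulty[i, j] = 0.0  # single-weight permanent defect
        deviations.append(np.abs(forward(faulty, W2, x) - baseline).max())

print(f"max output deviation over {len(deviations)} single-weight faults: "
      f"{max(deviations):.3f}")
```

Every one of the 128 single-fault runs still produces a finite, usable output; the network's distributed representation absorbs the defect, which is exactly the property a defect-tolerant hardware accelerator would exploit.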
In this talk, we argue that hardware neural network accelerators are not an esoteric proposition: they are well aligned with the current and upcoming constraints of our domain, and they have a broad potential application scope. We will also present some initial hardware neural network designs, including a recently taped-out design, as well as upcoming designs based not only on classic digital CMOS technology but also on analog logic, 3D stacking, and memristors.
Olivier Temam is a senior research fellow at INRIA in Paris-Saclay, where he heads the BYMOORE ("BeYond MOORE") group, and an adjunct professor at Ecole Polytechnique. He previously led the ALCHEMY group at INRIA from 2004 to 2010, and before that he was a full professor at the University of Paris Sud. His research spans micro-architecture, simulation, compilation, and programming models. In recent years, he has focused on defect-tolerant and energy-efficient accelerators implemented using CMOS or alternative technologies, with a special emphasis on hardware neural network accelerators; he gave a keynote at ISCA 2010 on this topic.