Embedded Deep Learning
Algorithms, Architectures and Circuits for Always-on Neural Network Processing
Book by Bert Moons (et al.)
Language: English

€139.09*

incl. VAT

Free shipping via Post / DHL

Currently unavailable

Description
This book covers algorithmic and hardware implementation techniques for embedded deep learning. The authors describe synergetic design approaches at the application, algorithmic, computer-architecture, and circuit levels that help reduce the computational cost of deep learning algorithms. The impact of these techniques is demonstrated in four silicon prototypes for embedded deep learning.

Gives a broad overview of effective solutions for energy-efficient neural networks on battery-constrained wearable devices;

Discusses the optimization of neural networks for embedded deployment at all levels of the design hierarchy (applications, algorithms, hardware architectures, and circuits), supported by real silicon prototypes;

Elaborates on how to design efficient Convolutional Neural Network processors, exploiting parallelism and data reuse, sparse operations, and low-precision computations;

Supports the introduced theory and design concepts with four real silicon prototypes. The implementation and achieved performance of each physical realization are discussed in detail to illustrate and highlight the introduced cross-layer design concepts.
About the Authors

Dr. ir. Bert Moons received the B.S., M.S., and Ph.D. degrees in Electrical Engineering from KU Leuven, Leuven, Belgium, in 2011, 2013, and 2018, respectively. He performed his Ph.D. research at ESAT-MICAS as an IWT-funded Research Assistant, focusing on energy-scalable and run-time adaptable digital circuits for embedded deep learning applications. Bert has authored more than 15 conference and journal publications, was a Visiting Research Student at Stanford University in the Murmann Mixed-Signal Group, and received the SSCS Predoctoral Achievement Award in 2018. He is currently with Synopsys as a hardware design architect for the DesignWare EV6x Embedded Vision and Deep Learning processors.

Daniel Bankman received the S.B. degree in electrical engineering from the Massachusetts Institute of Technology, Cambridge, MA, in 2012 and the M.S. degree from Stanford University, Stanford, CA, in 2015. Since 2012, he has been working toward the Ph.D. degree at Stanford University, focusing on mixed-signal processing for machine learning. He has held internship positions with Analog Devices and Intel. His research interests include algorithms, architectures, and circuits for energy-efficient learning and inference in smart devices. He was a recipient of the Texas Instruments Stanford Graduate Fellowship in 2012, the Numerical Technologies Founders Prize in 2013, and the John von Neumann Student Research Award in 2015 and 2017.

Prof. Dr. ir. Marian Verhelst is a professor at the MICAS laboratories (MICro-electronics And Sensors) of the Electrical Engineering Department of KU Leuven. Her research focuses on embedded machine learning, energy-efficient hardware accelerators, self-adaptive circuits and systems, and low-power sensing and processing. Prior to her professorship, she received her PhD from KU Leuven cum ultima laude, was a visiting scholar at the Berkeley Wireless Research Center (BWRC) of UC Berkeley, and worked as a research scientist at Intel Labs, Hillsboro, OR. Prof. Verhelst is a member of the DATE conference executive committee and was a member of the ESSCIRC and ISSCC TPCs and of the ISSCC executive committee. She is an SSCS Distinguished Lecturer, was a member of the Young Academy of Belgium, has served as an associate editor for TCAS-II and JSSC, and is a member of the STEM advisory committee to the Flemish Government. She holds a prestigious ERC Grant from the European Union.


Table of Contents

Chapter 1: Embedded Deep Neural Networks
Chapter 2: Optimized Hierarchical Cascaded Processing
Chapter 3: Hardware-Algorithm Co-optimizations
Chapter 4: Circuit Techniques for Approximate Computing
Chapter 5: ENVISION: Energy-Scalable Sparse Convolutional Neural Network Processing
Chapter 6: BINAREYE: Digital and Mixed-signal Always-on Binary Neural Network Processing
Chapter 7: Conclusions, Contributions and Future Work

Details
Publication year: 2018
Subject area: Communications engineering
Genre: Technology
Category: Science & Technology
Medium: Book
Contents: xvi, 206 pages, 124 illustrations (32 b/w, 92 in color)
ISBN-13: 9783319992228
ISBN-10: 3319992228
Language: English
Manufacturer number: 978-3-319-99222-8
Finish: Hardcover with rounded, laminated spine
Binding: Hardcover
Authors: Moons, Bert; Verhelst, Marian; Bankman, Daniel
Edition: 1st ed. 2019
Publisher: Springer International Publishing (Springer International Publishing AG)
Dimensions: 241 x 160 x 18 mm
By: Bert Moons (et al.)
Publication date: 03.11.2018
Weight: 0.506 kg
Article ID: 114096511