Preface
Acknowledgment
1 Overview
1.1 History of Neural Networks
1.2 Neural Networks in Software
1.2.1 Artificial Neural Network
1.2.2 Spiking Neural Network
1.3 Need for Neuromorphic Hardware
1.4 Objectives and Outlines of the Book
References
2 Fundamentals and Learning of Artificial Neural Networks
2.1 Operational Principles of Artificial Neural Networks
2.1.1 Inference
2.1.2 Learning
2.2 Neural Network Based Machine Learning
2.2.1 Supervised Learning
2.2.2 Reinforcement Learning
2.2.3 Unsupervised Learning
2.2.4 Case Study: Action-Dependent Heuristic Dynamic Programming
2.2.4.1 Actor-Critic Networks
2.2.4.2 On-Line Learning Algorithm
2.2.4.3 Virtual Update Technique
2.3 Network Topologies
2.3.1 Fully Connected Neural Networks
2.3.2 Convolutional Neural Networks
2.3.3 Recurrent Neural Networks
2.4 Dataset and Benchmarks
2.5 Deep Learning
2.5.1 Pre-Deep-Learning Era
2.5.2 The Rise of Deep Learning
2.5.3 Deep Learning Techniques
2.5.3.1 Performance-Improving Techniques
2.5.3.2 Energy-Efficiency-Improving Techniques
2.5.4 Deep Neural Network Examples
References
3 Artificial Neural Networks in Hardware
3.1 Overview
3.2 General-Purpose Processors
3.3 Digital Accelerators
3.3.1 A Digital ASIC Approach
3.3.1.1 Optimization on Data Movement and Memory Access
3.3.1.2 Scaling Precision
3.3.1.3 Leveraging Sparsity
3.3.2 FPGA-Based Accelerators
3.4 Analog/Mixed-Signal Accelerators
3.4.1 Neural Networks in Conventional Integrated Technology
3.4.1.1 In/Near-Memory Computing
3.4.1.2 Near-Sensor Computing
3.4.2 Neural Network Based on Emerging Non-volatile Memory
3.4.2.1 Crossbar as a Massively Parallel Engine
3.4.2.2 Learning in a Crossbar
3.4.3 Optical Accelerator
3.5 Case Study: An Energy-Efficient Accelerator for Adaptive Dynamic Programming
3.5.1 Hardware Architecture
3.5.1.1 On-Chip Memory
3.5.1.2 Datapath
3.5.1.3 Controller
3.5.2 Design Examples
References
4 Operational Principles and Learning in Spiking Neural Networks
4.1 Spiking Neural Networks
4.1.1 Popular Spiking Neuron Models
4.1.1.1 Hodgkin-Huxley Model
4.1.1.2 Leaky Integrate-and-Fire Model
4.1.1.3 Izhikevich Model
4.1.2 Information Encoding
4.1.3 Spiking Neuron versus Non-Spiking Neuron
4.2 Learning in Shallow SNNs
4.2.1 ReSuMe
4.2.2 Tempotron
4.2.3 Spike-Timing-Dependent Plasticity
4.2.4 Learning Through Modulating Weight-Dependent STDP in Two-Layer Neural Networks
4.2.4.1 Motivations
4.2.4.2 Estimating Gradients with Spike Timings
4.2.4.3 Reinforcement Learning Example
4.3 Learning in Deep SNNs
4.3.1 SpikeProp
4.3.2 Stack of Shallow Networks
4.3.3 Conversion from ANNs
4.3.4 Recent Advances in Backpropagation for Deep SNNs
4.3.5 Learning Through Modulating Weight-Dependent STDP in Multilayer Neural Networks
4.3.5.1 Motivations
4.3.5.2 Learning Through Modulating Weight-Dependent STDP
4.3.5.3 Simulation Results
References
5 Hardware Implementations of Spiking Neural Networks
5.1 The Need for Specialized Hardware
5.1.1 Address-Event Representation
5.1.2 Event-Driven Computation
5.1.3 Inference with a Progressive Precision
5.1.4 Hardware Considerations for Implementing the Weight-Dependent STDP Learning Rule
5.1.4.1 Centralized Memory Architecture
5.1.4.2 Distributed Memory Architecture
5.2 Digital SNNs
5.2.1 Large-Scale SNN ASICs
5.2.1.1 SpiNNaker
5.2.1.2 TrueNorth
5.2.1.3 Loihi
5.2.2 Small/Moderate-Scale Digital SNNs
5.2.2.1 Bottom-Up Approach
5.2.2.2 Top-Down Approach
5.2.3 Hardware-Friendly Reinforcement Learning in SNNs
5.2.4 Hardware-Friendly Supervised Learning in Multilayer SNNs
5.2.4.1 Hardware Architecture
5.2.4.2 CMOS Implementation Results
5.3 Analog/Mixed-Signal SNNs
5.3.1 Basic Building Blocks
5.3.2 Large-Scale Analog/Mixed-Signal CMOS SNNs
5.3.2.1 CAVIAR
5.3.2.2 BrainScaleS
5.3.2.3 Neurogrid
5.3.3 Other Analog/Mixed-Signal CMOS SNN ASICs
5.3.4 SNNs Based on Emerging Nanotechnologies
5.3.4.1 Energy-Efficient Solutions
5.3.4.2 Synaptic Plasticity
5.3.5 Case Study: Memristor Crossbar Based Learning in SNNs
5.3.5.1 Motivations
5.3.5.2 Algorithm Adaptations
5.3.5.3 Non-idealities
5.3.5.4 Benchmarks
References
6 Conclusions
6.1 Outlooks
6.1.1 Brain-Inspired Computing
6.1.2 Emerging Nanotechnologies
6.1.3 Reliable Computing with Neuromorphic Systems
6.1.4 Blending of ANNs and SNNs
6.2 Conclusions
References
A Appendix
A.1 Hopfield Network
A.2 Memory Self-Repair with Hopfield Network
References
Index
NAN ZHENG, PhD, received a B.S. degree in Information Engineering from Shanghai Jiao Tong University, China, in 2011, and an M.S. and a PhD in Electrical Engineering from the University of Michigan, Ann Arbor, USA, in 2014 and 2018, respectively. His research interests include low-power hardware architectures, algorithms, and circuit techniques with an emphasis on machine-learning applications.

PINAKI MAZUMDER, PhD, is a professor in the Department of Electrical Engineering and Computer Science at the University of Michigan, USA. His research interests include CMOS VLSI design, semiconductor memory systems, CAD tools, and circuit designs for emerging technologies, including quantum MOS, spintronics, spoof plasmonics, and resonant tunneling devices.