Duke University; University of Notre Dame; Syracuse University
The biological neocortex is known to have superior energy efficiency and robustness compared to any existing computer. This is partly due to its massively parallel, highly distributed yet asynchronous architecture, and its closely coupled computation and storage. Neurons communicate using action potentials (i.e., spikes) and operate stochastically, which increases resilience to random noise and variations. A bio-inspired implementation with the aforementioned features will achieve superior energy efficiency and robustness compared to traditional synchronous implementations.
The main objective of this project is to develop a holistic approach to bio-inspired deep learning and inference using spiking neural networks (SNNs).
Qinru Qiu (EECS, SU), Yanzhi Wang (EECS, SU), Yiran Chen (ECE, Duke), Yiyu Shi (CSE, ND)
Experimental Plan and Industrial Relevance
Spiking neural networks (SNNs), which use spikes as the basis of computation, are the third generation of neural networks, inspired by biological neuron models. Their neurons work asynchronously in an event-driven manner. Learning in SNNs is based on Spike Timing Dependent Plasticity (STDP), which relies only on information local to individual neurons. The emerging stochastic SNN, which generates spikes as a stochastic process, is not only more biologically plausible but also enhances unsupervised learning and decision making. It further increases the fault tolerance and noise (delay) resilience of the SNN system, as the results no longer depend on the information carried by individual spikes but on the statistics of a group of spikes.
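To illustrate how an STDP rule uses only local spike-timing information, the following is a minimal sketch of a classic pair-based STDP update. The amplitudes and time constant are assumptions for illustration only, not parameters of the proposed system.

```python
import numpy as np

# Pair-based STDP sketch (illustrative; parameter values are assumed).
# The weight update depends only on the relative timing of one pre- and
# one post-synaptic spike, i.e. purely local information.
A_PLUS = 0.01    # potentiation amplitude (assumed)
A_MINUS = 0.012  # depression amplitude (assumed)
TAU = 20.0       # STDP time constant in ms (assumed)

def stdp_dw(t_pre, t_post):
    """Weight change for a single pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:   # pre fires before post -> potentiation (causal)
        return A_PLUS * np.exp(-dt / TAU)
    else:        # post fires before (or with) pre -> depression
        return -A_MINUS * np.exp(dt / TAU)

# Causal pairing strengthens the synapse; anti-causal pairing weakens it,
# and the magnitude of the change decays with the timing gap.
print(stdp_dw(10.0, 15.0) > 0)   # True
print(stdp_dw(15.0, 10.0) < 0)   # True
```

Because the update needs only the spike times of the two neurons a synapse connects, STDP maps naturally onto distributed, in-hardware learning.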
SNNs are especially suitable for hardware implementation due to their potentially very low energy consumption and the sparsity of spike communication. Novel hardware systems such as the IBM TrueNorth neurosynaptic processor have enabled breakthroughs in the design and application of SNNs. However, these systems do not implement in-hardware STDP learning, nor do they exploit the stochastic nature of neurons for further energy and cost reduction. In our preliminary work, a parallel simulation framework for stochastic SNNs has been developed and verified, and an RTL model of the Bayesian neurons in stochastic SNNs has been developed and synthesized. Our next step is to validate its functionality on larger applications and to investigate systematic design optimization methods for hardware neurons and stochastic SNN systems that improve performance, energy efficiency, and scalability. Finally, we will investigate novel nano-devices and memory technologies to reduce hardware cost and improve system efficiency. We will also utilize existing SNN hardware, i.e., the IBM TrueNorth processor, to realize different machine learning applications.
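To show why carrying information in spike statistics (rather than in individual spikes) yields noise resilience, the sketch below models a stochastic neuron whose firing is a Bernoulli approximation of a Poisson process with a sigmoidal rate. The rate function, parameters, and seed are assumptions for illustration and are not taken from our RTL Bayesian neuron model.

```python
import numpy as np

# Stochastic spiking neuron sketch: firing is a Poisson-like process
# whose rate is a sigmoidal function of the membrane potential, so the
# encoded value is recovered from spike statistics, not single spikes.
# All parameter values here are illustrative assumptions.
rng = np.random.default_rng(42)

def spike_train(u, steps=20000, dt=1e-3, r_max=100.0):
    """Bernoulli approximation of Poisson firing at rate r_max*sigmoid(u)."""
    rate = r_max / (1.0 + np.exp(-u))      # instantaneous firing rate (Hz)
    return rng.random(steps) < rate * dt   # boolean spike train

# A higher membrane potential yields a higher empirical firing rate,
# even though every individual spike is random:
high = spike_train(1.0).sum()
low = spike_train(-1.0).sum()
print(high > low)   # True
```

Dropping or delaying a handful of spikes barely shifts the empirical rate, which is the intuition behind the fault tolerance and delay resilience claimed above.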
The proposed research will be of interest to industrial partners that value reliability and cost efficiency, such as companies working in the defense, mobile computing, and sensor and sensor-network industries.
By the end of the first year, the deliverable will be a design flow that maps application-specific neural networks to SNNs and implements them on TrueNorth. The final deliverable of this project will be a holistic system comprising the network model and learning algorithm of the SNN, as well as a hardware implementation of the SNN that includes its learning and inference functions.
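A common strategy for mapping a trained ANN to an SNN is rate coding, where each analog activation is approximated by the firing rate of an integrate-and-fire neuron. The sketch below illustrates this idea under assumed parameters; it is not the deliverable design flow itself.

```python
# Rate-coded ANN-to-SNN mapping sketch (illustrative assumptions only).
# A ReLU activation `a` is approximated by the firing rate of an
# integrate-and-fire neuron driven by a constant input `a` per step.
def if_rate(a, steps=1000, v_th=1.0):
    """Spike rate of an IF neuron; approximates ReLU(a) / v_th."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += max(a, 0.0)      # ReLU: negative input produces no drive
        if v >= v_th:
            spikes += 1
            v -= v_th         # subtractive reset keeps residual charge
    return spikes / steps

print(if_rate(0.3))    # 0.3
print(if_rate(-0.5))   # 0.0
```

The subtractive (rather than zeroing) reset preserves the residual membrane charge, which is why the long-run spike rate converges to the analog activation.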
Milestones and Time-to-Completion
The estimated duration of this project is 3 years. The milestones are listed below.
Year 1: Map ANNs to SNNs, and SNNs onto TrueNorth
Year 2: Simulated learning and inference model; RTL implementation of the SNN
Year 3: Incorporate novel devices and memory into the hardware design
Number of Graduate Students Supported
Total Cost to Completion