Editorial: Understanding and Bridging the Gap Between Neuromorphic Computing and Machine Learning

Abstract
Editorial on the Research Topic: Understanding and Bridging the Gap between Neuromorphic Computing and Machine Learning

On the road toward artificial general intelligence (AGI), two solution paths have been explored: neuroscience-driven neuromorphic computing, such as spiking neural networks (SNNs), and computer-science-driven machine learning, such as artificial neural networks (ANNs). Owing to the availability of data, high-performance processors, effective learning algorithms, and easy-to-use programming tools, ANNs have achieved tremendous breakthroughs in many intelligent applications. Recently, SNNs have also attracted considerable attention owing to their biological plausibility and their potential for energy efficiency (Roy et al., 2019). However, they remain the subject of ongoing debate and skepticism because their accuracy still lags behind that of "standard" ANNs. This performance gap stems from a variety of factors, including learning techniques, benchmarks, programming tools, and execution hardware, none of which are as mature in the SNN domain as in the ANN domain. To this end, we proposed a Research Topic, "Understanding and Bridging the Gap between Neuromorphic Computing and Machine Learning," in Frontiers in Neuroscience and Frontiers in Computational Neuroscience to collect recent research on neuromorphic computing and machine learning that helps understand and bridge this gap. In total, we received 18 submissions and accepted 14. The accepted papers cover learning algorithms, applications, and efficient hardware.

How to train SNN models is key to improving their functionality and thus to bridging the gap with ANN models. Unlike the ANN domain, which has grown rapidly on the strength of sophisticated backpropagation-based learning algorithms, the SNN domain still lacks effective learning algorithms because of its complicated spatiotemporal dynamics and non-differentiable spike activities. Currently, there are two broad categories of learning algorithms for SNNs: unsupervised synaptic plasticity with biological plausibility [e.g., spike-timing-dependent plasticity, STDP (Diehl and Cook, 2015)] and supervised backpropagation with gradient descent [e.g., indirect learning that acquires gradients from an ANN counterpart (Diehl et al., 2015; Sengupta et al., 2019), direct learning that acquires gradients from the SNN itself (Lee et al., 2016; Wu et al., 2018; Gu et al., 2019; Zheng et al., 2021), or a combination of both (Rathi et al., 2020)]. The latter usually achieves higher accuracy and has scaled models up to ImageNet-level tasks. In the future, we look forward to seeing more studies on SNN learning to close the gap.
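To make the direct-training route concrete, the sketch below shows how a surrogate gradient lets backpropagation pass through the non-differentiable spike of a leaky integrate-and-fire (LIF) neuron. It is a minimal PyTorch illustration only: the rectangular surrogate, the decay and threshold constants, and all names are our own assumptions and are not drawn from any of the accepted papers.

import torch
import torch.nn as nn


class SpikeFn(torch.autograd.Function):
    """Heaviside spike in the forward pass, rectangular surrogate in the backward pass."""

    @staticmethod
    def forward(ctx, membrane_potential, threshold):
        ctx.save_for_backward(membrane_potential)
        ctx.threshold = threshold
        return (membrane_potential >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        (membrane_potential,) = ctx.saved_tensors
        # Let gradients pass only in a window of width 0.5 around the threshold.
        surrogate = (torch.abs(membrane_potential - ctx.threshold) < 0.5).float()
        return grad_output * surrogate, None


class LIFLayer(nn.Module):
    """A fully connected layer of leaky integrate-and-fire neurons."""

    def __init__(self, in_features, out_features, decay=0.9, threshold=1.0):
        super().__init__()
        self.fc = nn.Linear(in_features, out_features)
        self.decay = decay
        self.threshold = threshold

    def forward(self, x_seq):
        # x_seq: (time_steps, batch, in_features) spike or real-valued inputs.
        mem = torch.zeros(x_seq.shape[1], self.fc.out_features, device=x_seq.device)
        spikes = []
        for x_t in x_seq:
            mem = self.decay * mem + self.fc(x_t)        # leaky integration
            s_t = SpikeFn.apply(mem, self.threshold)     # non-differentiable spike
            mem = mem * (1.0 - s_t)                      # hard reset after a spike
            spikes.append(s_t)
        return torch.stack(spikes)                       # (time_steps, batch, out_features)


# Toy usage: classify by the average firing rate of the output layer.
T, batch, n_in, n_out = 10, 4, 100, 5
layer = LIFLayer(n_in, n_out)
inputs = (torch.rand(T, batch, n_in) < 0.3).float()      # random Poisson-like spike trains
targets = torch.randint(0, n_out, (batch,))
rates = layer(inputs).mean(dim=0)
loss = nn.functional.cross_entropy(rates, targets)
loss.backward()                                          # gradients flow through the surrogate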
Next, we briefly summarize the recent progress on neural network (especially SNN) learning presented in the accepted papers. Inspired by the curiosity-based learning mechanism of the brain, Shi et al. propose curiosity-based SNN (CBSNN) models. In the first training epoch, novelty estimates for all samples are obtained through bio-plausible synaptic plasticity; next, the samples whose novelty estimates exceed a threshold are repeatedly learned, with their novelty estimates updated, over the following five epochs; then all samples are learned for one more epoch. The last two steps are repeated until convergence. On several small-scale datasets, CBSNNs achieve better accuracy and higher efficiency than conventional voltage-driven plasticity-centric SNNs.

Daram et al. propose ModNet, an efficient dynamic learning system inspired by the neuromodulatory mechanisms of the insect brain. A built-in modulatory unit regulates learning based on the context and internal state of the system. The network with a modulatory trace achieves 98.8% ± 1.16% average accuracy on the Omniglot dataset for the five-way one-shot image classification task while using 20× fewer trainable parameters than state-of-the-art models.

Kaiser, Mostafa et al. introduce deep continuous local learning (DECOLLE), an SNN model equipped with local error functions for online learning. The synaptic plasticity rules are derived from user-defined cost functions and the neural dynamics by leveraging the automatic differentiation capabilities of existing machine learning frameworks. The model demonstrates state-of-the-art performance on the N-MNIST and DvsGesture datasets.

Fang et al. propose a bio-plausible noise structure to optimize the performance of SNNs trained by gradient descent. By deriving the strict saddle condition for synaptic plasticity, they show that the noise helps the optimization escape from saddle points in high-dimensional domains. The accuracy improvement reaches at least 10% on the MNIST and CIFAR10 datasets.

Panda et al. modify the SNN configuration with backward residual connections, stochastic softmax, and hybrid artificial-and-spiking neuronal activations. In this way, previous learning methods are improved to reach comparable accuracy with large efficiency gains over the ANN counterparts.

Unlike the artificial neurons in ANNs, each spiking neuron in an SNN has intrinsic temporal dynamics, which makes it well suited to processing sequential information. In this Research Topic, we accepted two papers that discuss SNN applications. Wu et al. present the first work that applies SNNs to large-vocabulary continuous automatic speech recognition (LVCSR) tasks. Their SNNs achieve accuracies on par with their ANN counterparts while consuming only 10 algorithmic timesteps and 0.68× the total synaptic operations. They integrate the models into the PyTorch-Kaldi Speech Recognition Toolkit for rapid development. Kugele et al. apply SNNs to processing spatiotemporal event streams (e.g., the N-MNIST, CIFAR10-DVS, N-CARS, and DvsGesture datasets). They improve the ANN-to-SNN conversion learning method by introducing connection delays during the pre-training of ANNs to match the propagation delays...
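As background on the ANN-to-SNN conversion route mentioned above (Diehl et al., 2015; Sengupta et al., 2019) and refined by Kugele et al., the sketch below maps a trained ReLU network onto rate-coded integrate-and-fire neurons and reads the class out of the output spike counts. The architecture, threshold, and simulation length are illustrative assumptions and do not reproduce the method of any accepted paper; a real pipeline would first train the ANN and normalize weights or thresholds layer by layer before conversion.

import torch
import torch.nn as nn


def convert_and_run(ann: nn.Sequential, x: torch.Tensor,
                    time_steps: int = 100, threshold: float = 1.0) -> torch.Tensor:
    """Simulate the ReLU network `ann` as a rate-coded integrate-and-fire network."""
    # ReLU modules are skipped: the spiking non-linearity itself replaces them.
    linears = [m for m in ann if isinstance(m, nn.Linear)]
    mems = [torch.zeros(x.shape[0], lin.out_features) for lin in linears]
    counts = torch.zeros(x.shape[0], linears[-1].out_features)

    for _ in range(time_steps):
        layer_input = x                                # constant input current
        for i, lin in enumerate(linears):
            mems[i] = mems[i] + lin(layer_input)       # integrate weighted input
            spikes = (mems[i] >= threshold).float()    # fire when threshold is crossed
            mems[i] = mems[i] - spikes * threshold     # reset by subtraction
            layer_input = spikes
        counts += spikes                               # accumulate output-layer spikes
    return counts / time_steps                         # firing rates approximate ReLU outputs


# Toy usage with an untrained network; a real pipeline would train the ANN first.
ann = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10), nn.ReLU())
x = torch.rand(8, 784)
rates = convert_and_run(ann, x)
predictions = rates.argmax(dim=1)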
