SoC Provides Neural Network Acceleration


BrainChip claims to be the first company to bring a production spiking neural network architecture to market. Called the Akida Neuromorphic System-on-Chip (NSoC), the device is small, low cost and low power, making it well-suited for edge applications such as advanced driver assistance systems (ADAS), autonomous vehicles, drones, vision-guided robotics, surveillance and machine vision systems. Its scalability allows users to network many Akida devices together to perform complex neural network training and inference for markets including agricultural technology, cybersecurity and financial technology.

According to BrainChip CEO Lou DiNardo, Akida, which is Greek for "spike," represents the first in a new breed of hardware solutions for AI, and he expects artificial intelligence at the edge to become as significant and prolific as the microcontroller.

The Akida NSoC uses a pure CMOS logic process, ensuring high yields and low cost. Spiking neural networks (SNNs) are inherently lower power than traditional convolutional neural networks (CNNs), because they replace math-intensive convolutions and back-propagation training with biologically inspired neuron functions and feed-forward training methodologies. BrainChip says its research has determined an optimal neuron model and training methods, delivering high efficiency and accuracy. Each Akida NSoC has effectively 1.2 million neurons and 10 billion synapses, which the company claims is 100 times more efficient than neuromorphic test chips from Intel and IBM. Comparisons to leading CNN accelerator devices show similar gains of an order of magnitude in images per second per watt on industry-standard benchmarks such as CIFAR-10, with comparable accuracy.
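BrainChip has not published the details of its neuron model, but the general idea of event-driven, accumulate-and-fire processing that replaces dense multiply-accumulate convolutions can be sketched with a simple leaky integrate-and-fire neuron in Python. This is an illustrative model only, not the Akida implementation.

import numpy as np

def lif_neuron(input_spikes, weights, threshold=1.0, leak=0.9):
    """Simulate one leaky integrate-and-fire neuron over discrete time steps.

    input_spikes: (timesteps, n_inputs) array of 0/1 spike events.
    weights:      (n_inputs,) synaptic weights.
    Returns a (timesteps,) array of 0/1 output spikes.
    """
    membrane = 0.0
    out = np.zeros(input_spikes.shape[0], dtype=np.uint8)
    for t, spikes in enumerate(input_spikes):
        # Only weighted spike events are accumulated; no dense convolution is computed
        membrane = leak * membrane + np.dot(weights, spikes)
        if membrane >= threshold:
            out[t] = 1        # emit a spike
            membrane = 0.0    # reset after firing
    return out

# Example: 20 time steps, 4 sparsely firing input lines
rng = np.random.default_rng(0)
spikes_in = (rng.random((20, 4)) < 0.2).astype(np.uint8)
print(lif_neuron(spikes_in, np.array([0.4, 0.3, 0.6, 0.2])))

Because computation happens only when spikes arrive, activity (and therefore power) scales with how sparse the input events are, which is the core of the efficiency argument for SNNs.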

The Akida NSoC is designed for use as a stand-alone embedded accelerator or as a co-processor. It includes sensor interfaces for traditional pixel-based imaging, dynamic vision sensors (DVS), Lidar, audio, and analog signals, as well as high-speed data interfaces such as PCI-Express, USB, and Ethernet. Embedded in the NSoC are data-to-spike converters designed to efficiently convert common data formats into spikes for training and processing by the Akida Neuron Fabric.
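The on-chip converters themselves are proprietary hardware, but a common software technique for turning pixel data into spikes is rate coding, sketched below in Python. The rate_encode function is a hypothetical illustration, not part of the Akida Development Environment.

import numpy as np

def rate_encode(image, timesteps=32, max_rate=1.0, seed=0):
    """Rate-code an 8-bit grayscale image into a binary spike train.

    Brighter pixels fire more often: a pixel value of 255 approaches
    `max_rate` spikes per time step, while a value of 0 never fires.
    Returns a (timesteps, H, W) array of 0/1 spike events.
    """
    rng = np.random.default_rng(seed)
    prob = (image.astype(np.float32) / 255.0) * max_rate
    return (rng.random((timesteps,) + image.shape) < prob).astype(np.uint8)

# Example: encode a 4x4 intensity gradient
patch = np.linspace(0, 255, 16, dtype=np.uint8).reshape(4, 4)
spike_train = rate_encode(patch)
print(spike_train.sum(axis=0))  # spike counts roughly track pixel intensity

Event-based sensors such as DVS cameras already emit spikes natively, which is why they map directly onto the neuron fabric without this conversion step.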

Spiking neural networks are inherently feed-forward dataflows, for both training and inference. The Akida neuron model incorporates training methodologies for both supervised and unsupervised learning. In the supervised mode, the initial layers of the network train themselves autonomously, while labels can be applied to the final fully-connected layers, enabling the network to function as a classifier. The Akida NSoC supports either off-chip training in the Akida Development Environment or on-chip training. An on-chip CPU controls the configuration of the Akida Neuron Fabric as well as off-chip communication of metadata.
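BrainChip has not detailed its learning rule, but the flavor of feed-forward, unsupervised weight updates used in many spiking networks can be sketched with a simplified spike-timing-style step. The stdp_step function below is a hypothetical illustration under that assumption, not the Akida training algorithm.

import numpy as np

def stdp_step(weights, pre_spikes, post_spikes, lr=0.01, w_min=0.0, w_max=1.0):
    """One feed-forward, unsupervised weight update in the spirit of STDP.

    Synapses whose input (pre) spiked when the output (post) also spiked are
    strengthened; synapses whose input spiked without a matching output spike
    are weakened. No gradients are back-propagated through earlier layers.
    """
    potentiate = np.outer(post_spikes, pre_spikes)    # coincident activity
    depress = np.outer(1 - post_spikes, pre_spikes)   # pre fired, post silent
    weights = weights + lr * potentiate - 0.5 * lr * depress
    return np.clip(weights, w_min, w_max)

# Example: 3 output neurons, 5 inputs, one time step of activity
w = np.full((3, 5), 0.5)
pre = np.array([1, 0, 1, 0, 0], dtype=np.uint8)
post = np.array([1, 0, 0], dtype=np.uint8)
print(stdp_step(w, pre, post))

Because each layer adapts from local spike activity alone, early layers can learn features without labels, and only the final fully-connected layers need labeled data to act as a classifier.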

The Akida Development Environment is available now for early access customers to begin the creation, training, and testing of spiking neural networks targeting the Akida NSoC. The Akida NSoC is expected to begin sampling in Q3 2019.
