Optimized Artificial Neural Network (ANN) Architecture

Details

Category: Arithmetic Core

Created: June 02, 2016

Updated: January 27, 2020

Language: VHDL

Other project properties

Development Status: Beta

Additional info: Specification done

WishBone compliant: No

WishBone version: n/a

License: LGPL

Description

This IP core is a configurable feedforward Artificial Neural Network (ANN). ANNs are Artificial Intelligence (AI) algorithms biologically inspired by the brain. An ANN can be trained to learn a specific task; typical tasks are pattern recognition and classification, but not exclusively. An ANN is built from a simple model of a neuron.
Neuron models are grouped into layers, and the layers are connected in a network. This IP performs full feedforward connections between consecutive layers: all neuron outputs of one layer become the inputs of the next layer. This ANN architecture is also known as a Multi-Layer Perceptron (MLP) when it is trained with a supervised learning algorithm.
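Conceptually, each neuron multiplies every input by its corresponding weight, accumulates the products together with a bias (which can be treated as one more weight with a constant input of 1), and applies an activation function to the sum. The following minimal VHDL sketch is for illustration only and is not taken from the core's sources; the entity name, port names and word width are assumptions.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity neuron_mac is
  generic (
    DATA_W : natural := 16  -- assumed fixed-point word width
  );
  port (
    clk     : in  std_logic;
    clear   : in  std_logic;                   -- start accumulating a new neuron
    valid   : in  std_logic;                   -- input/weight pair is valid
    x_in    : in  signed(DATA_W-1 downto 0);   -- input from the previous layer
    w_in    : in  signed(DATA_W-1 downto 0);   -- corresponding weight
    acc_out : out signed(2*DATA_W-1 downto 0)  -- accumulated weighted sum
  );
end entity neuron_mac;

architecture rtl of neuron_mac is
  signal acc : signed(2*DATA_W-1 downto 0) := (others => '0');
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if clear = '1' then
        acc <= (others => '0');
      elsif valid = '1' then
        acc <= acc + x_in * w_in;  -- one multiply-accumulate per cycle
      end if;
    end if;
  end process;

  acc_out <= acc;  -- an activation function is applied to this sum afterwards
end architecture rtl;
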
Different kinds of activation functions can be added easily by coding them in the provided VHDL template.
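For example, a hardware-friendly activation such as ReLU can be described as a small combinational block. The sketch below is only an illustration of the kind of function a user might code; it does not follow the actual template's interface, and the entity name, ports and word width are assumptions.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity act_relu is
  generic (
    DATA_W : natural := 16  -- assumed word width
  );
  port (
    x : in  signed(DATA_W-1 downto 0);  -- neuron's accumulated sum
    y : out signed(DATA_W-1 downto 0)   -- activated output
  );
end entity act_relu;

architecture comb of act_relu is
begin
  -- ReLU: pass non-negative values through, clamp negative values to zero.
  y <= x when x >= 0 else to_signed(0, DATA_W);
end architecture comb;
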
This IP core is provided in three parts: the kernel, a Vivado/AXI4 wrapper, and a GHDL testbench. The kernel is the optimized ANN with basic logic interfaces. The kernel should be instantiated inside a wrapper to connect it to the user's system buses. Currently, an example wrapper is provided for instantiating it in Xilinx Vivado, using AXI4 interfaces for AMBA buses. A GHDL testbench is provided for easy instantiation of the IP core in projects that don't use a processor core. Additionally, the testbench enables weight and bias RAM initialization directly from the Matlab/Octave NN toolbox. The developers hope to add more wrappers in the future with the help of the OpenCores community.