- 0 Introduction
- 1 SNN Instruction Extensions
- 2 Overall View of SNN Unit
- 3 SNN Issue Unit
- 4 LIF Neuron Unit
- 5 Synapse Unit
- 6 Configure SNNU in Wenquxing23
Wenquxing23 is a low-power SNN processor integrated with an SNN accelerating module that enables SNN training with back-propagation.
The baseline of Wenquxing23 is Polaris.
This document introduces the SNN Unit of Wenquxing23 in detail.
Please check this document for details.
For the Chinese version, please check this document.
The Spiking Neural Network Unit (SNNU) is integrated into the pipeline of the Polaris
processor as a sub-component with a configurable number of issue ways.
The component has a two-stage pipeline: an Issue stage and an Execute stage.
SNNU includes three parts:
- SNN Issue unit (SNNISU) for the re-decoding of SNN instructions;
- LIF Neuron Unit (LNU) for SIMD summation and for updating neurons according to the formula of the Leaky Integrate-and-Fire (LIF) model;
- Synapse Unit (SU) for synaptic plasticity and common function computing, including the exponential function.
The SNN Issue Unit (SNNISU) decodes RVSNN instructions and dispatches them to the next stage. Each operand is divided into four 16-bit data for SIMD computing in the SNNISU.
An SNN register file (SRF) is integrated into the SNNISU for temporarily storing useful parameters. The data in the SRF does not participate in the computation of other components; in other words, SRF data is valid only inside the SNNU.
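The lane split described above can be sketched in plain Scala as follows. This is an illustrative model, not the Wenquxing23 RTL; the object and function names are assumptions.

```scala
// Hypothetical behavioural sketch of the SNNISU operand split: a 64-bit
// operand is divided into four 16-bit lanes for SIMD computing.
// Names are illustrative, not taken from the Wenquxing23 source.
object LaneSplit {
  // Split a 64-bit operand into four 16-bit lanes; lane 0 is the least significant.
  def split(operand: Long): Seq[Int] =
    (0 until 4).map(i => ((operand >>> (16 * i)) & 0xFFFF).toInt)

  // Reassemble four 16-bit lanes back into a 64-bit operand.
  def merge(lanes: Seq[Int]): Long =
    lanes.zipWithIndex.map { case (v, i) => (v.toLong & 0xFFFF) << (16 * i) }.sum
}
```

Splitting and merging are inverses, so a SIMD result produced per lane can be written back as a single 64-bit value.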
A LIF Neuron Unit can update 4 LIF neurons at the same time. It accepts the operands (the divided data) and the operator from the previous stage. The neuron update follows the simplified LIF formula:
V[t] = λ · V[t−1] + I[t], with a spike fired and V[t] reset when V[t] ≥ Vth,
where V[t] is the membrane potential at time step t, λ is the leak factor, I[t] is the summed synaptic input, and Vth is the firing threshold.
There are two structures for the membrane potential.
The LNU can handle these two structures, which can be configured by setting the ts_flag to 1.
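The four-lane update can be sketched behaviourally as below, assuming the standard simplified LIF rule (leak, integrate, threshold, reset); the leak factor, threshold, and reset value are illustrative assumptions, not taken from the Wenquxing23 source.

```scala
// Sketch of a simplified LIF update applied to four neuron lanes at once,
// mirroring the LNU's SIMD behaviour. Leak factor, threshold and reset
// value are illustrative assumptions.
object LifLanes {
  case class State(v: Vector[Double], spikes: Vector[Boolean])

  def step(v: Vector[Double],          // membrane potentials (4 lanes)
           input: Vector[Double],      // summed synaptic input per lane
           lambda: Double = 0.9,       // leak factor (assumption)
           vth: Double = 1.0,          // firing threshold (assumption)
           vreset: Double = 0.0): State = {
    // Leak and integrate each lane independently.
    val integrated = v.zip(input).map { case (vi, xi) => lambda * vi + xi }
    // Fire where the threshold is reached, then reset those lanes.
    val spikes = integrated.map(_ >= vth)
    val next = integrated.zip(spikes).map { case (vi, s) => if (s) vreset else vi }
    State(next, spikes)
  }
}
```

Each lane is independent, which is why four neurons can be updated in one step.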
The Synapse Unit (SU) mainly handles synaptic plasticity and exponential-function computing. The SU contains three parts:
- Back-Propagation Output direction parameter computing (BPO computing);
- Time Dependence Rule computing (TDR computing);
- EXPonential function computing (EXP computing).
The SU performs the BPO computing according to the following formula:
where
The TDR computing calculates the difference between time stamps.
The EXP computing is realized with the CORDIC algorithm on 16-bit fixed-point numbers. The region of convergence is also extended, from (−1.1182, 1.1182) to (−2.079, 2.079).
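As a sketch of the underlying technique (in double precision for clarity, not the 16-bit fixed-point hardware, and without the extended convergence range): hyperbolic CORDIC computes cosh and sinh simultaneously, and exp(z) = cosh(z) + sinh(z). Iterations 4 and 13 are repeated, the standard requirement for hyperbolic CORDIC, which yields the basic (−1.1182, 1.1182) convergence bound mentioned above.

```scala
// Sketch of exp(z) via hyperbolic CORDIC. Double precision is used for
// clarity; the hardware works on 16-bit fixed point. Iteration indices
// 4 and 13 are repeated so that the micro-rotation angles sum to the
// basic convergence bound of about 1.1182.
object CordicExp {
  private val indices: Seq[Int] =
    (1 to 16).flatMap(i => if (i == 4 || i == 13) Seq(i, i) else Seq(i))

  private def atanh(t: Double): Double = 0.5 * math.log((1 + t) / (1 - t))

  private val angles: Seq[Double] = indices.map(i => atanh(math.pow(2.0, -i)))

  // Total gain of the hyperbolic micro-rotations; starting x at 1/gain
  // makes the final (x, y) converge to (cosh z, sinh z).
  private val gain: Double =
    indices.map(i => math.sqrt(1.0 - math.pow(4.0, -i))).product

  def exp(z0: Double): Double = {
    require(math.abs(z0) < 1.1182, "outside basic CORDIC convergence range")
    var x = 1.0 / gain
    var y = 0.0
    var z = z0
    for ((i, a) <- indices.zip(angles)) {
      val d = if (z >= 0) 1.0 else -1.0   // rotate toward z = 0
      val t = d * math.pow(2.0, -i)
      val (xn, yn) = (x + t * y, y + t * x)
      x = xn; y = yn
      z -= d * a
    }
    x + y // cosh(z0) + sinh(z0) = exp(z0)
  }
}
```

Extending the range beyond ±1.1182, as the document states the SU does, requires extra iterations or argument reduction; that part is omitted here.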
Both the SNNU and the SNN instruction extensions can be configured
by changing the setting.scala Chisel file.
Setting the parameter Polaris_SNN_WAY_NUM configures the SNNU:
- when Polaris_SNN_WAY_NUM = 0, the SNNU will not be generated;
- when Polaris_SNN_WAY_NUM = 1, the project will generate a one-way SNNU, which handles one instruction at a time;
- when Polaris_SNN_WAY_NUM = 2, the project will generate a two-way SNNU, which handles two instructions at a time.
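Such an elaboration-time parameter could be declared in setting.scala roughly as follows. The enclosing object name is an assumption; only the parameter name Polaris_SNN_WAY_NUM comes from this document.

```scala
// Hypothetical configuration sketch: the object name "Settings" is an
// assumption; only Polaris_SNN_WAY_NUM is taken from the document.
object Settings {
  // 0: no SNNU generated; 1: one-way SNNU; 2: two-way SNNU
  val Polaris_SNN_WAY_NUM: Int = 2
}
```

Because the value is fixed at elaboration time, the generator can emit zero, one, or two SNNU ways with no run-time overhead.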
If you have any questions, please contact the author.