CNN Plus Accelerator IP Core

AI Acceleration Using Low Power FPGAs


The Lattice Semiconductor CNN Plus Accelerator IP Core is a calculation engine for deep neural networks with fixed-point weights. It computes complete neural network layers, including convolution, pooling, batch normalization, and fully connected layers, by executing sequence code with weight values generated by the Lattice sensAI™ Neural Network Compiler. The engine is optimized for convolutional neural networks, making it well suited for vision-based applications such as classification, object detection, and tracking. The IP Core does not require an external processor; it performs all required calculations by itself.

The CNN Plus Accelerator IP Core offers three implementation types: the Compact CNN type, suitable for small FPGA devices due to its low resource utilization; the Optimized CNN type, which performs four convolution calculations in parallel, making it suitable for high-speed applications; and the Extended CNN type, which offers the same features as the Optimized CNN plus additional support for max pooling/unpooling with max argument.

Customized Convolutional Neural Network (CNN) IP – CNN Plus IP is a flexible accelerator IP that simplifies the implementation of ultra-low-power AI by leveraging the parallel processing capabilities, distributed memory, and DSP resources of Lattice FPGAs.

Configurable Modes of Use – Three modes are available: COMPACT (low performance, smallest footprint), OPTIMIZED (higher performance in a resource-optimized footprint), and HIGH PERFORMANCE (highest performance with the biggest footprint).

Easy to Implement – Models trained using common machine learning frameworks such as TensorFlow can be compiled using the Lattice Neural Network Compiler Tool and implemented on hardware using the CNN Plus Accelerator IP.

Features

  • Three selectable implementation types: Compact CNN, Optimized CNN, Extended CNN
  • Selectable AXI4 or FIFO interface
  • Support for convolution, max pooling, global average pooling, batch normalization, and fully connected layers
  • Configurable bit width of activation (16/8-bit)
  • Configurable number of memory blocks for tradeoff between resource and performance
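The fixed-point weights and configurable 16/8-bit activations mentioned above can be illustrated with a minimal quantization sketch in plain Python. The per-tensor symmetric scale and rounding scheme shown here is an assumption for illustration only, not the actual algorithm used by the sensAI Neural Network Compiler:

```python
# Hypothetical sketch of mapping float weights to signed fixed-point
# integers plus a scale factor. The scale/rounding choices are
# illustrative assumptions, not the sensAI compiler's actual scheme.

def quantize_fixed_point(weights, bits=8):
    """Map float weights to signed fixed-point integers and a scale."""
    qmax = 2 ** (bits - 1) - 1          # e.g. 127 for 8-bit
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / qmax              # float value of one integer step
    quantized = [max(-qmax - 1, min(qmax, round(w / scale)))
                 for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float weights from the fixed-point form."""
    return [q * scale for q in quantized]

weights = [0.5, -0.25, 0.1, -0.9]
q, s = quantize_fixed_point(weights, bits=8)
approx = dequantize(q, s)
```

With `bits=8`, every quantized value fits the signed 8-bit range [-128, 127]; choosing `bits=16` trades memory and logic for finer weight resolution, mirroring the resource/accuracy tradeoff the configurable bit width exposes.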


Block Diagram

Functional Block Diagram of CNN Plus Accelerator (Compact CNN Type)

Functional Block Diagram of CNN Plus Accelerator (Optimized CNN Type)

Ordering Information

  Device Family   Part Number (Multi-site Perpetual)   Part Number (Single Seat Annual)
  CrossLink-NX    CNNPLUS-ACCEL-CNX-UT                 CNNPLUS-ACCEL-CNX-US
  CertusPro-NX    CNNPLUS-ACCEL-CPNX-UT                CNNPLUS-ACCEL-CPNX-US
  Certus-NX       CNNPLUS-ACCEL-CTNX-UT                CNNPLUS-ACCEL-CTNX-US

To download a full evaluation version of this IP, go to the IP Server in Lattice Radiant. This IP core supports Lattice's IP hardware evaluation capability, which makes it possible to generate the IP core and operate it in hardware for a limited time (approximately four hours) without requiring an IP license.

To find out how to purchase the CNN Plus Accelerator IP core, please contact your local Lattice Sales Office.

Documentation

Quick Reference
  TITLE                                       NUMBER            VERSION   DATE         FORMAT   SIZE
  CNN Plus Accelerator IP Core - User Guide   FPGA-IPUG-02115   1.6       12/10/2024   PDF      815.6 KB
