An FPGA-Based Hardware Accelerator for CNNs Using On-Chip Memories Only: Design and Benchmarking with Intel Movidius Neural Compute Stick

Joint Authors

Dinelli, Gianmarco
Meoni, Gabriele
Rapuano, Emilio
Benelli, Gionata
Fanucci, Luca

Source

International Journal of Reconfigurable Computing

Issue

Vol. 2019, Issue 2019 (31 Dec. 2019), pp. 1-13, 13 p.

Publisher

Hindawi Publishing Corporation

Publication Date

2019-10-22

Country of Publication

Egypt

No. of Pages

13

Main Subjects

Information Technology and Computer Science

Abstract EN

In recent years, convolutional neural networks have been used in many different applications, thanks to their ability to carry out tasks with fewer parameters than other deep learning approaches.

However, the power consumption and memory footprint constraints typical of edge and portable applications usually conflict with accuracy and latency requirements.

For these reasons, commercial hardware accelerators have become popular, thanks to architectures designed for the inference of general convolutional neural network models.

Nevertheless, field-programmable gate arrays (FPGAs) represent an interesting alternative, since they allow the hardware architecture to be tailored to a specific convolutional neural network model, with promising results in terms of latency and power consumption.

In this article, we propose a fully on-chip FPGA hardware accelerator for a separable convolutional neural network designed for a keyword spotting application.
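
For readers unfamiliar with the term, a separable (depthwise-separable) convolution factors a standard convolution into a per-channel depthwise filter followed by a 1x1 pointwise convolution, which is what keeps the parameter count low. The sketch below shows such a block in Keras; the input shape, filter count, and number of output classes are illustrative assumptions, not the configuration of the keyword spotting model used in the paper.

    # Minimal sketch of a depthwise-separable convolution block (Keras).
    # Shapes, filter counts, and class count are illustrative only, not the
    # actual keyword-spotting model described in the paper.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    def separable_block(x, filters, kernel_size=(3, 3)):
        # Depthwise step: one spatial filter per input channel.
        x = layers.DepthwiseConv2D(kernel_size, padding="same", use_bias=False)(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
        # Pointwise step: 1x1 convolution mixes channels.
        x = layers.Conv2D(filters, (1, 1), padding="same", use_bias=False)(x)
        x = layers.BatchNormalization()(x)
        return layers.ReLU()(x)

    # Hypothetical input: a 49x10 speech-feature map with one channel.
    inputs = layers.Input(shape=(49, 10, 1))
    x = separable_block(inputs, filters=64)
    x = layers.GlobalAveragePooling2D()(x)
    outputs = layers.Dense(12, activation="softmax")(x)  # e.g., 12 keyword classes
    model = models.Model(inputs, outputs)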

We started from the model implemented in a previous work for the Intel Movidius Neural Compute Stick.

For our purposes, we quantized the model through a bit-true simulation and realized a dedicated architecture that uses on-chip memories exclusively.
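
As an illustration of what the bit-true quantization step involves, the sketch below maps floating-point weights to signed fixed-point codes and back, so the quantized network can be evaluated with exactly the integer values the FPGA arithmetic would use. The word length and number of fractional bits are assumptions for the example, not the precision selected by the authors.

    import numpy as np

    def quantize_fixed_point(w, total_bits=16, frac_bits=12):
        """Bit-true quantization of weights to signed fixed-point integers.

        Word length and fractional bits are illustrative assumptions,
        not the precision chosen in the paper.
        """
        scale = 1 << frac_bits
        qmin = -(1 << (total_bits - 1))
        qmax = (1 << (total_bits - 1)) - 1
        # Round to the nearest representable code and saturate at the limits.
        return np.clip(np.round(w * scale), qmin, qmax).astype(np.int32)

    def dequantize(q, frac_bits=12):
        # Map integer codes back to real values for accuracy checks.
        return q.astype(np.float64) / (1 << frac_bits)

    # Example: measure the quantization error on random weights.
    w = np.random.uniform(-1.0, 1.0, size=1000)
    q = quantize_fixed_point(w)
    print("max abs error:", np.max(np.abs(w - dequantize(q))))

In a bit-true flow, the same integer operations are then reproduced in the hardware model so that software and hardware outputs can be checked to match bit for bit.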

We benchmarked implementations on different field-programmable gate array families from Xilinx and Intel against the implementation on the Neural Compute Stick.

The analysis shows that the FPGA solution achieves better inference time and energy per inference with comparable accuracy, at the expense of higher design effort and development time.
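
The energy-per-inference metric used in the comparison follows directly from average power and inference latency, E = P * t. A minimal example of the arithmetic, with purely hypothetical numbers rather than measurements from the paper:

    # Energy per inference = average power (W) * inference time (s).
    # All numbers below are hypothetical and only illustrate the arithmetic.
    def energy_per_inference(power_w: float, latency_s: float) -> float:
        return power_w * latency_s

    fpga_mj = energy_per_inference(power_w=2.0, latency_s=0.5e-3) * 1e3  # -> 1.0 mJ
    ncs_mj = energy_per_inference(power_w=1.5, latency_s=5.0e-3) * 1e3   # -> 7.5 mJ
    print(f"FPGA: {fpga_mj:.2f} mJ per inference, NCS: {ncs_mj:.2f} mJ per inference")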

American Psychological Association (APA)

Dinelli, Gianmarco, Meoni, Gabriele, Rapuano, Emilio, Benelli, Gionata, & Fanucci, Luca. 2019. An FPGA-Based Hardware Accelerator for CNNs Using On-Chip Memories Only: Design and Benchmarking with Intel Movidius Neural Compute Stick. International Journal of Reconfigurable Computing, Vol. 2019, no. 2019, pp. 1-13.
https://search.emarefa.net/detail/BIM-1168496

Modern Language Association (MLA)

Dinelli, Gianmarco, et al. An FPGA-Based Hardware Accelerator for CNNs Using On-Chip Memories Only: Design and Benchmarking with Intel Movidius Neural Compute Stick. International Journal of Reconfigurable Computing No. 2019 (2019), pp. 1-13.
https://search.emarefa.net/detail/BIM-1168496

American Medical Association (AMA)

Dinelli, Gianmarco, Meoni, Gabriele, Rapuano, Emilio, Benelli, Gionata, Fanucci, Luca. An FPGA-Based Hardware Accelerator for CNNs Using On-Chip Memories Only: Design and Benchmarking with Intel Movidius Neural Compute Stick. International Journal of Reconfigurable Computing. 2019. Vol. 2019, no. 2019, pp. 1-13.
https://search.emarefa.net/detail/BIM-1168496

Data Type

Journal Articles

Language

English

Notes

Includes bibliographical references

Record ID

BIM-1168496