Dataflow and Approximation of Neural Network Inference and Training on FPGAs

Researcher: Christoph Berganski
Collaboration: Tatiana Moteu Ngoli
Research Theme: R3 Sustainability & Efficiency
Tags: Neural Networks

This research project explores and compares model compression and approximate computing techniques for deep neural networks, focusing initially on efficient inference acceleration. The key objective is to deploy a transformer-style model on an FPGA using the FINN framework, which involves analyzing the model's dataflow, applying compression techniques, and implementing the required hardware operators. Later stages may shift towards efficiently accelerating neural network training on FPGAs through approximate computing, with potential goals such as hardware implementations of gradient estimation and parameter update algorithms. Two illustrative sketches of these ingredients follow below.
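On the compression side, FINN consumes quantization-aware models, typically defined with the Brevitas library. The following is a minimal sketch of that pattern using a toy MLP; the layer sizes and bit widths are hypothetical placeholders, not the transformer model from this project, and the export call follows recent Brevitas releases and may differ across versions.

```python
import torch
import torch.nn as nn
from brevitas.nn import QuantIdentity, QuantLinear, QuantReLU
from brevitas.export import export_qonnx

# Toy 2-layer MLP with 4-bit weights and activations, in the
# quantization-aware style that FINN expects from Brevitas models.
# Sizes (784, 128, 10) and bit widths are illustrative assumptions.
model = nn.Sequential(
    QuantIdentity(bit_width=4),  # quantize the input activations
    QuantLinear(784, 128, bias=True, weight_bit_width=4),
    QuantReLU(bit_width=4),
    QuantLinear(128, 10, bias=True, weight_bit_width=4),
)

# Export to QONNX, the interchange format consumed by the FINN compiler.
export_qonnx(model, torch.randn(1, 784), export_path="quant_mlp.onnx")
```

The exported QONNX graph is then streamlined and lowered to dataflow hardware operators by the FINN compiler; those transformation steps are project-specific and not shown here.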
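On the training side, one common building block for approximate gradient estimation is the straight-through estimator (STE), which quantized-training flows use to backpropagate through rounding. The self-contained sketch below is illustrative only and is not this project's algorithm.

```python
import torch

class RoundSTE(torch.autograd.Function):
    """Rounding with a straight-through gradient estimator: the forward
    pass rounds to the nearest integer; the backward pass approximates
    the (almost-everywhere zero) derivative of rounding as identity."""

    @staticmethod
    def forward(ctx, x):
        return torch.round(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output  # straight-through: d(round)/dx ≈ 1

x = torch.randn(4, requires_grad=True)
y = RoundSTE.apply(x).sum()
y.backward()
print(x.grad)  # all ones: the straight-through gradient estimate
```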