University of Twente Student Theses


Reducing memory requirements for Neural Networks trained for Direction of Arrival Estimation : Making Neural Networks more practical for use on a Low-Cost Real-Time Embedded System

Wesselink, B.H.T. (2021) Reducing memory requirements for Neural Networks trained for Direction of Arrival Estimation : Making Neural Networks more practical for use on a Low-Cost Real-Time Embedded System.

Abstract: The Direction of Arrival (DoA) estimation problem is a common problem in sensor array signal processing. Many estimation algorithms exist; one of them is the maximum likelihood estimator (MLE), which is widely used for DoA estimation and is suitable for non-uniform radar antenna arrays. A main disadvantage of MLE is that its computational requirements scale exponentially with the number of targets to be estimated. When more than two targets must be estimated, MLE can become very expensive and impractical on a low-cost system. More recent research and experiments have shown that artificial neural networks can also be used for DoA estimation. The computational requirements of a neural network do not depend on the number of targets, so neural networks require less computational power than MLE when a large number of targets must be estimated. One disadvantage of neural networks is that their often large topologies require a large number of parameters to be stored. These memory requirements make them less practical on a low-cost embedded radar system. We are unaware of any published research that mentions and addresses this disadvantage. In this report, we investigate the memory and computational requirements of different types of neural networks and describe techniques to reduce their memory requirements. Using simulations, we show that the memory required by these networks can be reduced by up to a factor of 6.8 without a significant loss of estimation performance. These results were obtained with two types of neural networks: a fully connected neural network (FCNN) and a residual neural network (ResNet). The memory reduction was achieved by using smaller number formats than the standard IEEE-754 32-bit float. The number format was changed on a per-layer basis in an already trained network; after the change, the network was evaluated without retraining to verify that estimation performance was not significantly affected.
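The per-layer format change described in the abstract can be sketched as follows. This is a minimal illustration only: the layer names, shapes, and the choice of float16 as the smaller format are assumptions for the example, not details taken from the thesis (which evaluates several number formats).

```python
import numpy as np

# Hypothetical per-layer weights of an already trained network.
# Names and shapes are illustrative, not from the thesis.
rng = np.random.default_rng(0)
layers = {
    "dense1": rng.standard_normal((128, 64)).astype(np.float32),
    "dense2": rng.standard_normal((64, 32)).astype(np.float32),
}

def quantize_layer(w, dtype=np.float16):
    """Store a layer's weights in a smaller float format.

    At inference time the weights can be cast back to float32,
    so only the storage format changes, not the network topology.
    """
    return w.astype(dtype)

original_bytes = sum(w.nbytes for w in layers.values())
quantized = {name: quantize_layer(w) for name, w in layers.items()}
quantized_bytes = sum(w.nbytes for w in quantized.values())

# float16 halves storage relative to float32
print(original_bytes / quantized_bytes)  # → 2.0

# Inspect the worst-case rounding error introduced per layer;
# in a real evaluation one would re-run the DoA estimation
# benchmark (without retraining) instead of only checking weights.
for name, w in layers.items():
    err = np.max(np.abs(w - quantized[name].astype(np.float32)))
    print(name, err)
```

Applying such a change per layer, as the abstract describes, allows layers that are sensitive to rounding to keep a wider format while insensitive layers use a narrower one.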
Item Type: Essay (Bachelor)
Faculty: EEMCS: Electrical Engineering, Mathematics and Computer Science
Subject: 53 electrotechnology, 54 computer science
Programme: Electrical Engineering BSc (56953)
Link to this item: https://purl.utwente.nl/essays/87314

 
