University of Twente Student Theses


Temporal Spike Attribution: A Local Feature-Based Explanation for Temporally Coded Spiking Neural Networks

Nguyen, Elisa (2021) Temporal Spike Attribution: A Local Feature-Based Explanation for Temporally Coded Spiking Neural Networks.

Abstract: Machine learning algorithms are omnipresent in today's world. They influence which movie one might watch next or which advertisements a person sees. Moreover, AI research is concerned with high-stakes application areas such as autonomous cars or medical diagnosis. These domains pose specific requirements due to their high-risk nature: in addition to predictive accuracy, models have to be transparent and ensure that their decisions are not discriminatory or biased. The definition of performance in artificial intelligence is therefore increasingly extended to include transparency and model interpretability. The fields of Interpretable Machine Learning and Explainable Artificial Intelligence concern methods and models that provide explanations for black-box models.

Spiking neural networks (SNNs) are the third generation of neural networks and, like their predecessors, black-box models. Instead of real-valued computations, SNNs operate on analogue signals and transmit information via spikes. They are biologically more plausible than current artificial neural networks (ANNs) and can inherently process spatio-temporal information. Because they can be implemented directly in hardware, SNNs can be more energy-efficient than ANNs. Even though SNNs have been shown to be equally powerful, they have not yet surpassed ANNs. The research community has largely focused on optimising SNNs, while interpretability and explainability of SNNs remain largely unexplored.

This research contributes to the fields of Explainable AI and SNNs by presenting Temporal Spike Attribution (TSA), a novel local feature-based explanation method for spiking neural networks. TSA combines model-internal state variables specific to temporally coded SNNs through addition and multiplication to arrive at a feature attribution formula with two variants: TSA-S, which considers only spikes, and TSA-NS, which additionally considers non-spikes.

TSA is demonstrated on an openly available time series classification task with SNNs of different depths and evaluated quantitatively with regard to faithfulness, attribution sufficiency, stability, and certainty. Additionally, a user study is conducted to verify the human-comprehensibility of TSA. The results validate TSA explanations as faithful, sufficient, and stable. While TSA-S explanations are more stable, TSA-NS explanations are superior in faithfulness and sufficiency, which suggests that information relevant to the model prediction lies in the absence of spikes. Both variants convey certainty, and TSA-S explanations are largely human-comprehensible, with the clarity of an explanation linked to the coherence of the model prediction. TSA-NS, however, appears to assign too much attribution to non-spiking input, leading to incoherent explanations.
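The spike/non-spike distinction between the two variants can be illustrated with a minimal sketch. Note that this is an assumption-laden toy, not the thesis's actual TSA formula: the exponential decay, the choice of spike times and weights as the combined state variables, and the sign convention for absent spikes are all illustrative stand-ins for the model-internal quantities the abstract refers to.

```python
import numpy as np

def tsa_attribution(spike_times, weights, t, tau=5.0, include_non_spikes=False):
    """Toy feature attribution for a temporally coded spiking input layer.

    spike_times: per-feature spike time (np.inf encodes "no spike")
    weights:     per-feature connection weight to the predicted output
    t:           prediction time at which attributions are computed
    include_non_spikes=False mimics a TSA-S-style variant (spikes only);
    True mimics a TSA-NS-style variant (non-spikes also attributed).
    """
    spike_times = np.asarray(spike_times, dtype=float)
    weights = np.asarray(weights, dtype=float)
    spiked = np.isfinite(spike_times)

    attr = np.zeros_like(weights)
    # Multiplicative combination: weight times a temporal factor, so spikes
    # closer to the prediction time t receive larger attribution magnitude.
    attr[spiked] = weights[spiked] * np.exp(-(t - spike_times[spiked]) / tau)
    if include_non_spikes:
        # Non-spiking inputs carry opposite-signed evidence (an illustrative
        # choice; the scaling factor here is arbitrary).
        attr[~spiked] = -weights[~spiked] * np.exp(-t / tau)
    return attr

# A spikes-only explanation assigns zero attribution to silent neurons,
# whereas the non-spike variant does not.
s = tsa_attribution([1.0, np.inf, 3.0], [0.5, -0.2, 1.0], t=5.0)
ns = tsa_attribution([1.0, np.inf, 3.0], [0.5, -0.2, 1.0], t=5.0,
                     include_non_spikes=True)
```

In this toy, the silent second neuron gets zero attribution under the spikes-only variant but a non-zero (here positive, since its weight is negative) attribution once non-spikes are included, which mirrors the abstract's finding that relevant information can lie in the absence of spikes.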
Item Type:Essay (Master)
Faculty:EEMCS: Electrical Engineering, Mathematics and Computer Science
Subject:50 technical science in general, 54 computer science
Programme:Interaction Technology MSc (60030)
Link to this item:https://purl.utwente.nl/essays/89110
