Ventricular arrhythmias such as premature ventricular contractions (PVCs) are irregular excitation patterns that disrupt the healthy heartbeat. They originate in the ventricles and preempt the initiation of the heartbeat by the sinoatrial node. Frequent PVCs can endanger the patient and may even result in sudden cardiac death. One possible cause is a focal source in the heart. For successful therapy, the challenge is to localize this focal source noninvasively by evaluating the patient's electrocardiogram (ECG) signals. Many such signals together form a body surface potential map, which may incorporate measurements from up to 200 electrodes. A previous master's thesis at the IBT implemented a machine learning algorithm for this localization task and demonstrated the general feasibility of the approach [1]. The dataset used in that thesis was generated artificially with simulations and is therefore both extensive and exactly labeled. The research question of the planned master's thesis that follows from this work is the explainability of the decisions of this machine learning model.
Explainability and transparency are important but challenging requirements for the practical applicability of machine learning models, especially in the medical field. Neural networks in particular are known for their fuzzy decision boundaries, which make it hard to identify semantically meaningful features in the input space that determined the network's decision. Much work on explainability has been done in the field of computer vision [2]. Important features in the input space can be identified with gradient and heatmap approaches [3, 4].
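The core idea behind the gradient approaches cited above can be illustrated on a toy model: the magnitude of the gradient of the model output with respect to each input element indicates how strongly that element influences the decision. The following is a minimal sketch, assuming a simple logistic model with an analytic gradient; the model, weights, and input values are purely hypothetical stand-ins for a trained CNN and a real ECG image.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_saliency(x, w, b):
    """Gradient-based saliency for a logistic toy classifier.

    For f(x) = sigmoid(w . x + b), the analytic gradient is
    df/dx = f(x) * (1 - f(x)) * w. Its absolute value is taken as the
    saliency of each input element, i.e. how strongly a small change in
    that element would shift the classifier output.
    """
    y = sigmoid(np.dot(w, x) + b)
    return np.abs(y * (1.0 - y) * w)

# Hypothetical 'ECG image' flattened into a feature vector.
x = np.array([0.2, 0.8, 0.1, 0.5])
# Hypothetical weights; feature 1 dominates the decision.
w = np.array([0.1, 2.0, -0.3, 0.4])
saliency = gradient_saliency(x, w, b=0.0)
print(saliency)
```

In the saliency vector, the entry for feature 1 is the largest, correctly flagging the input element with the greatest influence on the output. For a deep network, the gradient is obtained by backpropagation instead of an analytic formula, but the interpretation of the resulting heatmap is the same.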
This master's thesis approaches the explainability of the classification decisions of neural networks with respect to the input data. The input data consist of ECG signals, represented as images.
[1] T. Yang, L. Yu, Q. Jin et al., Localization of Origins of Premature Ventricular Contraction by Means of Convolutional Neural Network from 12-Lead ECG, IEEE Transactions on Biomedical Engineering, 2018
[2] M. Alber, S. Lapuschkin, P. Seeger et al., iNNvestigate Neural Networks!, Journal of Machine Learning Research, 2019
[3] D. Erhan, Y. Bengio, A. Courville et al., Visualizing Higher-Layer Features of a Deep Network, Technical Report 1341, Université de Montréal, 2009
[4] M. Zeiler, R. Fergus, Visualizing and Understanding Convolutional Networks, Lecture Notes in Computer Science, 2014