Atrial fibrillation is one of the most frequent cardiac arrhythmias in the industrialized world, and ablation therapy is the method of choice for many patients. However, ablation scars alter the electrophysiological activation and the mechanical behavior of the affected atria. Different ablation strategies aiming to terminate atrial fibrillation and prevent its recurrence exist, but their impact on the hemodynamic performance of the heart has not been investigated thoroughly. In this work, we present a simulation study analyzing five commonly used ablation scar patterns and their combinations in the left atrium regarding their impact on the pumping function of the heart, using an electromechanical whole-heart model. We analyzed how the altered atrial activation and the increased stiffness due to the ablation scar affect atrial as well as ventricular contraction and relaxation. We found that the systolic and diastolic function of the left atrium is impaired by ablation scars and that the reduction of atrial stroke volume of up to 11.43% depends linearly on the amount of inactivated tissue. Consequently, the end-diastolic volume of the left ventricle, and thus its stroke volume, were reduced by up to 1.4% and 1.8%, respectively. During ventricular systole, left atrial pressure was increased by up to 20% due to changes in the atrial activation sequence and the stiffening of scar tissue. This study provides biomechanical evidence that atrial ablation has acute effects not only on atrial contraction but also on ventricular pumping function. Our results have the potential to help tailor ablation strategies towards minimal global hemodynamic impairment.
Objective: The bidomain model and the finite element method are an established standard to mathematically describe cardiac electrophysiology, but both are suboptimal choices for fast and large-scale simulations due to high computational costs. We investigate to what extent simplified approaches for propagation models (monodomain, reaction-eikonal and eikonal) and forward calculation (boundary element and infinite volume conductor) deliver markedly accelerated, yet physiologically accurate simulation results in atrial electrophysiology. Methods: We compared action potential durations, local activation times (LATs), and electrocardiograms (ECGs) for sinus rhythm simulations on healthy and fibrotically infiltrated atrial models. Results: All simplified model solutions yielded LATs and P waves in close accordance with the bidomain results. Only for the eikonal model with pre-computed action potential templates shifted in time to derive transmembrane voltages did repolarization behavior notably deviate from the bidomain results. ECGs calculated with the boundary element method were characterized by correlation coefficients >0.9 compared to the finite element method. The infinite volume conductor method led to lower correlation coefficients, caused predominantly by systematic overestimation of P wave amplitudes in the precordial leads. Conclusion: Our results demonstrate that the eikonal model yields accurate LATs and, combined with the boundary element method, precise ECGs compared to markedly more expensive full bidomain simulations. However, for an accurate representation of atrial repolarization dynamics, diffusion terms must be accounted for in simplified models. Significance: Simulations of atrial LATs and ECGs can be notably accelerated to clinically feasible time frames at high accuracy by resorting to the eikonal and boundary element methods.
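The eikonal model reduces activation mapping to a shortest-path problem: the local activation time (LAT) at each node is the minimum travel time from the pacing site at the local conduction velocity. A minimal sketch on a toy mesh graph (node names, distances, and the uniform conduction velocity are illustrative assumptions; real solvers operate on anisotropic tetrahedral meshes):

```python
import heapq

def eikonal_lats(adjacency, source, cv=0.7):
    """Earliest activation times (ms) via Dijkstra shortest paths.

    adjacency: {node: [(neighbor, distance_mm), ...]}
    cv: conduction velocity in mm/ms (uniform here for simplicity).
    """
    lat = {node: float("inf") for node in adjacency}
    lat[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        t, node = heapq.heappop(heap)
        if t > lat[node]:
            continue  # stale queue entry
        for nbr, dist in adjacency[node]:
            t_new = t + dist / cv
            if t_new < lat[nbr]:
                lat[nbr] = t_new
                heapq.heappush(heap, (t_new, nbr))
    return lat

# Hypothetical 3-node mesh: distances in mm, cv = 0.7 mm/ms
mesh = {"a": [("b", 7.0)], "b": [("a", 7.0), ("c", 3.5)], "c": [("b", 3.5)]}
lats = eikonal_lats(mesh, "a")  # roughly a: 0, b: 10, c: 15 ms
```

This is why eikonal solves are so much cheaper than bidomain: no reaction-diffusion system is integrated, only a wavefront arrival time is propagated.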
The assessment of craniofacial deformities requires patient data which are sparsely available. Statistical shape models provide realistic and synthetic data enabling comparisons of existing methods on a common dataset. We build the first publicly available statistical 3D head model of craniosynostosis patients and the first model focusing on infants younger than 1.5 years. For correspondence establishment, we test and evaluate four template morphing approaches. We further present an original, shape-model-based classification approach for craniosynostosis on photogrammetric surface scans. To the best of our knowledge, our study uses the largest dataset of craniosynostosis patients in a classification study for craniosynostosis and statistical shape modeling to date. We demonstrate that our shape model performs similarly to other statistical shape models of the human head. Craniosynostosis-specific pathologies are represented in the first eigenmodes of the model. Regarding the automatic classification of craniosynostosis, our classification approach yields an accuracy of 97.3%, comparable to other state-of-the-art methods using both computed tomography scans and stereophotogrammetry. Our publicly available, craniosynostosis-specific statistical shape model enables the assessment of craniosynostosis on realistic and synthetic data. We further present a state-of-the-art shape-model-based classification approach for a radiation-free diagnosis of craniosynostosis.
C. Nagel, M. Schaufelberger, O. Dössel, and A. Loewe. A Bi-atrial Statistical Shape Model as a Basis to Classify Left Atrial Enlargement from Simulated and Clinical 12-Lead ECGs. In Statistical Atlases and Computational Models of the Heart. Multi-Disease, Multi-View, and Multi-Center Right Ventricular Segmentation in Cardiac MRI Challenge, vol. 13131, pp. 38-47, 2022
Left atrial enlargement (LAE) is one of the risk factors for atrial fibrillation (AF). A non-invasive and automated detection of LAE with the 12-lead electrocardiogram (ECG) could therefore contribute to an improved AF risk stratification and an early detection of new-onset AF incidents. However, one major challenge when applying machine learning techniques to identify and classify cardiac diseases usually lies in the lack of large, reliably labeled and balanced clinical datasets. We therefore examined whether extending clinical training data with simulated ECGs derived from a novel bi-atrial shape model could improve the automated detection of LAE based on P waves of the 12-lead ECG. We derived 95 volumetric geometries from the bi-atrial statistical shape model with continuously increasing left atrial volumes in the range of 30 ml to 65 ml. Electrophysiological simulations with 10 different conduction velocity settings and 2 different torso models were conducted. Extracting the P waves of the 12-lead ECG thus yielded a synthetic dataset of 1,900 signals. Besides the simulated data, 7,168 healthy and 309 LAE ECGs from a public clinical ECG database were available for training and testing of an LSTM network to identify LAE. The class imbalance of the training data could be reduced from 1:23 to 1:6 when adding simulated data to the training set. The accuracy evaluated on the test dataset comprising a subset of the clinical ECG recordings improved from 0.91 to 0.95 if simulated ECGs were included as an additional input for the training of the classifier. Our results suggest that using a bi-atrial statistical shape model as a basis for ECG simulations can help to overcome the drawbacks of clinical ECG recordings and can thus lead to an improved performance of machine learning classifiers to detect LAE based on the 12-lead ECG.
Aims Atrial flutter (AFlut) is a common re-entrant atrial tachycardia driven by self-sustaining mechanisms that cause excitations to propagate along pathways different from sinus rhythm. Intra-cardiac electrophysiological mapping and catheter ablation are often performed without detailed prior knowledge of the mechanism perpetuating AFlut, likely prolonging the procedure time of these invasive interventions. We sought to discriminate the AFlut location [cavotricuspid isthmus-dependent (CTI), peri-mitral, and other left atrium (LA) AFlut classes] with a machine learning-based algorithm using only the non-invasive signals from the 12-lead electrocardiogram (ECG). Methods and results A hybrid 12-lead ECG dataset of 1769 signals was used (1424 in silico ECGs and 345 clinical ECGs from 115 patients; three different ECG segments over time were extracted from each patient, corresponding to single AFlut cycles). Seventy-seven features were extracted. A decision tree classifier with a hold-out classification approach was trained, validated, and tested on the dataset, randomly split after selecting the most informative features. The clinical test set comprised 38 patients (114 clinical ECGs). The classifier yielded 76.3% accuracy on the clinical test set with a sensitivity of 89.7%, 75.0%, and 64.1% and a positive predictive value of 71.4%, 75.0%, and 86.2% for the CTI, peri-mitral, and other LA class, respectively. Considering the majority vote of the three segments taken from each patient, the CTI class was correctly classified in 92% of cases. Conclusion Our results show that a machine learning classifier relying only on non-invasive signals can potentially identify the location of AFlut mechanisms. This method could aid in planning and tailoring patient-specific AFlut treatments.
A. Wachter, and W. Nahm. Workflow Augmentation of Video Data for Event Recognition with Time-Sensitive Neural Networks. In eprint, 2021
Supervised training of neural networks requires large, diverse and well-annotated data sets. In the medical field, this is often difficult to achieve due to constraints in time, expert knowledge and prevalence of an event. Artificial data augmentation can help to prevent overfitting and improve the detection of rare events as well as overall performance. However, most augmentation techniques use purely spatial transformations, which are not sufficient for video data with temporal correlations. In this paper, we present a novel methodology for workflow augmentation and demonstrate its benefit for event recognition in cataract surgery. The proposed approach increases the frequency of event alternation by creating artificial videos. The original video is split into event segments and a workflow graph is extracted from the original annotations. Finally, the segments are assembled into new videos based on the workflow graph. Compared to the original videos, the frequency of event alternation in the augmented cataract surgery videos increased by 26%. Further, a 3% higher classification accuracy and a 7.8% higher precision were achieved compared to a state-of-the-art approach. Our approach is particularly helpful to increase the occurrence of rare but important events and can be applied to a large variety of use cases.
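The core idea can be sketched in a few lines (event labels and the single annotation sequence below are hypothetical; the actual method works on per-frame annotations of full surgery videos): a transition graph is extracted from the annotations, and any random walk over that graph yields a new, workflow-consistent event order into which the original video segments can be reassembled.

```python
import random
from collections import defaultdict

def build_workflow_graph(annotations):
    """Transition graph: which event labels may follow which."""
    graph = defaultdict(set)
    for prev, nxt in zip(annotations, annotations[1:]):
        if prev != nxt:
            graph[prev].add(nxt)
    return {k: sorted(v) for k, v in graph.items()}

def augment_sequence(graph, start, length, rng):
    """Random walk over the workflow graph yields a new, valid event order."""
    seq = [start]
    while len(seq) < length and graph.get(seq[-1]):
        seq.append(rng.choice(graph[seq[-1]]))
    return seq

# Hypothetical annotation sequence from one cataract surgery video
annotations = ["incision", "phaco", "irrigation", "phaco", "lens", "closing"]
graph = build_workflow_graph(annotations)
new_order = augment_sequence(graph, "incision", 6, random.Random(0))
# Every transition in new_order also occurs in the original workflow
assert all(b in graph[a] for a, b in zip(new_order, new_order[1:]))
```

In the full pipeline, each label in `new_order` would be replaced by a video segment carrying that annotation, producing an artificial video with more frequent event alternations.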
BACKGROUND AND OBJECTIVE: Cardiac electrophysiology is a medical specialty with a long and rich tradition of computational modeling. Nevertheless, no community standard for cardiac electrophysiology simulation software has evolved yet. Here, we present the openCARP simulation environment as one solution that could serve the needs of large parts of this community. METHODS AND RESULTS: openCARP and the Python-based carputils framework allow developing and sharing simulation pipelines which automate in silico experiments including all modeling and simulation steps to increase reproducibility and productivity. The continuously expanding openCARP user community is supported by tailored infrastructure. Documentation and training material facilitate access to this complementary research tool for new users. After a brief historic review, this paper summarizes requirements for a high-usability electrophysiology simulator and describes how openCARP fulfills them. We introduce the openCARP modeling workflow in a multi-scale example of atrial fibrillation simulations at the single-cell, tissue, organ and body levels and finally outline future development potential. CONCLUSION: As an open simulator, openCARP can advance the computational cardiac electrophysiology field by making state-of-the-art simulations accessible. In combination with the carputils framework, it offers a tailored software solution for the scientific community and contributes towards increasing use, transparency, standardization and reproducibility of in silico experiments.
H. Anzt, and A. Loewe. Forschungssoftware – Nachhaltige Entwicklung und Bereitstellung [Research Software: Sustainable Development and Provision]. In Forschung & Lehre, vol. 28(5), pp. 380-381, 2021
The arrhythmogenesis of atrial fibrillation is associated with the presence of fibrotic atrial tissue. Not only fibrosis but also physiological anatomical variability of the atria and the thorax is reflected in altered morphology of the P wave in the 12-lead electrocardiogram (ECG). Distinguishing between the effects on the P wave induced by local atrial substrate changes and those caused by healthy anatomical variations is important to gauge the potential of the 12-lead ECG as a non-invasive and cost-effective tool for the early detection of fibrotic atrial cardiomyopathy to stratify atrial fibrillation propensity. In this work, we realized 54,000 combinations of different atria and thorax geometries from statistical shape models capturing anatomical variability in the general population. For each atrial model, 10 different volume fractions (0-45%) were defined as fibrotic. Electrophysiological simulations in sinus rhythm were conducted for each model combination and the respective 12-lead ECGs were computed. P wave features (duration, amplitude, dispersion, terminal force in V1) were extracted and compared between the healthy and the diseased model cohorts. All investigated feature values systematically increased or decreased with the left atrial volume fraction covered by fibrotic tissue; however, the value ranges overlapped between the healthy and the diseased cohort. Using all extracted P wave features as input values, the fibrotic left atrial volume fraction was estimated by a neural network with an absolute root mean square error of 8.78%. Our simulation results suggest that although all investigated P wave features vary strongly with anatomical properties, the combination of these features can contribute to a non-invasive estimate of the volume fraction of atrial fibrosis using ECG-based machine learning approaches.
AIMS: The treatment of atrial fibrillation beyond pulmonary vein isolation has remained an unsolved challenge. Targeting regions identified by different substrate mapping approaches for ablation resulted in ambiguous outcomes. With the effective refractory period being a fundamental prerequisite for the maintenance of fibrillatory conduction, this study aims at estimating the effective refractory period with clinically available measurements. METHODS AND RESULTS: A set of 240 simulations in a spherical model of the left atrium with varying model initialization, combination of cellular refractory properties, and size of a region of lowered effective refractory period was implemented to analyse the capabilities and limitations of cycle length mapping. The minimum observed cycle length and the 25% quantile were compared to the underlying effective refractory period. The density of phase singularities was used as a measure for the complexity of the excitation pattern. Finally, we employed the method in a clinical test of concept including five patients. Areas of lowered effective refractory period could be distinguished from their surroundings in simulated scenarios with successfully induced multi-wavelet re-entry. Larger areas and higher gradients in effective refractory period as well as complex activation patterns favour the method. The 25% quantile of cycle lengths in patients with persistent atrial fibrillation was found to range from 85 to 190 ms. CONCLUSION: Cycle length mapping is capable of highlighting regions of pathologic refractory properties. In combination with complementary substrate mapping approaches, the method fosters confidence to enhance the treatment of atrial fibrillation beyond pulmonary vein isolation particularly in patients with complex activation patterns.
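The central computation of cycle length mapping can be sketched directly (electrode names and activation times below are illustrative; clinical input would be local activation time series from a mapping catheter): since no cycle can be shorter than the local refractoriness, the minimum and a low quantile of the observed cycle lengths serve as surrogates for the local effective refractory period.

```python
import numpy as np

def cycle_length_map(activation_times):
    """Per-electrode cycle-length statistics from local activation times.

    activation_times: {electrode: sorted activation times in ms}
    Returns (minimum cycle length, 25% quantile) per electrode.
    """
    stats = {}
    for elec, times in activation_times.items():
        cls = np.diff(np.asarray(times, dtype=float))  # cycle lengths in ms
        stats[elec] = (float(cls.min()), float(np.quantile(cls, 0.25)))
    return stats

# Hypothetical activation times (ms) at two electrodes during fibrillation
times = {"e1": [0, 150, 310, 460, 640], "e2": [10, 120, 245, 350, 480]}
stats = cycle_length_map(times)  # e2: min 105 ms, 25% quantile 108.75 ms
```

A region whose quantile is clearly below its surroundings would then be flagged as having a lowered effective refractory period, as in the simulated scenarios of the study.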
Research software has become a central asset in academic research. It optimizes existing and enables new research methods, implements and embeds research knowledge, and constitutes an essential research product in itself. Research software must be sustainable in order to understand, replicate, reproduce, and build upon existing research or conduct new research effectively. In other words, software must be available, discoverable, usable, and adaptable to new needs, both now and in the future. Research software therefore requires an environment that supports sustainability. Hence, a change is needed in the way research software development and maintenance are currently motivated, incentivized, funded, structurally and infrastructurally supported, and legally treated. Failing to do so will threaten the quality and validity of research. In this paper, we identify challenges for research software sustainability in Germany and beyond, in terms of motivation, selection, research software engineering personnel, funding, infrastructure, and legal aspects. Besides researchers, we specifically address political and academic decision-makers to increase awareness of the importance and needs of sustainable research software practices. In particular, we recommend strategies and measures to create an environment for sustainable research software, with the ultimate goal of ensuring that software-driven research is valid, reproducible and sustainable, and that software is recognized as a first-class citizen in research. This paper is the outcome of two workshops run in Germany in 2019, at deRSE19 - the first International Conference of Research Software Engineers in Germany - and a dedicated DFG-supported follow-up workshop in Berlin.
OBJECTIVE: Atrial flutter (AFl) is a common arrhythmia that can be categorized according to different self-sustained electrophysiological mechanisms. The non-invasive discrimination of such mechanisms would greatly benefit ablative methods for AFl therapy, as the driving mechanisms would be described prior to the invasive procedure, helping to guide ablation. In the present work, we sought to implement recurrence quantification analysis (RQA) on 12-lead ECG signals from a computational framework to discriminate different electrophysiological mechanisms sustaining AFl. METHODS: 20 different AFl mechanisms were generated in 8 atrial models and were propagated into 8 torso models via forward solution, resulting in 1,256 sets of 12-lead ECG signals. Principal component analysis was applied on the 12-lead ECGs, and six RQA-based features were extracted from the most significant principal component scores in two different approaches: individual component RQA and spatial reduced RQA. RESULTS: In both approaches, RQA-based features were significantly sensitive to the dynamic structures underlying different AFl mechanisms. A hit rate as high as 67.7% was achieved when discriminating the 20 AFl mechanisms. RQA-based features estimated for a clinical sample suggested high agreement with the results found in the computational framework. CONCLUSION: RQA has been shown to be an effective method to distinguish different AFl electrophysiological mechanisms in a non-invasive computational framework. A clinical 12-lead ECG used as proof of concept showed the value of both the simulations and the methods. SIGNIFICANCE: The non-invasive discrimination of AFl mechanisms helps to delineate the ablation strategy, reducing the time and resources required to conduct invasive cardiac mapping and ablation procedures.
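RQA starts from a binary recurrence plot of the time-delay-embedded signal; features such as the recurrence rate are then read off this matrix. A minimal sketch (embedding dimension, delay, and threshold are illustrative assumptions; the paper's six features also include measures such as determinism that are not shown here):

```python
import numpy as np

def recurrence_matrix(x, dim=3, delay=2, eps=0.2):
    """Binary recurrence plot of a time-delay-embedded signal."""
    n = len(x) - (dim - 1) * delay
    # Rows of emb are delay vectors [x(t), x(t+delay), x(t+2*delay), ...]
    emb = np.stack([x[i * delay : i * delay + n] for i in range(dim)], axis=1)
    dist = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return (dist < eps).astype(int)

def recurrence_rate(rp):
    """Fraction of recurrent points, a basic RQA feature."""
    return rp.sum() / rp.size

t = np.linspace(0, 4 * np.pi, 200)
rp = recurrence_matrix(np.sin(t))  # periodic signal -> diagonal line structure
rr = recurrence_rate(rp)
```

For a periodic signal like the sine above, the recurrence plot shows long diagonal lines; irregular fibrillatory dynamics break these lines up, which is what the RQA features quantify.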
Acute ischemic stroke is a major health problem with a high mortality rate and a high risk for permanent disabilities. Selective brain hypothermia has the neuroprotective potential to possibly lower cerebral harm. A recently developed catheter system enables combining endovascular blood cooling and thrombectomy using the same endovascular access. By using the penumbral perfusion via leptomeningeal collaterals, the catheter aims at enabling a cold reperfusion, which mitigates the risk of a reperfusion injury. However, cerebral circulation is highly patient-specific and can vary greatly. Since direct measurement of the remaining perfusion and of the temperature decrease induced by the catheter is not possible without additional harm to the patient, computational modeling provides an alternative to gain knowledge about the resulting cerebral temperature decrease. In this work, we present a brain temperature model with a realistic division into gray and white matter and consideration of spatially resolved perfusion. Furthermore, it includes a detailed anatomy of the cerebral circulation with the possibility of personalization based on real patient anatomy. To evaluate the catheter's ability to achieve cold reperfusion and to analyze its general performance, we calculated the decrease in brain temperature in case of a large vessel occlusion in the middle cerebral artery (MCA) for different scenarios of cerebral arterial anatomy. Congenital arterial variations in the circle of Willis had a distinct influence on the cooling effect and the resulting spatial temperature distribution before vessel recanalization. Independent of the branching configurations, the model predicted a cold reperfusion due to a strong temperature decrease after recanalization (1.4-2.2 °C after 25 min of cooling, recanalization after 20 min of cooling).
Our model illustrates the effectiveness of endovascular cooling in combination with mechanical thrombectomy and its results serve as an adequate substitute for temperature measurement in a clinical setting in the absence of direct intraparenchymal temperature probes.
The contraction of the human heart is a complex process resulting from the interaction of internal and external forces. In current clinical routine, the resulting deformation can be imaged during an entire heart beat. However, the active tension development cannot be measured in vivo but may provide valuable diagnostic information. In this work, we present a novel numerical method for solving an inverse problem of cardiac biomechanics: estimating the dynamic active tension field, provided the motion of the myocardial wall is known. This ill-posed non-linear problem is solved using second-order Tikhonov regularization in space and time. We conducted a sensitivity analysis by varying the fiber orientation in the range of measurement accuracy. To achieve an RMSE <20% of the maximal tension, the fiber orientation needs to be provided with an accuracy of 10°. Additionally, variation was added to the deformation data in the range of segmentation accuracy. Here, imposing temporal regularization led to an eightfold decrease in the error, down to 12%. Furthermore, non-contracting regions representing myocardial infarct scars were introduced in the left ventricle and could be identified accurately in the inverse solution (sensitivity >0.95). The results obtained with non-matching input data are promising and indicate directions for further improvement of the method. In the future, this method will be extended to estimate the active tension field based on motion data from clinical images, which could provide important insights in terms of a new diagnostic tool for the identification and treatment of diseased heart tissue.
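The role of the second-order Tikhonov penalty can be illustrated in one dimension: penalizing the discrete second derivative favors smooth tension fields and suppresses noise. A toy sketch under strong simplifying assumptions (identity forward operator, synthetic smooth "tension" profile; the actual method uses a nonlinear mechanical model and regularizes in both space and time):

```python
import numpy as np

def second_diff_matrix(n):
    """Discrete second-derivative operator used as Tikhonov penalty."""
    L = np.zeros((n - 2, n))
    for i in range(n - 2):
        L[i, i : i + 3] = [1.0, -2.0, 1.0]
    return L

def tikhonov_solve(A, b, lam):
    """Regularized least squares: argmin ||Ax - b||^2 + lam * ||Lx||^2."""
    L = second_diff_matrix(A.shape[1])
    return np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ b)

rng = np.random.default_rng(0)
n = 50
x_true = np.sin(np.linspace(0, np.pi, n))      # smooth "tension" profile
A = np.eye(n)                                  # toy forward operator
b = x_true + 0.05 * rng.standard_normal(n)     # noisy "deformation" data
x_hat = tikhonov_solve(A, b, lam=5.0)          # smoothed reconstruction
```

Because the penalty only damps curvature, a smooth underlying field passes through with little bias while high-frequency noise is strongly attenuated, which mirrors the eightfold error reduction reported for temporal regularization.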
Background: Hypertrophic cardiomyopathy (HCM) is typically caused by mutations in sarcomeric genes leading to cardiomyocyte disarray, replacement fibrosis, impaired contractility, and elevated filling pressures. These varying tissue properties are associated with certain strain patterns that may allow a diagnosis to be established by means of non-invasive imaging without the necessity of harmful myocardial biopsies or contrast agent application. With a numerical study, we aim to answer how the variability in each of these mechanisms contributes to altered mechanics of the left ventricle (LV) and whether the deformation obtained in in-silico experiments is comparable to values reported from clinical measurements. Methods: We conducted an in-silico sensitivity study on physiological and pathological mechanisms potentially underlying the clinical HCM phenotype. The deformation of the four-chamber heart models was simulated using a finite-element mechanical solver with a sliding boundary condition to mimic the tissue surrounding the heart. Furthermore, a closed-loop circulatory model delivered the pressure values acting on the endocardium. Deformation measures and the mechanical behavior of the heart models were evaluated globally and regionally. Results: Hypertrophy of the LV affected the course of strain, strain rate, and wall thickening: the root-mean-squared difference of the wall thickening between control (mean thickness 10 mm) and hypertrophic geometries (17 mm) was >10%. A reduction of active force development by 40% led to less overall deformation: maximal radial strain reduced from 26 to 21%. A fivefold increase in tissue stiffness caused a more homogeneous distribution of the strain values among 17 heart segments. Fiber disarray led to minor changes in the circumferential and radial strain. A combination of pathological mechanisms led to reduced and slower deformation of the LV and halved the longitudinal shortening of the left atrium (LA).
Conclusions: This study uses a computer model to determine the changes in LV deformation caused by pathological mechanisms that are presumed to underlie HCM. This knowledge can complement imaging-derived information to obtain a more accurate diagnosis of HCM.
The term "In Silico Trial" indicates the use of computer modelling and simulation to evaluate the safety and efficacy of a medical product, whether a drug, a medical device, a diagnostic product or an advanced therapy medicinal product. Predictive models are positioned as new methodologies for the development and the regulatory evaluation of medical products. New methodologies are qualified by regulators such as FDA and EMA through formal processes, where a first step is the definition of the Context of Use (CoU), which is a concise description of how the new methodology is intended to be used in the development and regulatory assessment process. As In Silico Trials are a disruptively innovative class of new methodologies, it is important to have a list of possible CoUs highlighting potential applications for the development of the related regulatory science. This review paper presents the result of a consensus process that took place in the InSilicoWorld Community of Practice, an online forum for experts in in silico medicine. The experts involved identified 46 descriptions of possible CoUs, which were organised into a candidate taxonomy of nine CoU categories. Examples of 31 CoUs were identified in the available literature; the remaining 15 should, for now, be considered speculative.
L. Azzolin, S. Schuler, O. Dössel, and A. Loewe. A Reproducible Protocol to Assess Arrhythmia Vulnerability: Pacing at the End of the Effective Refractory Period. In Frontiers in Physiology, vol. 12, pp. 656411, 2021
In both clinical and computational studies, different pacing protocols are used to induce arrhythmia, and non-inducibility is often considered as the endpoint of treatment. The need for a standardized methodology is urgent since the choice of the protocol used to induce arrhythmia could lead to contrasting results, e.g., in assessing atrial fibrillation (AF) vulnerability. Therefore, we propose a novel method, pacing at the end of the effective refractory period (PEERP), and compare it to state-of-the-art protocols, such as phase singularity distribution (PSD) and rapid pacing (RP), in a computational study. All methods were tested by pacing from evenly distributed endocardial points at 1 cm inter-point distance in two bi-atrial geometries. Seven different atrial models were implemented: five cases without specific AF-induced remodeling but with decreasing global conduction velocity and two persistent AF cases with an increasing amount of fibrosis resembling different substrate remodeling stages. Compared with PSD and RP, PEERP induced a larger variety of arrhythmia complexity requiring, on average, only 2.7 extra-stimuli and 3 s of simulation time to initiate reentry. Moreover, PEERP and PSD were the protocols which unveiled a larger number of areas vulnerable to sustaining stable, long-living reentries compared to RP. Finally, PEERP can foster standardization and reproducibility, since, in contrast to the other protocols, it is a parameter-free method. Furthermore, we discuss its clinical applicability. We conclude that the choice of the inducing protocol has an influence on both initiation and maintenance of AF, and we propose and provide PEERP as a reproducible method to assess arrhythmia vulnerability.
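The protocol's defining step, determining the effective refractory period (ERP) so that the extra stimulus can be delivered right at its end, can be sketched as a search over S1-S2 coupling intervals. The `captures` predicate below is a hypothetical stand-in for running the electrophysiological model, and a binary search is used for brevity; an actual implementation would instead decrement the coupling interval in fixed steps:

```python
def find_erp(captures, lo=150.0, hi=400.0, tol=1.0):
    """Binary search for the effective refractory period (ERP):
    the shortest S1-S2 coupling interval (ms) that still captures."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if captures(mid):
            hi = mid   # capture -> try a shorter coupling interval
        else:
            lo = mid   # no capture -> the ERP is longer
    return hi

# Stub excitability model: capture iff the coupling interval exceeds 230 ms
erp = find_erp(lambda ci: ci > 230.0)
# PEERP: deliver the S2 stimulus at the earliest capturing interval,
# i.e. right at the end of the effective refractory period
s2_interval = erp
```

Because the stimulation timing is derived from the tissue's own refractoriness rather than from user-chosen pacing rates, the protocol needs no tunable parameters, which is the basis of its reproducibility claim.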
A. Wachter, J. Kost, and W. Nahm. Simulation-Based Estimation of the Number of Cameras Required for 3D Reconstruction in a Narrow-Baseline Multi-Camera Setup. In Journal of Imaging, vol. 7(5), pp. 87, 2021
Graphical visualization systems are a common clinical tool for displaying digital images and three-dimensional volumetric data. These systems provide a broad spectrum of information to support physicians in their clinical routine. For example, the field of radiology enjoys unrestricted options for interaction with the data, since information is pre-recorded and available entirely in digital form. However, some fields, such as microsurgery, do not benefit from this yet. Microscopes, endoscopes, and laparoscopes show the surgical site as it is. To allow free data manipulation and information fusion, 3D digitization of surgical sites is required. We aimed to find the number of cameras needed to add this functionality to surgical microscopes. For this, we performed in silico simulations of the 3D reconstruction of representative models of microsurgical sites with different numbers of cameras in narrow-baseline setups. Our results show that eight independent camera views are preferable, while at least four are necessary for a digital surgical site. In most cases, eight cameras allow the reconstruction of over 99% of the visible part. With four cameras, still over 95% can be achieved. This answers one of the key questions for the development of a prototype microscope. In the future, such a system can provide functionality that is unattainable today.
Background: Rate-varying S1S2 stimulation protocols can be used for restitution studies to characterize atrial substrate, ionic remodeling, and atrial fibrillation risk. Clinical restitution studies with numerous patients create large amounts of these data. Thus, an automated pipeline to evaluate clinically acquired S1S2 stimulation protocol data must provide consistent, robust, reproducible, and precise evaluation of local activation times, electrogram amplitude, and conduction velocity. Here, we present the CVAR-Seg pipeline, developed focusing on three challenges: (i) No previous knowledge of the stimulation parameters is available; thus, arbitrary protocols are supported. (ii) The pipeline remains robust under different noise conditions. (iii) The pipeline supports segmentation of atrial activities in close temporal proximity to the stimulation artifact, which is challenging due to the larger amplitude and slope of the stimulus compared to the atrial activity. Methods and Results: The S1 basic cycle length was estimated by time interval detection. Stimulation time windows were segmented by detecting synchronous peaks in different channels surpassing an amplitude threshold and identifying time intervals between detected stimuli. Elimination of the stimulation artifact by a matched filter allowed detection of local activation times in temporal proximity. A non-linear signal energy operator was used to segment periods of atrial activity. Geodesic and Euclidean inter-electrode distances allowed approximation of conduction velocity. The automatic segmentation performance of the CVAR-Seg pipeline was evaluated on 37 synthetic datasets with decreasing signal-to-noise ratios. Noise was modeled by reconstructing the frequency spectrum of clinical noise. The pipeline retained a median local activation time error below a single sample (1 ms) for signal-to-noise ratios as low as 0 dB, representing a high clinical noise level.
As a proof of concept, the pipeline was tested on a CARTO case of a paroxysmal atrial fibrillation patient and yielded plausible restitution curves for conduction speed and amplitude. Conclusion: The proposed openly available CVAR-Seg pipeline promises fast, fully automated, robust, and accurate evaluations of atrial signals even with low signal-to-noise ratios. This is achieved by solving the proximity problem of stimulation and atrial activity to enable standardized evaluation without introducing human bias for large data sets.
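The atrial-activity segmentation step can be illustrated with the classical Teager-Kaiser energy operator on a single channel; a minimal sketch with an invented toy electrogram (the pipeline's actual operator, smoothing, and thresholds may differ):

```python
import numpy as np

def teager_energy(x):
    """Teager-Kaiser energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def active_segments(x, rel_threshold=0.1):
    """Mark samples whose smoothed signal energy exceeds a fraction of the maximum."""
    psi = np.abs(teager_energy(x))
    env = np.convolve(psi, np.ones(9) / 9.0, mode="same")  # short moving average
    return env > rel_threshold * env.max()

# toy electrogram: low-amplitude noise plus one sharp deflection at 250 ms
fs = 1000
t = np.arange(0, 0.5, 1 / fs)
rng = np.random.default_rng(0)
x = 0.02 * rng.standard_normal(t.size)
x += np.exp(-((t - 0.25) ** 2) / (2 * 0.002 ** 2)) * np.sin(2 * np.pi * 200 * t)
mask = active_segments(x)
```

The resulting mask flags only the samples around the deflection, which is the behavior needed to isolate atrial activity between stimulation artifacts.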
A. Naber, M. Reiß, and W. Nahm. Transit Time Measurement in Indicator Dilution Curves: Overcoming the Missing Ground Truth and Quantifying the Error. In Frontiers in Physiology, vol. 12, pp. 1-16, 2021
The vascular function of a vessel can be qualitatively and intraoperatively checked by recording the blood dynamics inside the vessel via fluorescence angiography (FA). Although FA is the state of the art in proving the existence of blood flow during interventions such as bypass surgery, it still lacks a quantitative blood flow measurement that could decrease the recurrence rate and postsurgical mortality. Previous approaches show that the measured flow has a significant deviation compared to the gold standard reference (ultrasonic flow meter). In order to systematically address the possible sources of error, we investigated the error in transit time measurement of an indicator. Obtaining in vivo indicator dilution curves with a known ground truth is complex and often not possible. Further, the error in transit time measurement should be quantified and reduced. To tackle both issues, we first computed many diverse indicator dilution curves using an in silico simulation of the indicator’s flow. Second, we post-processed these curves to mimic measured signals. Finally, we fitted mathematical models (parabola, gamma variate, local density random walk, and mono-exponential model) to re-continualize the obtained discrete indicator dilution curves and calculate the time delay of two analytical functions. This re-continualization showed an increase in the temporal accuracy up to a sub-sample accuracy. Thereby, the Local Density Random Walk (LDRW) model performed best using the cross-correlation of the first derivative of both indicator curves with a cutting of the data at 40% of the peak intensity. The error in frames depends on the noise level; for a signal-to-noise ratio (SNR) of 20 dB and a sampling rate of fs = 60 Hz, it is fs⁻¹ · 0.25 (±0.18), i.e., smaller than the distance between two consecutive samples.
The accurate determination of the transit time and the quantification of the error allow the calculation of the error propagation onto the flow measurement. Both can assist surgeons as an intraoperative quality check and thereby reduce the recurrence rate and post-surgical mortality.
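The core delay estimation can be sketched as follows: cross-correlate the first derivatives of the two dilution curves and refine the peak to sub-sample accuracy. Parabolic peak interpolation serves here as a simple stand-in for the model-based re-continualization (e.g., the LDRW fit) described above; the curve shapes and rates are invented for illustration.

```python
import numpy as np

def transit_time(c1, c2, fs):
    """Delay of dilution curve c2 relative to c1 from the cross-correlation
    of the first derivatives, refined to sub-sample accuracy by fitting a
    parabola through the correlation peak and its two neighbors."""
    d1, d2 = np.diff(c1), np.diff(c2)
    xc = np.correlate(d2 - d2.mean(), d1 - d1.mean(), mode="full")
    k = int(np.argmax(xc))
    lag = k - (len(d1) - 1)                      # integer lag in samples
    if 0 < k < len(xc) - 1:                      # parabolic sub-sample refinement
        y0, y1, y2 = xc[k - 1], xc[k], xc[k + 1]
        denom = y0 - 2 * y1 + y2
        if denom != 0:
            lag += 0.5 * (y0 - y2) / denom
    return lag / fs

# two gamma-variate-like dilution curves shifted by 12.5 ms (2.5 samples)
fs = 200.0
t = np.arange(0, 5, 1 / fs)
gamma = lambda t0: np.clip(t - t0, 0, None) ** 2 * np.exp(-np.clip(t - t0, 0, None) / 0.4)
tt = transit_time(gamma(1.0), gamma(1.0125), fs)
```

Because the true shift of 2.5 samples is not an integer multiple of the sampling interval, the integer correlation peak alone would be off by half a sample; the parabolic refinement recovers the sub-sample offset, mirroring the accuracy gain reported for re-continualization.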
Mathematical models of the human heart are evolving to become a cornerstone of precision medicine and support clinical decision making by providing a powerful tool to understand the mechanisms underlying pathophysiological conditions. In this study, we present a detailed mathematical description of a fully coupled multi-scale model of the human heart, including electrophysiology, mechanics, and a closed-loop model of circulation. State-of-the-art models based on human physiology are used to describe membrane kinetics, excitation-contraction coupling and active tension generation in the atria and the ventricles. Furthermore, we highlight ways to adapt this framework to patient specific measurements to build digital twins. The validity of the model is demonstrated through simulations on a personalized whole heart geometry based on magnetic resonance imaging data of a healthy volunteer. Additionally, the fully coupled model was employed to evaluate the effects of a typical atrial ablation scar on the cardiovascular system. With this work, we provide an adaptable multi-scale model that allows a comprehensive personalization from ion channels to the organ level enabling digital twin modeling.
In patients with atrial fibrillation, intracardiac electrogram signal amplitude is known to decrease with increased structural tissue remodeling, referred to as fibrosis. In addition to the isolation of the pulmonary veins, fibrotic sites are considered a suitable target for catheter ablation. However, it remains an open challenge to find fibrotic areas and to differentiate their density and transmurality. This study aims to identify the volume fraction and transmurality of fibrosis in the atrial substrate. Simulated cardiac electrograms, combined with a generalized model of clinical noise, reproduce clinically measured signals. Our hybrid dataset approach combines simulated and clinical electrograms to train a decision tree classifier to characterize the fibrotic atrial substrate. This approach captures different dynamics of the electrical propagation reflected on healthy electrogram morphology and synergistically combines it with synthetic fibrotic electrograms from experiments. The machine learning algorithm was tested on five patients and compared against clinical voltage maps as a proof of concept, distinguishing non-fibrotic from fibrotic tissue and characterizing the patient's fibrotic tissue in terms of density and transmurality. The proposed approach can be used to overcome a single voltage cut-off value to identify fibrotic tissue and guide ablation targeting fibrotic areas.
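The classification step can be sketched with a standard decision tree, e.g. in scikit-learn. The two features, their distributions, and all numbers below are invented for illustration and are not those of the study:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 600
# illustrative per-electrogram features: peak-to-peak voltage [mV] and a
# fractionation index (deflection count); fibrotic sites tend to show
# lower voltage and stronger fractionation
healthy = np.column_stack([rng.normal(1.8, 0.5, n), rng.poisson(2, n)])
fibrotic = np.column_stack([rng.normal(0.4, 0.2, n), rng.poisson(6, n)])
X = np.vstack([healthy, fibrotic])
y = np.r_[np.zeros(n), np.ones(n)]            # 0 = non-fibrotic, 1 = fibrotic

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

Unlike a single voltage cut-off, the tree combines several thresholds across features, which is the property the abstract highlights.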
Objective: To investigate cardiac activation maps estimated using electrocardiographic imaging and to find methods reducing line-of-block (LoB) artifacts, while preserving real LoBs. Methods: Body surface potentials were computed for 137 simulated ventricular excitations. Subsequently, the inverse problem was solved to obtain extracellular potentials (EP) and transmembrane voltages (TMV). From these, activation times (AT) were estimated using four methods and compared to the ground truth. This process was evaluated with two cardiac mesh resolutions. Factors contributing to LoB artifacts were identified by analyzing the impact of spatial and temporal smoothing on the morphology of source signals. Results: AT estimation using a spatiotemporal derivative performed better than using a temporal derivative. Compared to deflection-based AT estimation, correlation-based methods were less prone to LoB artifacts but performed worse in identifying real LoBs. Temporal smoothing could eliminate artifacts for TMVs but not for EPs, which could be linked to their temporal morphology. TMVs led to more accurate ATs on the septum than EPs. Mesh resolution had a negligible effect on inverse reconstructions, but small distances were important for cross-correlation-based estimation of AT delays. Conclusion: LoB artifacts are mainly caused by the inherent spatial smoothing effect of the inverse reconstruction. Among the configurations evaluated, only deflection-based AT estimation in combination with TMVs and strong temporal smoothing can prevent LoB artifacts, while preserving real LoBs. Significance: Regions of slow conduction are of considerable clinical interest and LoB artifacts observed in non-invasive ATs can lead to misinterpretations. We addressed this problem by identifying factors causing such artifacts and methods to reduce them.
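A deflection-based activation time estimate from transmembrane voltages can be sketched as below (the simple temporal-derivative variant; the best-performing spatiotemporal estimator in the study additionally incorporates spatial derivatives, and the toy signals are invented):

```python
import numpy as np

def activation_times(vm, fs):
    """Per-node activation time as the instant of steepest upstroke
    (maximum temporal derivative of the transmembrane voltage).
    vm: array of shape (n_nodes, n_samples), fs: sampling rate in Hz."""
    dvdt = np.gradient(vm, axis=1) * fs          # dV/dt per node
    return np.argmax(dvdt, axis=1) / fs          # seconds

# two toy nodes with sigmoidal upstrokes at 30 ms and 55 ms
fs = 1000.0
t = np.arange(0, 0.2, 1 / fs)
vm = np.stack([
    -85 + 120 / (1 + np.exp(-(t - 0.030) / 0.001)),
    -85 + 120 / (1 + np.exp(-(t - 0.055) / 0.001)),
])
at = activation_times(vm, fs)
```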
Aims Atrial cardiomyopathy (ACM) is associated with new-onset atrial fibrillation, arrhythmia recurrence after pulmonary vein isolation (PVI) and increased risk for stroke. At present, diagnosis of ACM is feasible by endocardial contact mapping of left atrial (LA) low-voltage substrate (LVS) or late gadolinium-enhanced magnetic resonance imaging, but their complexity limits a widespread use. The aim of this study was to assess non-invasive body surface electrocardiographic imaging (ECGI) as a novel clinical tool for diagnosis of ACM compared with endocardial mapping. Methods and results Thirty-nine consecutive patients (66 ± 9 years, 85% male) presenting for their first PVI for persistent atrial fibrillation underwent ECGI in sinus rhythm using a 252-electrode-array mapping system. Subsequently, high-density LA voltage and biatrial activation maps (mean 2090 ± 488 sites) were acquired in sinus rhythm prior to PVI. Freedom from arrhythmia recurrence was assessed within 12 months follow-up. Increased duration of total atrial conduction time (TACT) in ECGI was associated with both increased atrial activation time and extent of LA-LVS in endocardial contact mapping (r = 0.77 and r = 0.66, P < 0.0001 respectively). Atrial cardiomyopathy was found in 23 (59%) patients. A TACT value of 148 ms identified ACM with 91.3% sensitivity and 93.7% specificity. Arrhythmia recurrence occurred in 15 (38%) patients during a follow-up of 389 ± 55 days. Freedom from arrhythmia was significantly higher in patients with a TACT <148 ms compared with patients with a TACT ≥148 ms (82.4% vs. 45.5%, P = 0.019). Conclusion Analysis of TACT in non-invasive ECGI allows diagnosis of patients with ACM, which is associated with a significantly increased risk for arrhythmia recurrence following PVI.
S. Schuler, N. Pilia, D. Potyagaylo, and A. Loewe. Cobiveco: Consistent biventricular coordinates for precise and intuitive description of position in the heart – with MATLAB implementation. In Medical Image Analysis, vol. 74, pp. 102247, 2021
Ventricular coordinates are widely used as a versatile tool for various applications that benefit from a description of local position within the heart. However, the practical usefulness of ventricular coordinates is determined by their ability to meet application-specific requirements. For regression-based estimation of biventricular position, for example, a symmetric definition of coordinate directions in both ventricles is important. For the transfer of data between different hearts as another use case, the consistency of coordinate values across different geometries is particularly relevant. To meet these requirements, we compare different approaches to compute coordinates and present Cobiveco, a symmetric, consistent and intuitive biventricular coordinate system that builds upon existing coordinate systems, but overcomes some of their limitations. A novel one-way transfer error is introduced to assess the consistency of the coordinates. Normalized distances along bijective trajectories between two boundaries were found to be superior to solutions of Laplace’s equation for defining coordinate values, as they show better linearity in space. Evaluation of transfer and linearity errors on 36 patient geometries revealed a more than 4-fold improvement compared to a state-of-the-art method. Finally, we show two application examples underlining the relevance for cardiac data processing. Cobiveco MATLAB code is available under a permissive open-source license.
The ECG is one of the most commonly used non-invasive tools to gain insights into the electrical functioning of the heart. It has been crucial as a foundation in the creation and validation of in silico models describing the underlying electrophysiological processes. However, so far, the contraction of the heart and its influences on the ECG have mainly been overlooked in in silico models. As the heart contracts and moves, so do the electrical sources within the heart responsible for the signal on the body surface, thus potentially altering the ECG. To illuminate these aspects, we developed a human 4-chamber electro-mechanically coupled whole heart in silico model and embedded it within a torso model. Our model faithfully reproduces measured 12-lead ECG traces, circulatory characteristics, as well as physiological ventricular rotation and atrioventricular valve plane displacement. We compare our dynamic model to three non-deforming ones in terms of standard clinically used ECG leads (Einthoven and Wilson) and body surface potential maps (BSPM). The non-deforming models consider the heart at its ventricular end-diastatic, end-diastolic and end-systolic states. The standard leads show negligible differences during P-Wave and QRS-Complex, yet during T-Wave the leads closest to the heart show prominent differences in amplitude. When looking at the BSPM, there are no notable differences during the P-Wave, but effects of cardiac motion can be observed already during the QRS-Complex, increasing further during the T-Wave. We conclude that for the modeling of activation (P-Wave/QRS-Complex), the associated effort of simulating a complete electro-mechanical approach is not worth the computational cost. But when looking at ventricular repolarization (T-Wave) in standard leads as well as BSPM, there are areas where the signal can be influenced by cardiac motion of the heart to an extent that should not be ignored.
We aim to provide a critical appraisal of basic concepts underlying signal recording and processing technologies applied for (i) atrial fibrillation (AF) mapping to unravel AF mechanisms and/or identifying target sites for AF therapy and (ii) AF detection, to optimize usage of technologies, stimulate research aimed at closing knowledge gaps, and develop ideal AF recording and processing technologies. Recording and processing techniques for assessment of electrical activity during AF essential for diagnosis and guiding ablative therapy including body surface electrocardiograms (ECG) and endo- or epicardial electrograms (EGM) are evaluated. Discussion of (i) differences in uni-, bi-, and multi-polar (omnipolar/Laplacian) recording modes, (ii) impact of recording technologies on EGM morphology, (iii) global or local mapping using various types of EGM involving signal processing techniques including isochronal-, voltage- fractionation-, dipole density-, and rotor mapping, enabling derivation of parameters like atrial rate, entropy, conduction velocity/direction, (iv) value of epicardial and optical mapping, (v) AF detection by cardiac implantable electronic devices containing various detection algorithms applicable to stored EGMs, (vi) contribution of machine learning (ML) to further improvement of signal processing technologies. Recording and processing of EGM (or ECG) are the cornerstones of (body surface) mapping of AF. Currently available AF recording and processing technologies are mainly restricted to specific applications or have technological limitations. Improving AF mapping by obtaining the highest-fidelity source signals (e.g. catheter–electrode combinations) for signal processing (e.g. filtering, digitization, and noise elimination) is of utmost importance.
Novel acquisition instruments (multi-polar catheters combined with improved physical modelling and ML techniques) will enable enhanced and automated interpretation of EGM recordings in the near future.
Atrial flutter (AFL) is a common atrial arrhythmia typically characterized by electrical activity propagating around specific anatomical regions. It is usually treated with catheter ablation. However, the identification of rotational activities is not straightforward, and requires an intense effort during the first phase of the electrophysiological (EP) study, i.e., the mapping phase, in which an anatomical 3D model is built and electrograms (EGMs) are recorded. In this study, we modeled the electrical propagation pattern of AFL (measured during mapping) using network theory (NT), a well-known field of research from the computer science domain. The main advantage of NT is the large number of available algorithms that can efficiently analyze the network. Using directed network mapping, we employed a cycle-finding algorithm to detect all cycles in the network, resembling the main propagation pattern of AFL. The method was tested on two subjects in sinus rhythm, six in-silico simulations of an experimental model, and 10 subjects diagnosed with AFL who underwent a catheter ablation. The algorithm correctly detected the electrical propagation of both sinus rhythm cases and in-silico simulations. Regarding the AFL cases, arrhythmia mechanisms were either totally or partially identified in most of the cases (8 out of 10), i.e., cycles around the mitral valve, tricuspid valve and figure-of-eight reentries. The other two cases presented a poor mapping quality or a major complexity related to previous ablations, large areas of fibrotic tissue, etc. Directed network mapping represents an innovative tool that showed promising results in identifying AFL mechanisms in an automatic fashion. Further investigations are needed to assess the reliability of the method in different clinical scenarios.
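The cycle-finding step on the directed activation network can be sketched in pure Python as a DFS enumeration of elementary cycles (Johnson's algorithm or a graph library would be used for large clinical maps; the toy graph below is invented, with a four-node loop mimicking a macro-reentrant circuit and a passive bystander branch):

```python
def simple_cycles(adj):
    """Enumerate elementary cycles of a small directed graph given as an
    adjacency dict. Each cycle is reported exactly once, rooted at its
    smallest node (only nodes larger than the root are explored)."""
    cycles = []

    def dfs(start, node, path, on_path):
        for nxt in adj.get(node, []):
            if nxt == start:
                cycles.append(path[:])            # closed the loop back to the root
            elif nxt > start and nxt not in on_path:
                on_path.add(nxt)
                dfs(start, nxt, path + [nxt], on_path)
                on_path.remove(nxt)

    for start in sorted(adj):
        dfs(start, start, [start], {start})
    return cycles

# nodes are mapping sites; an edge u -> v means activation propagated u -> v
adj = {"a": ["b"], "b": ["c"], "c": ["d"], "d": ["a", "e"], "e": []}
cycles = simple_cycles(adj)
```

The single detected cycle a → b → c → d corresponds to the reentrant propagation pattern; the dead-end branch to e is correctly ignored.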
The human heart is a masterpiece of the highest complexity coordinating multi-physics aspects on a multi-scale range. Thus, modeling the cardiac function to reproduce physiological characteristics and diseases remains challenging. Especially the complex simulation of the blood's hemodynamics and its interaction with the myocardial tissue requires a high accuracy of the underlying computational models and solvers. These demanding aspects make whole-heart fully-coupled simulations computationally highly expensive and call for simpler but still accurate models. While the mechanical deformation during the heart cycle drives the blood flow, less is known about the feedback of the blood flow onto the myocardial tissue. To solve the fluid-structure interaction problem, we suggest a cycle-to-cycle coupling of the structural deformation and the fluid dynamics. In a first step, the displacement of the endocardial wall in the mechanical simulation serves as a unidirectional boundary condition for the fluid simulation. After a complete heart cycle of fluid simulation, a spatially resolved pressure factor (PF) is extracted and returned to the next iteration of the solid mechanical simulation, closing the loop of the iterative coupling procedure. All simulations were performed on an individualized whole heart geometry. The effect of the sequential coupling was assessed by global measures such as the change in deformation and, as an example of diagnostically relevant information, the particle residence time. The mechanical displacement was up to 2 mm after the first iteration. In the second iteration, the deviation was in the sub-millimeter range, implying that already one iteration of the proposed cycle-to-cycle coupling is sufficient to converge to a coupled limit cycle. Cycle-to-cycle coupling between cardiac mechanics and fluid dynamics can be a promising approach to account for fluid-structure interaction with low computational effort.
In an individualized healthy whole-heart model, one iteration sufficed to obtain converged and physiologically plausible results.
Electrical impedance tomography is clinically used to trace ventilation related changes in electrical conductivity of lung tissue. Estimating regional pulmonary perfusion using electrical impedance tomography is still a matter of research. To support clinical decision making, reliable bedside information of pulmonary perfusion is needed. We introduce a method to robustly detect pulmonary perfusion based on indicator-enhanced electrical impedance tomography and validate it by dynamic multidetector computed tomography in two experimental models of acute respiratory distress syndrome. The acute injury was induced in a sublobar segment of the right lung by saline lavage or endotoxin instillation in eight anesthetized mechanically ventilated pigs. For electrical impedance tomography measurements, a conductive bolus (10% saline solution) was injected into the right ventricle during breath hold. Electrical impedance tomography perfusion images were reconstructed by linear and normalized Gauss-Newton reconstruction on a finite element mesh with subsequent element-wise signal and feature analysis. An iodinated contrast agent was used to compute pulmonary blood flow via dynamic multidetector computed tomography. Spatial perfusion was estimated based on first-pass indicator dilution for both electrical impedance and multidetector computed tomography and compared by Pearson correlation and Bland-Altman analysis. Strong correlation was found in dorsoventral (r = 0.92) and in right-to-left directions (r = 0.85) with good limits of agreement of 8.74% in eight lung segments. With a robust electrical impedance tomography perfusion estimation method, we found strong agreement between multidetector computed and electrical impedance tomography perfusion in healthy and regionally injured lungs and demonstrated feasibility of electrical impedance tomography perfusion imaging.
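The agreement analysis between the two modalities can be sketched with Pearson correlation and Bland-Altman limits of agreement; the eight per-segment perfusion values below are invented for illustration and are not the study's data:

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman bias and 95% limits of agreement between two methods."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# illustrative regional perfusion fractions [%] in eight lung segments
mdct = np.array([22.0, 15.5, 11.0, 9.0, 13.5, 10.0, 12.0, 7.0])
eit  = np.array([20.5, 17.0, 10.0, 10.5, 12.0, 11.0, 13.0, 6.0])

r = np.corrcoef(mdct, eit)[0, 1]           # Pearson correlation
bias, (lo, hi) = bland_altman(eit, mdct)   # bias and limits of agreement
```

A high correlation with narrow, near-symmetric limits of agreement is the pattern reported in the abstract for EIT vs. dynamic multidetector CT perfusion.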
The electrocardiogram (ECG) is a standard cost-efficient and non-invasive tool for the early detection of various cardiac diseases. Quantifying different timing and amplitude features of and in between the single ECG waveforms can reveal important information about the underlying (dys-)function of the heart. Determining these features requires the detection of fiducial points that mark the on- and offset as well as the peak of each ECG waveform (P wave, QRS complex, T wave). Manually setting these points is time-consuming and requires a physician’s expert knowledge. Therefore, the highly modular ECGdeli toolbox for MATLAB was developed, which is capable of filtering clinically recorded 12-lead ECG signals and detecting the fiducial points, also called delineation. It is one of the few open toolboxes offering ECG delineation for P waves, T Waves and QRS complexes. The algorithms provided were evaluated with the QT database, an ECG database comprising 105 signals with fiducial points annotated by clinicians. The median difference between the fiducial points set by the boundary detection algorithm and the clinical annotations serving as a ground truth is less than 4 samples (16 ms) for the P wave and the QRS complex markers.
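The reported evaluation metric, the median deviation between algorithmically detected and clinically annotated fiducial points, can be sketched as below; the sample indices are invented, and 250 Hz is the sampling rate of the QT database (hence 4 samples = 16 ms):

```python
import numpy as np

def median_delineation_error(detected, annotated, fs):
    """Median absolute fiducial-point error, in samples and in milliseconds.
    detected/annotated: sample indices of corresponding fiducial points."""
    err = np.abs(np.asarray(detected) - np.asarray(annotated))
    med = float(np.median(err))
    return med, med / fs * 1000.0

fs = 250  # Hz, QT database sampling rate
det = [101, 242, 388, 511, 660]   # illustrative detected P-wave onsets
ann = [100, 245, 388, 509, 663]   # illustrative clinical annotations
err_samples, err_ms = median_delineation_error(det, ann, fs)
```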
Computer modeling of the electrophysiology of the heart has undergone significant progress. A healthy heart can be modeled starting from the ion channels via the spread of a depolarization wave on a realistic geometry of the human heart up to the potentials on the body surface and the ECG. Research is advancing regarding modeling diseases of the heart. This article reviews progress in calculating and analyzing the corresponding electrocardiogram (ECG) from simulated depolarization and repolarization waves. First, we describe modeling of the P-wave, the QRS complex and the T-wave of a healthy heart. Then, both the modeling and the corresponding ECGs of several important diseases and arrhythmias are delineated: ischemia and infarction, ectopic beats and extrasystoles, ventricular tachycardia, bundle branch blocks, atrial tachycardia, flutter and fibrillation, genetic diseases and channelopathies, imbalance of electrolytes and drug-induced changes. Finally, we outline the potential impact of computer modeling on ECG interpretation. Computer modeling can contribute to a better comprehension of the relation between features in the ECG and the underlying cardiac condition and disease. It can pave the way for a quantitative analysis of the ECG and can support the cardiologist in identifying events or non-invasively localizing diseased areas. Finally, it can deliver very large databases of reliably labeled ECGs as training data for machine learning.
C. Nagel, S. Schuler, O. Dössel, and A. Loewe. A bi-atrial statistical shape model for large-scale in silico studies of human atria: model development and application to ECG simulations. In Medical Image Analysis, vol. 74, pp. 102210, 2021
Large-scale electrophysiological simulations to obtain electrocardiograms (ECG) carry the potential to produce extensive datasets for training of machine learning classifiers to, e.g., discriminate between different cardiac pathologies. The adoption of simulations for these purposes is limited due to a lack of ready-to-use models covering atrial anatomical variability. We built a bi-atrial statistical shape model (SSM) of the endocardial wall based on 47 segmented human CT and MRI datasets using Gaussian process morphable models. Generalization, specificity, and compactness metrics were evaluated. The SSM was applied to simulate atrial ECGs in 100 random volumetric instances. The first eigenmode of our SSM reflects a change of the total volume of both atria, the second the asymmetry between left vs. right atrial volume, the third a change in the prominence of the atrial appendages. The SSM is capable of generalizing well to unseen geometries and 95% of the total shape variance is covered by its first 24 eigenvectors. The P waves in the 12-lead ECG of 100 random instances showed a duration of 109.7 ± 12.2 ms in accordance with large cohort studies. The novel bi-atrial SSM itself as well as 100 exemplary instances with rule-based augmentation of atrial wall thickness, fiber orientation, inter-atrial bridges and tags for anatomical structures have been made publicly available. This novel, openly available bi-atrial SSM can in future be employed to generate large sets of realistic atrial geometries as a basis for in silico big data approaches.
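The core mechanics of such a PCA-based shape model (mean shape plus weighted eigenmodes, and the 95%-variance mode count) can be sketched with random stand-in data; Gaussian process morphable models generalize this idea, and all shapes below are synthetic placeholders:

```python
import numpy as np

rng = np.random.default_rng(3)

# toy "training set": 47 shapes, each flattened to a coordinate vector,
# with decaying per-coordinate variance to mimic correlated anatomy
n_shapes, n_coords = 47, 300
shapes = rng.standard_normal((n_shapes, n_coords)) * np.linspace(3, 0.1, n_coords)

mean = shapes.mean(axis=0)
# PCA via SVD of the centered data matrix
U, s, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
eigvals = s ** 2 / (n_shapes - 1)            # variance captured per eigenmode

# number of eigenmodes needed to cover 95% of the total shape variance
cum = np.cumsum(eigvals) / eigvals.sum()
n_95 = int(np.searchsorted(cum, 0.95) + 1)

# draw a random new instance: mean shape plus scaled eigenvectors
b = rng.standard_normal(n_95) * np.sqrt(eigvals[:n_95])
new_shape = mean + Vt[:n_95].T @ b
```

Sampling the mode coefficients from the per-mode variances is what lets an SSM generate arbitrarily many anatomically plausible instances, as done for the 100 simulated geometries above.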
Background In acute respiratory distress syndrome (ARDS), non-ventilated perfused regions coexist with non-perfused ventilated regions within lungs. The number of unmatched regions might reflect ARDS severity and affect the risk of ventilation-induced lung injury. Despite pathophysiological relevance, unmatched ventilation and perfusion are not routinely assessed at the bedside. The aims of this study were to quantify unmatched ventilation and perfusion at the bedside by electrical impedance tomography (EIT) investigating their association with mortality in patients with ARDS and to explore the effects of positive end-expiratory pressure (PEEP) on unmatched ventilation and perfusion in subgroups of patients with different ARDS severity based on PaO2/FiO2 and compliance. Methods Prospective observational study in 50 patients with mild (36%), moderate (46%), and severe (18%) ARDS under clinical ventilation settings. EIT was applied to measure the regional distribution of ventilation and perfusion using central venous bolus of saline 5% during end-inspiratory pause. We defined unmatched units as the percentage of only ventilated units plus the percentage of only perfused units. Results Percentage of unmatched units was significantly higher in non-survivors compared to survivors (32 [27-47]% vs. 21 [17-27]%, p < 0.001). Percentage of unmatched units was an independent predictor of mortality (OR 1.22, 95% CI 1.07-1.39, p = 0.004) with an area under the ROC curve of 0.88 (95% CI 0.79-0.97, p < 0.001). The percentage of ventilation to the ventral region of the lung was higher than the percentage of ventilation to the dorsal region (32 [27-38]% vs. 18 [13-21]%, p < 0.001), while the opposite was true for perfusion (28 [22-38]% vs. 36 [32-44]%, p < 0.001). Higher percentage of only perfused units was correlated with lower dorsal ventilation (r = -0.486, p < 0.001) and with lower PaO2/FiO2 ratio (r = -0.293, p = 0.039).
Conclusions EIT allows bedside assessment of unmatched ventilation and perfusion in mechanically ventilated patients with ARDS. Measurement of unmatched units could identify patients at higher risk of death and could guide personalized treatment.
Cranio-maxillofacial surgery often alters the aesthetics of the face which can be a heavy burden for patients to decide whether or not to undergo surgery. Today, physicians can predict the post-operative face using surgery planning tools to support the patient’s decision-making. While these planning tools allow a simulation of the post-operative face, the facial texture must usually be captured by another 3D texture scan and subsequently mapped on the simulated face. This approach often results in face predictions that do not appear realistic or lively looking and are therefore ill-suited to guide the patient’s decision-making. Instead, we propose a method using a generative adversarial network to modify a facial image according to a 3D soft-tissue estimation of the post-operative face. To circumvent the lack of available data pairs between pre- and post-operative measurements we propose a semi-supervised training strategy using cycle losses that only requires paired open-source data of images and 3D surfaces of the face’s shape. After training on “in-the-wild” images we show that our model can realistically manipulate local regions of a face in a 2D image based on a modified 3D shape. We then test our model on four clinical examples where we predict the post-operative face according to a 3D soft-tissue prediction of surgery outcome, which was simulated by a surgery planning tool. As a result, we aim to demonstrate the potential of our approach to predict realistic post-operative images of faces without the need of paired clinical data, physical models, or 3D texture scans.
J. Sánchez, B. Trenor, J. Saiz, O. Dössel, and A. Loewe. Fibrotic Remodeling during Persistent Atrial Fibrillation: In Silico Investigation of the Role of Calcium for Human Atrial Myofibroblast Electrophysiology. In Cells, vol. 10(11) , pp. 2852, 2021
During atrial fibrillation, cardiac tissue undergoes different remodeling processes at different scales from the molecular level to the tissue level. One central player that contributes to both electrical and structural remodeling is the myofibroblast. Based on recent experimental evidence on myofibroblasts’ ability to contract, we extended a biophysical myofibroblast model with Ca2+ handling components and studied the effect on cellular and tissue electrophysiology. Using genetic algorithms, we fitted the myofibroblast model parameters to the existing in vitro data. In silico experiments showed that Ca2+ currents can explain the experimentally observed variability regarding the myofibroblast resting membrane potential. The presence of an L-type Ca2+ current can trigger automaticity in the myofibroblast with a cycle length of 799.9 ms. Myocyte action potentials were prolonged when coupled to myofibroblasts with Ca2+ handling machinery. Different spatial myofibroblast distribution patterns increased the vulnerable window to induce arrhythmia from 12 ms in non-fibrotic tissue to 22 ± 2.5 ms and altered the reentry dynamics. Our findings suggest that Ca2+ handling can considerably affect myofibroblast electrophysiology and alter the electrical propagation in atrial tissue composed of myocytes coupled with myofibroblasts. These findings can inform experimental validation experiments to further elucidate the role of myofibroblast Ca2+ handling in atrial arrhythmogenesis.
This contribution is part of a project concerning the creation of an artificial dataset comprising 3D head scans of craniosynostosis patients for a deep-learning-based classification. To conform to real data, both head and neck are required in the 3D scans. However, during patient recording, the neck is often covered by medical staff. Simply pasting an arbitrary neck leaves large gaps in the 3D mesh. We therefore use a publicly available statistical shape model (SSM) for neck reconstruction. However, most SSMs of the head are constructed using healthy subjects, so the full head reconstruction loses the craniosynostosis-specific head shape. We propose a method to recover the neck while keeping the pathological head shape intact. We propose a Laplace-Beltrami-based refinement step to deform the posterior mean shape of the full head model towards the pathological head. The artificial neck is created using the publicly available Liverpool-York-Model. We apply our method to construct artificial necks for head scans of 50 scaphocephaly patients. Our method reduces mean vertex correspondence error by approximately 1.3 mm compared to the ordinary posterior mean shape, preserves the pathological head shape, and creates a continuous transition between neck and head. The presented method showed good results for reconstructing a plausible neck for craniosynostosis patients. Easily generalized, it might also be applicable to other pathological shapes.
Imbalances of ionic concentrations are a frequently observed challenge and play an important role in clinical practice. The clinically established method for diagnosing electrolyte concentration imbalances is the blood test. A rapid and non-invasive point-of-care method is still needed. The electrocardiogram (ECG) could meet this need and become an established diagnostic tool allowing home monitoring of electrolyte concentrations, also via wearable devices. In this review, we present the current state of potassium and calcium concentration monitoring using the ECG and summarize results from previous work. Selected clinical studies are presented, supporting or questioning the use of the ECG for monitoring electrolyte concentration imbalances. Differences in the findings from automatic monitoring studies are discussed, and current studies utilizing machine learning are presented, demonstrating the potential of the deep learning approach. Furthermore, we demonstrate the potential of computational modeling approaches to gain insight into the mechanisms of relevant clinical findings and as a tool to obtain synthetic data for methodical improvements of monitoring approaches.
A. Loewe. Ein digitales Herz. In Spektrum der Wissenschaft, vol. 20(10), pp. 44-48, 2020
The solution of the inverse problem of electrocardiology allows the reconstruction of the spatial distribution of the electrical activity of the heart from the body surface electrocardiogram (electrocardiographic imaging, ECGI). ECGI using the equivalent dipole layer (EDL) model has been shown to be accurate for cardiac activation times. However, validation of this method for determining repolarization times is lacking. In the present study, we determined the accuracy of the EDL model in reconstructing cardiac repolarization times and assessed the robustness of the method under less ideal conditions (addition of noise and errors in tissue conductivity). A monodomain model was used to determine the transmembrane potentials in three different excitation-repolarization patterns (sinus beat and ventricular ectopic beats) as the gold standard. These were used to calculate the body surface ECGs using a finite element model. The resulting body surface ECGs were used as input for the EDL-based inverse reconstruction of repolarization times. The reconstructed repolarization times correlated well (COR > 0.85) with the gold standard, with almost no decrease in correlation after adding errors in tissue conductivity of the model or noise to the body surface ECG. Therefore, ECGI using the EDL model allows adequate reconstruction of cardiac repolarization times. Graphical abstract: Validation of electrocardiographic imaging for repolarization using forward-calculated body surface ECGs from simulated activation-repolarization sequences.
BACKGROUND: Using cardiovascular magnetic resonance imaging (CMR), it is possible to detect diffuse fibrosis of the left ventricle (LV) in patients with atrial fibrillation (AF), which may be independently associated with recurrence of AF after ablation. Combining CMR with clinical, electrophysiological and biomarker assessment, we investigated LV myocardial fibrosis in patients undergoing AF ablation. METHODS: LV fibrosis was assessed by T1 mapping in 31 patients undergoing percutaneous ablation for AF. Galectin-3, coronary sinus type I collagen C terminal telopeptide (ICTP), and type III procollagen N terminal peptide were measured with ELISA. Comparison was made between groups above and below the median for LV extracellular volume fraction (ECV), followed by regression analysis. RESULTS: On linear regression analysis, LV ECV had significant associations with invasive left atrial pressure (beta = 0.49, P = 0.008) and coronary sinus ICTP (beta = 0.75, P < 0.001), which remained significant on multivariable regression. CONCLUSION: LV fibrosis in patients with AF is associated with left atrial pressure and invasively measured levels of the ICTP turnover biomarker.
S. Pollnow, G. Schwaderlapp, A. Loewe, and O. Dössel. Monitoring the dynamics of acute radiofrequency ablation lesion formation in thin-walled atria – a simultaneous optical and electrical mapping study. In Biomedical Engineering / Biomedizinische Technik, vol. 65(3), pp. 327-341, 2020
Background: Radiofrequency ablation (RFA) is a common approach to treat cardiac arrhythmias. During this intervention, numerous strategies are applied to indirectly estimate lesion formation. However, the assessment of the spatial extent of these acute injuries needs to be improved in order to create well-defined and durable ablation lesions. Methods: We investigated the electrophysiological characteristics of rat atrial myocardium during an ex vivo RFA procedure with fluorescence-optical and electrical mapping. By analyzing the optical data, the temporal growth of punctiform ablation lesions was reconstructed after stepwise RFA sequences. Unipolar electrograms (EGMs) were simultaneously recorded by a multielectrode array (MEA) before and after each RFA sequence. Based on the optical results, we searched for electrical features to delineate these lesions from healthy myocardium. Results: Several unipolar EGM parameters decreased monotonically when the distance between the electrode and the lesion boundary was smaller than 2 mm. The negative component of the unipolar EGM [negative peak amplitude (Aneg)] vanished for distances of less than 0.4 mm to the lesion boundary. The median peak-to-peak amplitude (Vpp) was decreased by 75% compared to baseline. Conclusion: Aneg and Vpp are excellent parameters to discriminate the growing lesion area from healthy myocardium. The experimental setup opens new opportunities to investigate EGM characteristics of more complex ablation lesions.
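The two EGM amplitude parameters highlighted in this abstract can be sketched in a few lines. This is an illustrative reconstruction, not the authors' analysis code; the exact windowing and baseline handling of the study are not specified here.

```python
import numpy as np

def egm_amplitude_features(egm):
    """Amplitude features of one unipolar electrogram trace.

    Aneg: magnitude of the negative deflection (0 if none present).
    Vpp:  peak-to-peak amplitude.
    """
    egm = np.asarray(egm, dtype=float)
    aneg = max(0.0, -float(egm.min()))   # negative peak amplitude
    vpp = float(egm.max() - egm.min())   # peak-to-peak amplitude
    return aneg, vpp
```

A vanishing Aneg close to the lesion boundary and a Vpp reduced by about 75% versus baseline are the discriminating markers the study reports.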
The electrophysiological mechanism of the sinus node automaticity was previously considered exclusively regulated by the so-called "funny current". However, parallel investigations increasingly emphasized the importance of the Ca-homeostasis and Na/Ca exchanger (NCX). Recently, increasing experimental evidence, as well as insight through mechanistic modeling demonstrates the crucial role of the exchanger in sinus node pacemaking. NCX had a key role in the exciting story of discovery of sinus node pacemaking mechanisms, which recently settled with a consensus on the coupled-clock mechanism after decades of debate. This review focuses on the role of the Na/Ca exchanger from the early results and concepts to recent advances and attempts to give a balanced summary of the characteristics of the local, spontaneous, and rhythmic Ca releases, the molecular control of the NCX and its role in the fight-or-flight response. Transgenic animal models and pharmacological manipulation of intracellular Ca concentration and/or NCX demonstrate the pivotal function of the exchanger in sinus node automaticity. We also highlight where specific hypotheses regarding NCX function have been derived from computational modeling and require experimental validation. Nonselectivity of NCX inhibitors and the complex interplay of processes involved in Ca handling render the design and interpretation of these experiments challenging.
L. Azzolin, L. Dedè, A. Gerbi, and A. Quarteroni. Effect of fibre orientation and bulk modulus on the electromechanical modelling of human ventricles. In Mathematics in Engineering, vol. 2(4), pp. 614-638, 2020
This work concerns the mathematical and numerical modeling of the heart. The aim is to enhance the understanding of the cardiac function in both physiological and pathological conditions. Along this road, a challenge arises from the multi-scale and multi-physics nature of the mathematical problem at hand. In this paper, we propose an electromechanical model that, in bi-ventricle geometries, combines the monodomain equation, the Bueno-Orovio minimal ionic model, and the Holzapfel-Ogden strain energy function for the passive myocardial tissue modelling, together with the active strain approach combined with a model for the transmurally heterogeneous thickening of the myocardium. Since the distribution of the electric signal depends on the fibre orientation of the ventricles, we use a Laplace-Dirichlet Rule-Based algorithm to determine the myocardial fibre and sheet configuration in the whole bi-ventricle. In this paper, we study the influence of different fibre directions and of the incompressibility constraint, imposed via penalization of the material compressibility (bulk modulus), on the pressure-volume relation while simulating a full heart beat. The coupled electromechanical problem is addressed by means of a fully segregated scheme. The numerical discretization is based on the Finite Element Method for the spatial discretization and on Backward Differentiation Formulas for the time discretization. The non-linear algebraic system arising from the application of the implicit scheme is solved with the Newton method. Numerical simulations are carried out in a patient-specific biventricle geometry to highlight the most relevant results of both electrophysiology and mechanics and to compare them with physiological data and measurements. We show how various fibre configurations and bulk moduli modify relevant clinical quantities such as stroke volume, ejection fraction and ventricle contractility.
Identifying atrial sites that perpetuate atrial fibrillation (AF), and whose ablation terminates AF, is challenging. We hypothesized that specific electrogram (EGM) characteristics identify AF-termination sites (AFTS). Twenty-one patients in whom low-voltage-guided ablation after pulmonary vein isolation terminated clinical persistent AF were included. Patients were included if short RF-delivery for <8 sec at a given atrial site was associated with acute termination of clinical persistent AF. EGM-characteristics at 21 AFTS, 105 targeted sites without termination and 105 non-targeted control sites were analyzed. Alteration of EGM-characteristics by local fibrosis was evaluated in a three-dimensional high-resolution (100 µm) computational AF model. AFTS demonstrated lower EGM-voltage, higher EGM-cycle-length-coverage, shorter AF-cycle-length and higher pattern consistency than control sites (0.49 ± 0.39 mV vs. 0.83 ± 0.76 mV, p < 0.0001; 79 ± 16% vs. 59 ± 22%, p = 0.0022; 173 ± 49 ms vs. 198 ± 34 ms, p = 0.047; 80% vs. 30%, p < 0.01). Among targeted sites, AFTS had higher EGM-cycle-length coverage, shorter local AF-cycle-length and higher pattern consistency than targeted sites without AF-termination (79 ± 16% vs. 63 ± 23%, p = 0.02; 173 ± 49 ms vs. 210 ± 44 ms, p = 0.002; 80% vs. 40%, p = 0.01). Low-voltage (0.52 ± 0.3 mV) fractionated EGMs (79 ± 24 ms) with delayed components in sinus rhythm ('atrial late potentials', ALP) were observed at 71% of AFTS. EGMs recorded from fibrotic areas in computational models demonstrated comparable EGM-characteristics both in simulated AF and sinus rhythm. AFTS may therefore be identified by locally consistent, fractionated low-voltage EGMs with high cycle-length-coverage and rapid activity in AF, with low-voltage, fractionated EGMs with delayed components ('atrial late potentials', ALP) persisting in sinus rhythm.
A. Naber, D. Berwanger, and W. Nahm. Geodesic Length Measurement in Medical Images: Effect of the Discretization by the Camera Chip and Quantitative Assessment of Error Reduction Methods. In Photonics, vol. 7(3), pp. 1-16, 2020
After interventions such as bypass surgeries, vascular function is checked qualitatively and remotely by observing the blood dynamics inside the vessel via fluorescence angiography. This state-of-the-art method should be improved by introducing a quantitative blood flow measurement. Previous approaches show that the measured blood flow cannot easily be calibrated against a gold standard reference. In order to systematically address the possible sources of error, we investigated the error in geodesic length measurement caused by spatial discretization on the camera chip. We used an in-silico vessel segmentation model based on mathematical functions as a ground truth for the length of vessel-like anatomical structures in continuous space. Discretization errors for the chosen models were determined at a typical magnitude of 6%. Since this length error would propagate to an unacceptable error in blood flow measurement, countermeasures need to be developed. Therefore, different methods for centerline extraction and spatial interpolation were tested and compared regarding their performance in reducing the discretization error in length measurement by re-continualization. In conclusion, re-continualization of the centerline reduces the discretization error to an acceptable range. The discretization error depends on the complexity of the centerline, and this dependency is also reduced. Centerline extraction by erosion in combination with piecewise Bézier curve fitting performed best, reducing the error to 2.7% with an acceptable computational time.
From migrating ions through cell tissue to the recording of an ECG: software simulation of the human heart enables tailored therapies. At the Karlsruhe Institute of Technology (KIT), researchers have developed a computer model of the human heart. The scientists have meanwhile progressed so far that they can even tailor the model to the individual characteristics of a single patient. This simulation is of considerable benefit to medical practice.
Therapeutic hypothermia (TH) is an approved neuroprotective treatment to reduce neurological morbidity and mortality after hypoxic-ischemic damage related to cardiac arrest and neonatal asphyxia. Also in the treatment of acute ischemic stroke (AIS), which in Western countries still shows a very high mortality rate of about 25%, selective mild TH by means of targeted temperature management (TTM) could potentially decrease the final infarct volume. In this respect, a novel intracarotid blood cooling catheter system has recently been developed, which allows for combined carotid blood cooling and mechanical thrombectomy (MT) and aims at selective mild TH in the affected ischemic brain (core and penumbra). Unfortunately, direct measurement and control of the cooled cerebral temperature so far requires invasive or elaborate MRI-assisted measurements. Computational modeling, on the other hand, provides unique opportunities to predict the resulting cerebral temperatures. In this work, a simplified 3D brain model was generated and coupled with a 1D hemodynamics model to predict spatio-temporal cerebral temperature profiles using finite element modeling. Cerebral blood and tissue temperatures as well as the systemic temperature were analyzed for physiological conditions as well as for a middle cerebral artery (MCA) M1 occlusion. Furthermore, vessel recanalization and its effect on cerebral temperature were analyzed. The results show a significant influence of collateral flow on the cooling effect and are in accordance with experimental data in animals. Our model predicted a possible neuroprotective temperature decrease of 2.5 °C in the territory of MCA perfusion after 60 min of blood cooling, which underlines the potential of the new device and the use of TTM in case of AIS.
Background and Purpose: The exact mechanism of spontaneous pacemaking is not fully understood. Recent results suggest tight cooperation between intracellular Ca handling and sarcolemmal ion channels. An important player in this crosstalk is the Na/Ca exchanger (NCX); however, direct pharmacological evidence was unavailable so far because of the lack of a selective inhibitor. We investigated the role of the NCX current in pacemaking and analyzed the functional consequences of the If-NCX coupling by applying the novel selective NCX inhibitor ORM-10962 to the sinus node (SAN). Experimental Approach: Currents were measured by patch-clamp, and Ca transients were monitored by a fluorescent optical method in rabbit SAN cells. Action potentials (APs) were recorded from rabbit SAN tissue preparations. Mechanistic computational data were obtained using the Yaniv et al. SAN model. Key Results: ORM-10962 (ORM) marginally reduced the SAN pacemaking cycle length with a marked increase in the diastolic Ca level as well as the transient amplitude. The bradycardic effect of NCX inhibition was augmented when the funny current (If) was previously inhibited, and the effect of If inhibition was augmented when Ca handling was suppressed. Conclusion and Implications: We confirmed the contribution of the NCX current to cardiac pacemaking using a novel NCX inhibitor. Our experimental and modeling data support a close cooperation between If and NCX, with an important functional consequence: together, these currents establish a strong depolarization capacity that provides an important safety factor for stable pacemaking. Thus, after individual inhibition of If or NCX, excessive bradycardia or instability cannot be expected, because each of these currents may compensate for the reduction of the other, providing safe and rhythmic SAN pacemaking.
O. Dössel. Elektrophysiologie der Atrien - Computermodelle liefern Antworten. 2020
OBJECTIVE: Unipolar intracardiac electrograms (uEGMs) measured inside the atria during electro-anatomic mapping contain diagnostic information about cardiac excitation and tissue properties. The ventricular far field (VFF) caused by ventricular depolarization compromises these signals. Current signal processing techniques require several seconds of local uEGMs to remove the VFF component and thus prolong the clinical mapping procedure. We developed an approach to remove the VFF component using data obtained during initial anatomy acquisition. METHODS: We developed two models which can approximate the spatio-temporal distribution of the VFF component based on acquired EGM data: a polynomial fit and a dipole fit. Both were benchmarked on simulated cardiac excitation in two models of the human heart and applied to clinical data. RESULTS: VFF data acquired in one atrium were used to estimate the model parameters. Under realistic noise conditions, a dipole model approximated the VFF with a median deviation of 0.029 mV, yielding a median VFF attenuation factor of 142. In a different setup, only VFF data acquired at distances of more than 5 mm to the atrial endocardium were used to estimate the model parameters. The VFF component was then extrapolated for a layer of 5 mm thickness lining the endocardial tissue. A median deviation of 0.082 mV (median VFF attenuation factor of 49) was achieved under realistic noise conditions. CONCLUSION: It is feasible to model the VFF component in a personalized way and effectively remove it from uEGMs. SIGNIFICANCE: Application of our novel, simple and computationally inexpensive methods allows immediate diagnostic assessment of uEGM data without prolonging data acquisition.
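The "polynomial fit" idea for approximating the spatial VFF distribution can be sketched as an ordinary least-squares fit over a low-order spatial basis. The second-order basis and the synthetic field below are illustrative assumptions, not the published implementation (which also handles the temporal dimension).

```python
import numpy as np

def poly_basis(xyz):
    """Second-order 3D polynomial basis: 1, x, y, z, x^2, y^2, z^2, xy, xz, yz."""
    x, y, z = np.asarray(xyz, dtype=float).T
    return np.column_stack([np.ones_like(x), x, y, z,
                            x * x, y * y, z * z, x * y, x * z, y * z])

def fit_far_field(xyz, v):
    """Least-squares coefficients of the spatial far-field model."""
    coeff, *_ = np.linalg.lstsq(poly_basis(xyz), v, rcond=None)
    return coeff

def remove_far_field(xyz, v, coeff):
    """Subtract the fitted far-field estimate from measured potentials."""
    return v - poly_basis(xyz) @ coeff
```

Fitting on catheter positions collected during anatomy acquisition and evaluating the model near the endocardium mirrors the extrapolation scenario described in the abstract.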
End-stage chronic kidney disease (CKD) patients face a 30% higher risk of lethal cardiac events (LCE) compared to non-CKD patients. At the same time, these patients undergoing dialysis experience shifts in their potassium concentrations. The increased risk of LCE paired with the concentration changes suggests a connection between LCE and concentration imbalances. To prove this link, a continuous monitoring device for the ionic concentrations, e.g. the ECG, is needed. In this work, we want to answer whether an optimised signal processing chain can improve the result and quantify the influence of an imbalanced training dataset on the final estimation result. The study was performed on a dataset consisting of 12-lead ECGs recorded during dialysis sessions of 32 patients. We selected three features to find a mapping from ECG features to [K+]o: T-wave ascending slope, T-wave descending slope and T-wave amplitude. A polynomial model of 3rd order was used to reconstruct the concentrations from these features. We solved a regularised weighted least squares problem with a weighting matrix dependent on the frequency of each concentration in the dataset (frequent concentrations weighted less). By doing so, we tried to generate a model suitable for the whole range of concentrations. With weighting, errors increase for the whole dataset. For the data partition with [K+]o < 5 mmol/l, errors increase; for [K+]o ≥ 5 mmol/l, errors decrease. However, apart from the exact reconstruction results, we can conclude that a model valid for all patients, and not only the majority, needs to be learned from a more homogeneous dataset. This can be achieved by leaving out data points or by weighting the errors during model fitting. With increasing weighting, we increase the performance on the less frequent [K+]o values, which was desired in our case.
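The frequency-dependent weighting described in this abstract can be sketched as a ridge-regularised weighted least-squares solve. The binning scheme, regularisation strength and feature expansion below are assumed details for illustration, not the authors' code.

```python
import numpy as np

def weighted_poly_fit(X, y, lam=1e-3, n_bins=8):
    """Ridge-regularised weighted least squares with inverse-frequency weights.

    X : (n_samples, n_terms) matrix of (polynomially expanded) ECG features.
    y : measured [K+]o values.
    Samples from frequently occurring concentration bins receive lower
    weight, so rare concentrations influence the fit more strongly.
    """
    y = np.asarray(y, dtype=float)
    counts, edges = np.histogram(y, bins=n_bins)
    idx = np.clip(np.digitize(y, edges[1:-1]), 0, n_bins - 1)
    w = 1.0 / counts[idx]                 # inverse-frequency weights
    w /= w.sum()                          # normalize to sum 1
    # Normal equations of the weighted problem plus ridge term:
    A = X.T @ (w[:, None] * X) + lam * np.eye(X.shape[1])
    b = X.T @ (w * y)
    return np.linalg.solve(A, b)
```

In this formulation, increasing `lam` trades fidelity for stability, while sharper inverse-frequency weights shift accuracy towards the rare high-[K+]o range, mirroring the behaviour reported above.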
C. Nagel, N. Pilia, A. Loewe, and O. Dössel. Quantification of Interpatient 12-lead ECG Variabilities within a Healthy Cohort. In Current Directions in Biomedical Engineering, vol. 6(3), pp. 493-496, 2020
The morphology of the electrocardiogram (ECG) varies among different healthy subjects due to anatomical and structural reasons, such as the shape of the heart geometry or the position and size of surrounding organs in the torso. Knowledge about these ECG morphology changes could be used to parameterize electrophysiological simulations of the human heart. In this work, we detected the boundaries of ECG waveforms, i.e. the P-wave, the QRS-complex and the T-wave, in 12-lead ECGs from 918 healthy subjects in the PhysioNet Computing in Cardiology Challenge 2020 Database with the IBT openECG toolbox. Subsequently, we obtained the onset, the peak and the offset of each P-wave, QRS-complex and T-wave in the signal. In this way, the durations of the P-wave, the QRS-complex and the T-wave, the PQ-, RR- and QT-intervals as well as the amplitudes of the P-wave, the Q-, R- and S-peaks and the T-wave in each lead were extracted from the 918 healthy ECGs. Their statistical distributions and correlations between each other were assessed. The highest variabilities among the 918 healthy subjects were found for the RR interval and the amplitudes of the QRS-complex. The highest correlation was observed for feature pairs that represent the same feature in different leads. Especially the R-peak amplitudes showed a strong correlation across different leads. The calculated feature distributions can be used to optimize the parameters of populations of cardiac electrophysiological models. In this way, realistic in-silico generated surface ECGs can be simulated at large scale and could be used as input data for machine learning algorithms for the classification of cardiovascular diseases.
Experts agree that learning systems, or machine learning, will also become highly important in medicine and medical technology in the future, with benefits but also risks for patients, companies and medical professionals. This raises a wide range of challenges in dealing with machine-learning systems, among others for practical treatment situations, for quality control, for safety in emergency situations, and for the assessment of diagnoses and therapy paths proposed by the computer. This acatech POSITION paper is the result of a working group of scientists from medicine and engineering. The project group gives an overview of current applications of machine learning in medical technology and highlights important future fields of application. It further focuses on ethical, legal and regulatory aspects as well as critical questions of data protection and possible changes in the physician-patient relationship. Besides proposals for building large medical databases, the position paper also gives recommendations for action to physicians, research funding institutions and policymakers.
Book Chapters (2)
O. Dössel. Stand und Zukunft für Apps für die Gesundheit. In Denkanstöße aus der Akademie, Berlin-Brandenburgischen Akademie der Wissenschaften, pp. 25-31, 2021
O. Dössel. KI und Verantwortung bei technischen Systemen. In Verantwortungsvoller Einsatz von KI? Mit menschlicher Kompetenz, Berlin-Brandenburgische Akademie der Wissenschaften, pp. 23-27, 2021