Ventricular tachycardia (VT) is one cause of sudden cardiac death, which affects 4.25 million persons per year worldwide. A curative treatment is catheter ablation, which inactivates the abnormally triggering regions. To facilitate and expedite this localization during the ablation procedure, we present two novel localization techniques based on convolutional neural networks (CNNs). In contrast to existing methods, e.g., ECG imaging, our approaches were designed to be independent of patient-specific geometries and directly applicable to surface ECG signals, while also delivering a binary transmural position. One method outputs ranked alternative solutions. Results can be visualized either on a generic or a patient geometry. The CNNs were trained on a data set containing only simulated data and evaluated on both simulated and clinical test data. On simulated data, the median test error was below 3 mm. The median localization error on the clinical data was as low as 32 mm. The transmural position was correctly detected in up to 82% of all clinical cases. Using the ranked alternative solutions, the top-3 median error dropped to 20 mm on clinical data. These results demonstrate a proof of principle that CNNs can localize the activation source without intrinsically requiring patient-specific geometrical information. Furthermore, delivering multiple solutions can help the physician find the real activation source among several possible locations. With further optimization, these methods have a high potential to speed up clinical interventions. Consequently, they could decrease procedural risk and improve outcomes for VT patients.
L. Guo and W. Nahm. A cGAN-based network for depth estimation from bronchoscopic images. In International Journal of Computer Assisted Radiology and Surgery, 2023
Purpose: Depth estimation is the basis of 3D reconstruction of airway structure from 2D bronchoscopic scenes, which can be further used to develop a vision-based bronchoscopic navigation system. This work aims to improve the performance of depth estimation directly from bronchoscopic images by training a depth estimation network on both synthetic and real datasets. Methods: We propose a cGAN-based network, Bronchoscopic-Depth-GAN (BronchoDep-GAN), to estimate depth from bronchoscopic images by translating bronchoscopic images into depth maps. The network is trained in a supervised way, learning from synthetic textured bronchoscopic image-depth pairs and virtual bronchoscopic image-depth pairs, and simultaneously in an unsupervised way, learning from unpaired real bronchoscopic images and depth maps to adapt the model to real bronchoscopic scenes. Results: Our method is tested on both synthetic and real data. However, the tests on real data are only qualitative, as no ground truth is available. The results show that our network obtains better accuracy in all cases in estimating depth from bronchoscopic images compared to the well-known cGAN pix2pix. Conclusions: Including virtual and real bronchoscopic images in the training phase of depth estimation networks can improve depth estimation's performance on both synthetic and real scenes. Further validation of this work is planned on 3D clinical phantoms. Based on the depth estimation results obtained in this work, the accuracy of locating bronchoscopes with corresponding pre-operative CTs will also be evaluated in comparison with the current clinical status.
Purpose: To evaluate the impact of lens opacity on the reliability of optical coherence tomography angiography (OCTA) metrics and to find a vessel caliber threshold that is reproducible in cataract patients. Methods: A prospective cohort study of 31 patients, examining one eye per patient, by applying 3 × 3 mm macular optical coherence tomography angiography before (18.94 ± 12.22 days) and 3 months (111 ± 23.45 days) after uncomplicated cataract surgery. We extracted the superficial (SVC) and deep vascular plexuses (DVC) for further analysis and evaluated changes in image contrast, vessel metrics (perfusion density, flow deficit and vessel-diameter index) and foveal avascular zone (FAZ) area. Results: After surgery, the blood flow signal in smaller capillaries was enhanced as image contrast improved. Signal strength correlated to average lens density defined by objective measurement in Scheimpflug images (Pearson's r = -.40, p = .027) and to flow deficit (r = -.70, p < .001). Perfusion density correlated to the signal strength index (r = .70, p < .001). Vessel metrics and FAZ area, except for FAZ area in DVC, were significantly different after cataract surgery, but the mean change was approximately 3-6%. A stepwise approach in extracting vessels according to their pixel caliber showed that a threshold of >6 pixels caliber (∼20-30 µm) was comparable before and after lens removal. Conclusion: In patients with cataract, OCTA vessel metrics should be interpreted with caution. In addition to signal strength, contrast and pixel properties can serve as supplementary quality metrics to improve the interpretation of OCTA metrics. Vessels of ∼20-30 µm in caliber seem to be reproducible.
INTRODUCTION: Improved sinus rhythm (SR) maintenance rates have been achieved in patients with persistent atrial fibrillation (AF) undergoing pulmonary vein isolation plus additional ablation of low voltage substrate (LVS) during SR. However, voltage mapping during SR may be hindered in persistent and long-persistent AF patients by immediate AF recurrence after electrical cardioversion. We assessed correlations between LVS extent and location during SR and AF, aiming to: (1) identify voltage dissimilarities between mapping in SR and AF; (2) identify regional voltage thresholds that improve cross-rhythm substrate detection; and (3) compare LVS between SR and native versus induced AF. METHODS: Forty-one ablation-naive persistent AF patients underwent high-definition (1 mm electrodes; >1200 left atrial (LA) mapping sites per rhythm) voltage mapping in SR and AF. Global and regional voltage thresholds in AF were identified which best match LVS < 0.5 mV and < 1.0 mV in SR. Additionally, the correlation of SR-LVS with induced versus native AF-LVS was assessed. RESULTS: Substantial voltage differences (median: 0.52, interquartile range: 0.33-0.69, maximum: 1.19 mV) with a predominance of the posterior/inferior LA wall exist between the rhythms. An AF threshold of 0.34 mV for the entire left atrium provides an accuracy, sensitivity and specificity of 69%, 67%, and 69%, respectively, to identify SR-LVS < 0.5 mV. Lower thresholds for the posterior wall (0.27 mV) and inferior wall (0.3 mV) result in higher spatial concordance to SR-LVS (4% and 7% increase). Concordance with SR-LVS was higher for induced AF than for native AF (area under the curve [AUC]: 0.80 vs. 0.73). AF-LVS < 0.5 mV corresponds to SR-LVS < 0.97 mV (AUC: 0.73).
CONCLUSION: Although the proposed region-specific voltage thresholds during AF improve the consistency of LVS identification as determined during SR, the concordance in LVS between SR and AF remains moderate, with larger LVS detection during AF. Voltage-based substrate ablation should preferentially be performed during SR to limit the amount of ablated atrial myocardium.
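The cross-rhythm threshold search described above can be pictured with a small sketch: given co-located AF voltages and binary SR-LVS labels, a receiver operating characteristic over candidate voltage cut-offs yields the threshold maximizing Youden's J, together with its sensitivity and specificity. This is a minimal illustration on synthetic stand-in data, not the study's mapping pipeline; the simulated voltage distribution and all variable names are assumptions.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
# Hypothetical paired map points: bipolar voltage measured in AF, and a binary
# label marking whether the co-located SR voltage is < 0.5 mV (SR-LVS).
v_af = rng.gamma(shape=2.0, scale=0.4, size=2000)
p_lvs = 1.0 / (1.0 + np.exp((v_af - 0.34) / 0.15))  # low AF voltage -> likely SR-LVS
is_sr_lvs = rng.random(2000) < p_lvs

# ROC over candidate thresholds: "positive" = SR-LVS, predicted when v_af < thr.
# Scores are negated so that a higher score corresponds to a lower voltage.
fpr, tpr, thr = roc_curve(is_sr_lvs, -v_af)
youden_j = tpr - fpr
best = np.argmax(youden_j)
best_threshold = -thr[best]          # undo the negation to get a voltage in mV
sensitivity, specificity = tpr[best], 1.0 - fpr[best]
print(best_threshold, sensitivity, specificity)
```

Regional thresholds (e.g. a separate cut-off for the posterior wall) would simply repeat this search on the subset of points belonging to each region.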
Machine learning (ML) methods for the analysis of electrocardiography (ECG) data are gaining importance, substantially supported by the release of large public datasets. However, current datasets miss important derived descriptors such as ECG features, which have been devised over the past hundred years, still form the basis of most automatic ECG analysis algorithms, and are critical for cardiologists' decision processes. ECG features are available from sophisticated commercial software but are not accessible to the general public. To alleviate this issue, we add ECG features from two leading commercial algorithms and an open-source implementation, supplemented by a set of automatic diagnostic statements from a commercial ECG analysis software in preprocessed format. This allows the comparison of ML models trained on clinically versus automatically generated label sets. We provide an extensive technical validation of features and diagnostic statements for ML applications. We believe this release crucially enhances the usability of the PTB-XL dataset as a reference dataset for ML methods in the context of ECG data.
L. Guo and W. Nahm. Texture synthesis for generating realistic-looking bronchoscopic videos. In International Journal of Computer Assisted Radiology and Surgery, 2023
Purpose: Synthetic realistic-looking bronchoscopic videos are needed to develop and evaluate depth estimation methods as part of investigating a vision-based bronchoscopic navigation system. To generate these synthetic videos when access to real bronchoscopic images/image sequences is limited, we need to create various realistic-looking, large image textures of the airway inner surface using a small number of real bronchoscopic image texture patches. Methods: A generative adversarial network-based method is applied to create realistic-looking textures of the airway inner surface by learning from a limited number of small texture patches from real bronchoscopic images. By applying a purely convolutional architecture without any fully connected layers, this method allows the production of textures of arbitrary size. Results: Authentic image textures of the airway inner surface are created. An example of the synthesized textures and two frames of the thereby generated bronchoscopic video are shown. The necessity and sufficiency of the generated textures as image features for further depth estimation methods are demonstrated. Conclusions: The method can generate textures of the airway inner surface that meet the requirements for the texture itself and for the thereby generated bronchoscopic videos, including "realistic-looking," "long-term temporal consistency," "sufficient image features for depth estimation," and "large size and variety of synthesized textures." It also has the advantage of relying on easily accessible source data. A further validation of this approach is planned by utilizing the realistic-looking bronchoscopic videos with textures generated by this method as training and test data for depth estimation networks.
Clonogenic assays are routinely used to evaluate the response of cancer cells to external radiation fields, assess their radioresistance and radiosensitivity, and estimate the performance of radiotherapy. However, classic clonogenic tests focus on the number of colonies forming on a substrate upon exposure to ionizing radiation, and disregard other important characteristics of cells such as their ability to generate structures with a certain shape. The radioresistance and radiosensitivity of cancer cells may depend less on the number of cells in a colony and more on the way cells interact to form complex networks. In this study, we have examined whether the topology of 2D cancer-cell graphs is influenced by ionizing radiation. We subjected different cancer cell lines, i.e. H4 epithelial neuroglioma cells, H460 lung cancer cells, PC3 bone metastasis of grade IV prostate cancer and T24 urinary bladder cancer cells, cultured on planar surfaces, to increasing photon radiation levels up to 6 Gy. Fluorescence images of samples were then processed to determine the topological parameters of the cell-graphs developing over time. We found that the larger the dose, the less uniform the distribution of cells on the substrate—evidenced by high values of the small-world coefficient (sw), high values of the clustering coefficient (cc), and small values of the characteristic path length (cpl). For all considered cell lines, sw > 1 for doses greater than or equal to 4 Gy, while the sensitivity to the dose varied for different cell lines: T24 cells seem most distinctly affected by the radiation, followed by the H4, H460 and PC3 cells. Results of the work reinforce the view that the characteristics of cancer cells and their response to radiotherapy can be determined by examining their collective behavior—encoded in a few topological parameters—as an alternative to classical clonogenic assays.
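The three topological descriptors named above (cc, cpl, sw) can be sketched with networkx on a toy proximity graph. A random geometric graph stands in for a segmented cell-graph, and an Erdős–Rényi graph of matched size and density serves as the random reference in the small-world ratio; this is an illustrative sketch, not the paper's image-processing pipeline.

```python
import networkx as nx

# Stand-in "cell graph": nodes at random positions, edges between nearby cells.
# Real cell-graphs would be built from segmented fluorescence images.
G = nx.random_geometric_graph(200, radius=0.15, seed=42)

cc = nx.average_clustering(G)                  # clustering coefficient
giant = G.subgraph(max(nx.connected_components(G), key=len))
cpl = nx.average_shortest_path_length(giant)   # characteristic path length

# Small-world coefficient sw = (cc / cc_rand) / (cpl / cpl_rand), using an
# Erdos-Renyi random graph with the same node and edge count as reference.
n, m = G.number_of_nodes(), G.number_of_edges()
R = nx.gnm_random_graph(n, m, seed=42)
cc_rand = nx.average_clustering(R)
giant_r = R.subgraph(max(nx.connected_components(R), key=len))
cpl_rand = nx.average_shortest_path_length(giant_r)
sw = (cc / cc_rand) / (cpl / cpl_rand)
print(cc, cpl, sw)
```

A proximity graph is highly clustered but has long paths, so its sw exceeds 1, which is exactly the signature the study reports for doses of 4 Gy and above.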
Purpose: Primary central nervous system lymphoma (PCNSL) is a rare, aggressive form of extranodal non-Hodgkin lymphoma. Predicting the overall survival (OS) in advance is of utmost importance as it has the potential to aid clinical decision-making. Though radiomics-based machine learning (ML) has demonstrated promising performance in PCNSL, it demands large amounts of manual feature extraction effort from magnetic resonance images beforehand. Deep learning (DL) overcomes this limitation. Methods: In this paper, we tailored the 3D ResNet to predict the OS of patients with PCNSL. To overcome the limitation of data sparsity, we introduced data augmentation and transfer learning, and we evaluated the results using r-times repeated stratified k-fold cross-validation. To explain the results of our model, gradient-weighted class activation mapping was applied. Results: We obtained the best performance (± standard error) on post-contrast T1-weighted (T1Gd) images: area under the curve = 0.81 (0.03), accuracy = 0.87 (0.07), precision = 0.88 (0.07), recall = 0.88 (0.07) and F1-score = 0.87 (0.07), compared with ML-based models on clinical data and radiomics data, respectively, further confirming the stability of our model. Also, we observed that PCNSL is a whole-brain disease and that in cases where the OS is less than 1 year, it is more difficult to distinguish the tumor boundary from the normal part of the brain, which is consistent with the clinical outcome. Conclusions: All these findings indicate that T1Gd can improve prognosis predictions of patients with PCNSL. To the best of our knowledge, this is the first study to use DL to explain model patterns in OS classification of patients with PCNSL. Future work will involve collecting more data of patients with PCNSL, or additional retrospective studies on different patient populations with rare diseases, to further promote the clinical role of our model.
T. Gerach, S. Schuler, A. Wachter, and A. Loewe. The Impact of Standard Ablation Strategies for Atrial Fibrillation on Cardiovascular Performance in a Four-Chamber Heart Model. In Cardiovascular Engineering and Technology, vol. 14(2), pp. 296-314, 2023
PURPOSE: Atrial fibrillation is one of the most frequent cardiac arrhythmias in the industrialized world and ablation therapy is the method of choice for many patients. However, ablation scars alter the electrophysiological activation and the mechanical behavior of the affected atria. Different ablation strategies with the aim to terminate atrial fibrillation and prevent its recurrence exist, but their impact on the performance of the heart is often neglected. METHODS: In this work, we present a simulation study analyzing five commonly used ablation scar patterns and their combinations in the left atrium regarding their impact on the pumping function of the heart using an electromechanical whole-heart model. We analyzed how the altered atrial activation and increased stiffness due to the ablation scars affect atrial as well as ventricular contraction and relaxation. RESULTS: We found that the systolic and diastolic function of the left atrium is impaired by ablation scars and that the reduction of atrial stroke volume of up to 11.43% depends linearly on the amount of inactivated tissue. Consequently, the end-diastolic volume of the left ventricle, and thus its stroke volume, were reduced by up to 1.4% and 1.8%, respectively. During ventricular systole, left atrial pressure was increased by up to 20% due to changes in the atrial activation sequence and the stiffening of scar tissue. CONCLUSION: This study provides biomechanical evidence that atrial ablation has acute effects not only on atrial contraction but also on ventricular performance. Therefore, the position and extent of ablation scars are not only important for the termination of arrhythmias but also determine long-term pumping efficiency. If confirmed in larger cohorts, these results have the potential to help tailor ablation strategies towards minimal global cardiovascular impairment.
Background and Objective: Planning the optimal ablation strategy for the treatment of complex atrial tachycardia (CAT) is a time-consuming and error-prone task. Recently, directed network mapping, a technology based on graph theory, proved to efficiently identify CAT based solely on data from clinical interventions. Briefly, a directed network is used to model the atrial electrical propagation, and reentrant activities are identified by looking for closed-loop paths in the network. In this study, we propose a recommender system, built as an optimization problem, able to suggest the optimal ablation strategy for the treatment of CAT. Methods: The optimization problem modeled the optimal ablation strategy as the one that interrupts all reentrant mechanisms while minimizing the ablated atrial surface. The problem was designed on top of directed network mapping. Considering the exponential complexity of finding the optimal solution of the problem, we introduced a heuristic algorithm with polynomial complexity. The proposed algorithm was applied to the data of (i) 6 simulated scenarios including both left and right atrial flutter, and (ii) 10 subjects who underwent a routine clinical intervention. Results: The recommender system suggested the optimal strategy in 4 out of 6 simulated scenarios. On clinical data, the recommended ablation lines were found satisfactory in 67% of the cases according to the clinician's opinion, while they were correctly located in 89%. The algorithm made use of only data collected during mapping and was able to process them in near real time. Conclusions: The first recommender system for the identification of optimal ablation lines for CAT, based solely on the data collected during the intervention, is presented. The study may open up interesting scenarios for the application of graph theory to the treatment of CAT.
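The underlying combinatorial problem (cut all closed-loop paths at minimal cost) can be pictured with a toy greedy sketch on a small directed propagation network, where each edge carries a hypothetical "ablation cost" and the cheapest edge of each remaining cycle is cut. This greedy scheme is an assumption made for illustration; it is polynomial but not guaranteed optimal, and it is not the authors' actual heuristic.

```python
import networkx as nx

# Toy directed "propagation network": two reentrant loops sharing node "c".
# The edge attribute "cost" stands in for the atrial surface ablated by a cut.
G = nx.DiGraph()
G.add_weighted_edges_from(
    [("a", "b", 1.0), ("b", "c", 2.0), ("c", "a", 3.0),   # loop 1
     ("c", "d", 1.5), ("d", "e", 1.0), ("e", "c", 0.5)],  # loop 2
    weight="cost",
)

def greedy_ablation_lines(g):
    """While any closed-loop path remains, cut the cheapest edge of some cycle."""
    g = g.copy()
    cuts = []
    while True:
        try:
            cycle = nx.find_cycle(g)
        except nx.NetworkXNoCycle:
            return cuts
        u, v = min(((u, v) for u, v, *_ in cycle),
                   key=lambda e: g.edges[e]["cost"])
        cuts.append((u, v))
        g.remove_edge(u, v)

cuts = greedy_ablation_lines(G)
print(cuts)
```

Here the two loops are edge-disjoint, so two cuts suffice and the greedy rule picks the cheapest edge in each loop, leaving the network free of reentries.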
Primary Central Nervous System Lymphoma (PCNSL) is an aggressive neoplasm with a poor prognosis. Although therapeutic progress has significantly improved Overall Survival (OS), a number of patients do not respond to HD-MTX-based chemotherapy (15-25%) or experience relapse (25-50%) after an initial response. The reasons underlying this poor response to therapy are unknown. Thus, there is an urgent need to develop improved predictive models for PCNSL. In this study, we investigated whether radiomics features can improve outcome prediction in patients with PCNSL. A total of 80 patients diagnosed with PCNSL were enrolled. A patient sub-group with complete Magnetic Resonance Imaging (MRI) series was selected for the stratification analysis. Following radiomics feature extraction and selection, different Machine Learning (ML) models were tested for OS and Progression-free Survival (PFS) prediction. To assess the stability of the selected features, images from 23 patients scanned at three different time points were used to compute the Intraclass Correlation Coefficient (ICC) and to evaluate the reproducibility of each feature for both original and normalized images. Features extracted from Z-score normalized images were significantly more stable than those extracted from non-normalized images, with an improvement of about 38% on average (p-value < 10⁻¹²). The area under the ROC curve (AUC) showed that radiomics-based prediction outperformed prediction based on current clinical prognostic factors, with an improvement of 23% for OS and 50% for PFS. These results indicate that radiomics features extracted from normalized MR images can improve prognosis stratification of PCNSL patients and pave the way for further study on their potential role in driving treatment choice.
AIMS: The long-term success rate of ablation therapy is still sub-optimal in patients with persistent atrial fibrillation (AF), mostly due to arrhythmia recurrence originating from arrhythmogenic sites outside the pulmonary veins. Computational modelling provides a framework to integrate and augment clinical data, potentially enabling the patient-specific identification of AF mechanisms and of the optimal ablation sites. We developed a technology to tailor ablations in anatomical and functional digital atrial twins of patients with persistent AF, aiming to identify the most successful ablation strategy. METHODS AND RESULTS: Twenty-nine patient-specific computational models integrating clinical information from tomographic imaging and electro-anatomical activation time and voltage maps were generated. Areas sustaining AF were identified by a personalized induction protocol at multiple locations. State-of-the-art anatomical and substrate ablation strategies were compared with our proposed Personalized Ablation Lines (PersonAL) plan, which consists of iteratively targeting emergent high dominant frequency (HDF) regions, to identify the optimal ablation strategy. Localized ablations were connected to the closest non-conductive barrier to prevent recurrence of AF or atrial tachycardia. The first application of the HDF strategy had a success rate of >98% and isolated only 5-6% of the left atrial myocardium. In contrast, conventional ablation strategies targeting anatomical or structural substrate resulted in isolation of up to 20% of the left atrial myocardium. After a second iteration of the HDF strategy, no further arrhythmia episode could be induced in any of the patient-specific models. CONCLUSION: The novel PersonAL in silico technology makes it possible to unveil all AF-perpetuating areas and to personalize ablation by leveraging atrial digital twins.
A. Loewe, A. Luik, R. Sassi, and P. Laguna. Together we are strong! Collaboration between clinicians and engineers as an enabler for better diagnosis and therapy of atrial arrhythmias. In Medical & Biological Engineering & Computing, 2023
The bidomain model and the finite element method are an established standard to mathematically describe cardiac electrophysiology, but both are suboptimal choices for fast and large-scale simulations due to high computational costs. We investigate to what extent simplified approaches for propagation models (monodomain, reaction-Eikonal and Eikonal) and forward calculation (boundary element and infinite volume conductor) deliver markedly accelerated, yet physiologically accurate simulation results in atrial electrophysiology. Methods: We compared action potential durations, local activation times (LATs), and electrocardiograms (ECGs) for sinus rhythm simulations on healthy and fibrotically infiltrated atrial models. Results: All simplified model solutions yielded LATs and P waves in close accordance with the bidomain results. Only for the Eikonal model with pre-computed action potential templates shifted in time to derive transmembrane voltages did repolarization behavior notably deviate from the bidomain results. ECGs calculated with the boundary element method were characterized by correlation coefficients >0.9 compared to the finite element method. The infinite volume conductor method led to lower correlation coefficients, caused predominantly by systematic overestimation of P wave amplitudes in the precordial leads. Conclusion: Our results demonstrate that the Eikonal model yields accurate LATs and, combined with the boundary element method, precise ECGs compared to markedly more expensive full bidomain simulations. However, for an accurate representation of atrial repolarization dynamics, diffusion terms must be accounted for in simplified models. Significance: Simulations of atrial LATs and ECGs can be notably accelerated to clinically feasible time frames at high accuracy by resorting to the Eikonal and boundary element methods.
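The core Eikonal idea, that activation spreads along shortest travel-time paths, can be caricatured by a shortest-path solver on a toy 2D grid: local activation time equals graph distance from the stimulus node with edge weights distance/velocity. Real reaction-Eikonal solvers use anisotropic fast marching on tetrahedral meshes, so the grid, node spacing, and conduction velocity below are purely illustrative assumptions.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

# Eikonal-style local activation times (LATs) on an n x n grid:
# LAT(x) = shortest travel time from the stimulus site.
n, h, cv = 40, 0.5, 0.8          # grid size, node spacing (mm), velocity (mm/ms)
idx = lambda i, j: i * n + j

A = lil_matrix((n * n, n * n))
for i in range(n):
    for j in range(n):
        for di, dj in ((1, 0), (0, 1), (1, 1), (1, -1)):  # half of 8-connectivity
            ii, jj = i + di, j + dj
            if 0 <= ii < n and 0 <= jj < n:
                dist = h * (di * di + dj * dj) ** 0.5
                A[idx(i, j), idx(ii, jj)] = dist / cv     # travel time (ms)

lat = dijkstra(A.tocsr(), directed=False, indices=idx(0, 0))
lat = lat.reshape(n, n)          # activation time map, stimulus at (0, 0)
print(lat[-1, -1])
```

On this isotropic grid the far corner activates along the diagonal, i.e. after 39 diagonal steps of length h·√2 traversed at velocity cv.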
Background: Progressive atrial fibrotic remodeling has been reported to be associated with atrial cardiomyopathy (ACM) and the transition from paroxysmal to persistent atrial fibrillation (AF). We sought to identify the anatomical/structural and electrophysiological factors involved in atrial remodeling that promote AF persistency. Methods: Consecutive patients with paroxysmal (n = 134) or persistent (n = 136) AF who presented for their first AF ablation procedure were included. Patients underwent left atrial (LA) high-definition mapping (1,835 ± 421 sites/map) during sinus rhythm (SR) and were randomized to training and validation sets for model development and evaluation. A total of 62 parameters from both electro-anatomical mapping and non-invasive baseline data were extracted, encompassing four main categories: (1) LA size, (2) extent of low-voltage substrate (LVS), (3) LA voltages and (4) bi-atrial conduction time as identified by the amplified P-wave duration (APWD) in a digital 12-lead ECG. Least absolute shrinkage and selection operator (LASSO) and logistic regression were performed to identify the factors that are most relevant to AF persistency in each category alone and in all categories combined. The performance of the developed models for diagnosis of AF persistency was validated regarding discrimination, calibration and clinical usefulness. In addition, the HATCH score and the C2HEST score were also evaluated for their performance in identification of AF persistency. Results: In training and validation sets, APWD (threshold 151 ms), LA volume (LAV, threshold 94 mL), bipolar LVS area < 1.0 mV (threshold 4.55 cm²) and LA global mean voltage (GMV, threshold 1.66 mV) were identified as the best determinants of AF persistency in the respective category. Moreover, APWD (AUC 0.851 and 0.801) and LA volume (AUC 0.788 and 0.741) achieved better discrimination between AF types than LVS extent (AUC 0.783 and 0.682) and GMV (AUC 0.751 and 0.707).
The integrated model (combining APWD and LAV) yielded the best discrimination performance between AF types (AUC 0.876 in the training set and 0.830 in the validation set). In contrast, the HATCH score and C2HEST score only achieved AUC < 0.60 in identifying individuals with persistent AF in the current study. Conclusion: Among 62 electro-anatomical parameters, we identified APWD, LA volume, LVS extent, and mean LA voltage as the four determinant electrophysiological and structural factors most relevant for AF persistency. Notably, the combination of APWD with LA volume enabled discrimination between paroxysmal and persistent AF with high accuracy, emphasizing their importance as underlying substrate of persistent AF.
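LASSO-style selection among many mapping parameters, as performed above, can be sketched with scikit-learn: an L1-penalized logistic regression drives the coefficients of irrelevant predictors to exactly zero, retaining a sparse set of determinants. The data here are synthetic; the two informative columns merely mimic the roles of APWD and LA volume, and nothing below reproduces the study's actual feature matrix.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Synthetic stand-in for 62 electro-anatomical parameters in 270 patients:
# only columns 0 and 1 actually drive the persistent-vs-paroxysmal label.
X = rng.normal(size=(270, 62))
logit = 1.8 * X[:, 0] + 1.2 * X[:, 1] + rng.normal(scale=0.5, size=270)
y = (logit > 0).astype(int)

# L1 (LASSO-type) penalty shrinks uninformative coefficients to exactly zero.
model = make_pipeline(StandardScaler(),
                      LogisticRegression(penalty="l1", solver="liblinear", C=0.1))
model.fit(X, y)
coef = model[-1].coef_.ravel()
selected = np.flatnonzero(coef)      # indices of retained predictors
print(selected)
```

The regularization strength C controls sparsity; in practice it would be chosen by cross-validation, and the surviving predictors would then be compared per category as in the study.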
The KCNQ1 gene encodes the α-subunit of the cardiac voltage-gated potassium (Kv) channel KCNQ1, also denoted as Kv7.1 or KvLQT1. The channel assembles with the β-subunit KCNE1, also known as minK, to generate the slowly activating cardiac delayed rectifier current IKs, a key regulator of the heart-rate-dependent adaptation of the cardiac action potential duration (APD). Loss-of-function variants in KCNQ1 cause congenital Long QT1 (LQT1) syndrome, characterized by delayed cardiac repolarization and QT interval prolongation in the surface electrocardiogram (ECG). Autosomal dominant loss-of-function variants in KCNQ1 result in the LQT syndrome called Romano-Ward syndrome (RWS), while autosomal recessive variants affecting function lead to Jervell and Lange-Nielsen syndrome (JLNS), associated with deafness. The aim of this study was the characterization of novel KCNQ1 variants identified in patients with RWS, to widen the spectrum of known LQT1 variants and improve the interpretation of the clinical relevance of variants in the KCNQ1 gene. We functionally characterized nine human KCNQ1 variants using the voltage-clamp technique in Xenopus laevis oocytes, seven of which are reported here for the first time. The functional data were taken as input to model surface ECGs and to subsequently compare the functional changes with the clinically observed QTc times, allowing a further interpretation of the severity of the different LQTS variants. We found that the electrophysiological properties of the variants correlate with the severity of the clinically diagnosed phenotype in most, although not all, cases. Electrophysiological studies combined with in silico modelling approaches are valuable components for the interpretation of the pathogenicity of KCNQ1 variants, but assessing the clinical severity demands the consideration of other factors included, for example, in the Schwartz score.
Life-threatening cardiac arrhythmias require immediate defibrillation. For state-of-the-art shock treatments, a high field strength is required to achieve a sufficient success rate for terminating the complex spiral wave (rotor) dynamics underlying cardiac fibrillation. However, such high-energy shocks have many adverse side effects due to the large electric currents applied. In this study, we show, using 2D simulations based on the Fenton-Karma model, that pulses of relatively low energy may also terminate the chaotic activity if applied at the right moment in time. In our simplified model for defibrillation, complex spiral waves are terminated by local perturbations corresponding to conductance heterogeneities acting as virtual electrodes in the presence of an external electric field. We demonstrate that time series of the success rate for low-energy shocks exhibit pronounced peaks corresponding to short time intervals during which perturbations aiming at terminating the chaotic fibrillation state are much more successful. Thus, the low-energy shock regime, although yielding very low temporal average success rates, exhibits moments in time at which success rates are significantly higher than the average value shown in dose-response curves. This feature might be exploited in future defibrillation protocols to achieve high termination success rates with low or medium pulse energies.
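The study's 2D simulations use the three-variable Fenton-Karma model; as a much simpler stand-in, a two-variable Barkley-type excitable medium with explicit Euler stepping and periodic boundaries illustrates the basic ingredients (local excitation kinetics plus diffusion) that such simulations combine. All parameter values below are generic textbook-scale choices, not the study's settings, and this sketch does not reproduce the virtual-electrode perturbations.

```python
import numpy as np

# Two-variable Barkley-type excitable medium: a simplified stand-in for the
# three-variable Fenton-Karma model. u = fast activation, v = slow recovery.
a, b, eps, D, dt, dx = 0.75, 0.06, 0.02, 1.0, 0.01, 0.5
n = 64
u = np.zeros((n, n))
v = np.zeros((n, n))
u[:, :5] = 1.0                # plane-wave stimulus at the left edge

def laplacian(f):
    # 5-point stencil with periodic boundary conditions
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f) / dx**2

for _ in range(200):          # integrate to t = 2 time units
    du = u * (1.0 - u) * (u - (v + b) / a) / eps + D * laplacian(u)
    u = np.clip(u + dt * du, 0.0, 1.0)   # clip guards against Euler overshoot
    v = v + dt * (u - v)
print(float(u.max()), float(v.max()))
```

Spiral waves would be initiated by breaking such a plane wave; in the study, local conductance heterogeneities then act as virtual electrodes whose well-timed perturbations can extinguish the spirals.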
Objective: Diagnosis of craniosynostosis using photogrammetric 3D surface scans is a promising radiation-free alternative to traditional computed tomography. We propose a 3D surface scan to 2D distance map conversion enabling the first convolutional neural network (CNN)-based classification of craniosynostosis. Benefits of using 2D images include preserving patient anonymity, enabling data augmentation during training, and a strong under-sampling of the 3D surface with good classification performance. Methods: The proposed distance maps sample 2D images from 3D surface scans using a coordinate transformation, ray casting, and distance extraction. We introduce a CNN-based classification pipeline and compare our classifier to alternative approaches on a dataset of 496 patients. We investigate low-resolution sampling, data augmentation, and attribution mapping. Results: ResNet18 outperformed alternative classifiers on our dataset with an F1-score of 0.964 and an accuracy of 98.4%. Data augmentation on 2D distance maps increased performance for all classifiers. Under-sampling allowed a 256-fold computation reduction during ray casting while retaining an F1-score of 0.92. Attribution maps showed high amplitudes on the frontal head. Conclusion: We demonstrated a versatile mapping approach to extract a 2D distance map from the 3D head geometry, increasing classification performance, enabling data augmentation during training on 2D distance maps, and allowing the usage of CNNs. We found that low-resolution images were sufficient for good classification performance. Significance: Photogrammetric surface scans are a suitable craniosynostosis diagnosis tool for clinical practice. Domain transfer to computed tomography seems likely and can further contribute to reducing ionizing radiation exposure for infants.
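The scan-to-distance-map conversion (coordinate transformation, ray casting, distance extraction) can be caricatured in a few lines: points of a synthetic head-like surface are binned by their spherical angles, and the radial distance per angular pixel is kept. The image size, the synthetic surface, and the max-per-pixel rule standing in for ray casting are all illustrative assumptions rather than the paper's method.

```python
import numpy as np

rng = np.random.default_rng(3)
# Stand-in "3D surface scan": noisy points on a sphere around the head center.
pts = rng.normal(size=(20000, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
pts *= (1.0 + 0.05 * rng.random(20000))[:, None]   # radial bumps mimic head shape

# Coordinate transformation to spherical angles, then rasterize the radius:
# each pixel of the 2D map holds the distance from the center along that ray.
r = np.linalg.norm(pts, axis=1)
theta = np.arccos(np.clip(pts[:, 2] / r, -1.0, 1.0))   # polar angle in [0, pi]
phi = np.arctan2(pts[:, 1], pts[:, 0]) + np.pi         # azimuth in [0, 2*pi)

H, W = 64, 128
rows = np.minimum((theta / np.pi * H).astype(int), H - 1)
cols = np.minimum((phi / (2 * np.pi) * W).astype(int), W - 1)
dist_map = np.zeros((H, W))
np.maximum.at(dist_map, (rows, cols), r)   # keep the outermost hit per pixel
print(dist_map.shape)
```

The resulting 2D image carries the head shape as pixel intensities while discarding identifiable texture, which is why such maps preserve anonymity and can be fed directly to a standard 2D CNN.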
L. Scherer, M. Kuss, and W. Nahm. Review of Artificial Intelligence–Based Signal Processing in Dialysis: Challenges for Machine-Embedded and Complementary Applications. In Advances in Kidney Disease and Health, vol. 30(1), pp. 40-46, 2023
Artificial intelligence technology is trending in nearly every medical area. It offers the possibility for improving analytics, therapy outcome, and user experience during therapy. In dialysis, the application of artificial intelligence as a therapy-individualization tool is led more by start-ups than consolidated players, and innovation in dialysis seems comparably stagnant. Factors such as technical requirements or regulatory processes are important and necessary but can slow down the implementation of artificial intelligence due to missing data infrastructure and undefined approval processes. Current research focuses mainly on analyzing health records or wearable technology to add to existing health data. It barely uses signal data from treatment devices to apply artificial intelligence models. This article, therefore, discusses requirements for signal processing through artificial intelligence in health care and compares these with the status quo in dialysis therapy. It offers solutions for given barriers to speed up innovation with sensor data, opening access to existing and untapped sources, and shows the unique advantage of signal processing in dialysis compared to other health care domains. This research shows that even though the combination of different data is vital for improving patients’ therapy, adding signal-based treatment data from dialysis devices to the picture can benefit the understanding of treatment dynamics, improving and individualizing therapy.
BACKGROUND: Electrical impedance measurements have become an accepted tool for monitoring intracardiac radio frequency ablation. Recently, the long-established generator impedance was joined by novel local impedance measurement capabilities with all electrical circuit terminals being accommodated within the catheter. OBJECTIVE: This work aims at in silico quantification of distinct influencing factors that have remained challenges due to the lack of ground truth knowledge and the superposition of effects in clinical settings. METHODS: We introduced a highly detailed in silico model of two local impedance enabled catheters, namely IntellaNav MiFi™ OI and IntellaNav Stablepoint™, embedded in a series of clinically relevant environments. Assigning material and frequency specific conductivities and subsequently calculating the spread of the electrical field with the finite element method yielded in silico local impedances. The in silico model was validated by comparison to in vitro measurements of standardized sodium chloride solutions. We then investigated the effect of the withdrawal of the catheter into the transseptal sheath, catheter-tissue interaction, insertion of the catheter into pulmonary veins, and catheter irrigation. RESULTS: All simulated setups were in line with in vitro experiments and in human measurements and gave detailed insight into determinants of local impedance changes as well as the relation between values measured with two different devices. CONCLUSION: The in silico environment proved to be capable of reproducing clinical scenarios and quantifying local impedance changes. SIGNIFICANCE: The tool can assist the interpretation of measurements in humans and has the potential to support future catheter development.
Approximating the fast dynamics of depolarization waves in the human heart described by the monodomain model is numerically challenging. Splitting methods for the PDE-ODE coupling enable the computation with very fine space and time discretizations. Here, we compare different splitting approaches regarding convergence, accuracy, and efficiency. Simulations were performed for a benchmark problem with the Beeler-Reuter cell model on a truncated ellipsoid approximating the left ventricle including a localized stimulation. For this configuration, we provide a reference solution for the transmembrane potential. We found a semi-implicit approach with state variable interpolation to be the most efficient scheme. The results are transferred to a more physiological setup using a bi-ventricular domain with a complex external stimulation pattern to evaluate the accuracy of the activation time for different resolutions in space and time.
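The PDE-ODE splitting idea compared above can be sketched on a 1D toy problem. The sketch below uses a Strang (second-order) splitting with an explicit diffusion update, a simpler variant than the semi-implicit scheme found most efficient in the study; the cubic FitzHugh-Nagumo-style reaction and all parameter values are illustrative stand-ins for the Beeler-Reuter model, not the benchmark configuration:

```python
import numpy as np

def fhn(v, w, dt, a=0.1, eps=0.01, b=0.5):
    """Forward-Euler update of a FitzHugh-Nagumo-style reaction (illustrative)."""
    v = v + dt * (v * (1 - v) * (v - a) - w)
    w = w + dt * eps * (v - b * w)
    return v, w

def strang_step(v, w, dt, dx, D, reaction):
    """One Strang splitting step for a 1D monodomain-type equation:
    half-step reaction ODE, full explicit diffusion step, half-step reaction.
    """
    v, w = reaction(v, w, dt / 2)
    lap = (np.roll(v, 1) - 2 * v + np.roll(v, -1)) / dx**2  # periodic Laplacian
    v = v + dt * D * lap
    v, w = reaction(v, w, dt / 2)
    return v, w
```

The attraction of splitting is that the stiff reaction and the diffusion can be advanced with different, individually suitable integrators while retaining second-order accuracy in time.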
Sinus node (SN) pacemaking is based on a coupling between surface membrane ion channels and intracellular Ca2+ handling. The fundamental role of the inward Na+/Ca2+ exchanger (NCX) is firmly established. However, little is known about reverse mode exchange. A simulation study attributed an important role to reverse NCX activity; however, experimental evidence is still missing. Whole-cell and perforated patch-clamp experiments were performed on rabbit SN cells supplemented with fluorescent Ca2+ tracking. We established 2 and 8 mM pipette NaCl groups to suppress and enable reverse NCX. NCX was assessed by specific block with 1 μM ORM-10962. Mechanistic simulations were performed with the Maltsev–Lakatta minimal computational SN model. Active reverse NCX resulted in a larger Ca2+ transient amplitude with larger SR Ca2+ content. Spontaneous action potential (AP) frequency increased with 8 mM NaCl. When reverse NCX was facilitated by 1 μM strophanthin, the Ca2+i and spontaneous rate increased. ORM-10962 applied prior to strophanthin prevented Ca2+i and AP cycle changes. Computational simulations indicated gradually increasing reverse NCX current, Ca2+i and heart rate with increasing Na+i. Our results provide further evidence for the role of reverse NCX in SN pacemaking. The reverse NCX activity may provide an additional Ca2+ influx that could increase the SR Ca2+ content, which consequently leads to enhanced pacemaking activity.
Mechanistic cardiac electrophysiology models allow for personalized simulations of the electrical activity in the heart and the ensuing electrocardiogram (ECG) on the body surface. As such, synthetic signals possess known ground truth labels of the underlying disease and can be employed for validation of machine learning ECG analysis tools in addition to clinical signals. Recently, synthetic ECGs were used to enrich sparse clinical data or even replace them completely during training, leading to improved performance on real-world clinical test data. We thus generated a novel synthetic database comprising a total of 16,900 12-lead ECGs based on electrophysiological simulations equally distributed into healthy control and 7 pathology classes. The pathological case of myocardial infarction had 6 sub-classes. A comparison of extracted features between the virtual cohort and a publicly available clinical ECG database demonstrated that the synthetic signals represent clinical ECGs for healthy and pathological subpopulations with high fidelity. The ECG database is split into training, validation, and test folds for development and objective assessment of novel machine learning algorithms.
A. Amsaleg, J. Sánchez, R. Mikut, and A. Loewe. Characterization of the pace-and-drive capacity of the human sinoatrial node: A 3D in silico study. In Biophysical Journal, vol. 121(22), pp. 4247-4259, 2022
The sinoatrial node (SAN) is a complex structure that spontaneously depolarizes rhythmically ("pacing") and excites the surrounding non-automatic cardiac cells ("drive") to initiate each heartbeat. However, the mechanisms by which the SAN cells can activate the large and hyperpolarized surrounding cardiac tissue are incompletely understood. Experimental studies demonstrated the presence of an insulating border that separates the SAN from the hyperpolarizing influence of the surrounding myocardium, except at a discrete number of sinoatrial exit pathways (SEPs). We propose a highly detailed 3D model of the human SAN, including 3D SEPs, to study the requirements for successful electrical activation of the primary pacemaking structure of the human heart. A total of 788 simulations investigate the ability of the SAN to pace and drive with different heterogeneous characteristics of the nodal tissue (gradient and mosaic models) and myocyte orientation. A sigmoidal distribution of the tissue conductivity combined with a mosaic model of SAN and atrial cells in the SEP was able to drive the right atrium (RA) at varying rates induced by gradual If block. Additionally, we investigated the influence of the SEPs by varying their number, length, and width. SEPs created a transition zone of transmembrane voltage and ionic currents to enable successful pace and drive. Unsuccessful simulations showed a hyperpolarized transmembrane voltage (-66 mV), which blocked the L-type channels and attenuated the sodium-calcium exchanger. The fiber direction influenced the SEPs that preferentially activated the crista terminalis (CT). The location of the leading pacemaker site (LPS) shifted toward the SEP-free areas. LPSs were located closer to the SEP-free areas (3.46 ± 1.42 mm), where the hyperpolarizing influence of the CT was reduced, compared with a larger distance from the LPS to the areas where SEPs were located (7.17 ± 0.98 mm).
This study identified the geometrical and electrophysiological aspects of the 3D SAN-SEP-CT structure required for successful pace and drive in silico.
Interatrial conduction block refers to a disturbance in the propagation of electrical impulses in the conduction pathways between the right and the left atrium. It is a risk factor for atrial fibrillation, stroke, and premature death. Clinical diagnostic criteria comprise an increased P wave duration and biphasic P waves in leads II, III and aVF due to retrograde activation of the left atrium. Machine learning algorithms could improve the diagnosis but require a large-scale, well-controlled and balanced dataset. In silico electrocardiogram (ECG) signals, optimally obtained from a statistical shape model to cover anatomical variability, carry the potential to produce an extensive database meeting the requirements for successful machine learning application. We generated the first in silico dataset including interatrial conduction block of 9,800 simulated ECG signals based on a bi-atrial statistical shape model. Automated feature analysis was performed to evaluate P wave morphology, duration and P wave terminal force in lead V1. Increased P wave duration and P wave terminal force in lead V1 were found for models with interatrial conduction block compared to healthy models. A wide variability of P wave morphology was detected for models with interatrial conduction block. Contrary to previous assumptions, our results suggest that a biphasic P wave morphology seems to be neither necessary nor sufficient for the diagnosis of interatrial conduction block. The presented dataset is ready for a classification with machine learning algorithms and can be easily extended.
Atrial fibrillation is responsible for a significant and steadily rising burden. Simultaneously, the treatment options for atrial fibrillation are far from optimal. Personalized simulations of cardiac electrophysiology could assist clinicians in the risk stratification and therapy planning for atrial fibrillation. However, the use of personalized simulations in clinics is currently not possible due to either too high computational costs or insufficient accuracy. Eikonal simulations come with low computational costs but cannot replicate the influence of cardiac tissue geometry on the conduction velocity of the wave propagation. Consequently, they currently lack the required accuracy to be applied in clinics. Biophysically detailed simulations, on the other hand, are accurate but associated with too high computational costs. To tackle this issue, a regression model was created based on biophysically detailed bidomain simulation data. This regression formula calculates the conduction velocity depending on the thickness and curvature of the heart wall. Afterwards, the formula was implemented in the eikonal model with the goal of increasing the accuracy of the eikonal model without losing its advantage of computational efficiency. The results of the modified eikonal simulations demonstrate that (i) the local activation times become significantly closer to those of the biophysically detailed bidomain simulations, (ii) the already low sensitivity of the eikonal model to the resolution of the mesh was reduced further, and (iii) the unrealistic occurrence of endo-epicardial dissociation in simulations was remedied. The results suggest that the accuracy of the eikonal model was significantly increased. At the same time, the additional computational costs caused by the implementation of the regression formula are negligible. In conclusion, a successful step towards a more accurate and fast computational model of cardiac electrophysiology was achieved.
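The coupling described above, a regression mapping wall thickness and curvature to conduction velocity that then feeds a fast activation-time solver, can be sketched as follows. The regression coefficients are hypothetical placeholders (the paper's fitted formula is not given here), and a graph-based Dijkstra solver stands in for a full eikonal solver:

```python
import heapq
import numpy as np

def cv_regression(thickness, curvature, c0=0.6, a=0.1, b=-0.2):
    """Hypothetical regression: CV (m/s) as a function of wall thickness (mm)
    and curvature (1/mm). Coefficients are illustrative, not from the paper."""
    return np.maximum(c0 + a * thickness + b * curvature, 0.05)

def eikonal_activation(adjacency, lengths, cv, source):
    """Dijkstra-based eikonal-style solve on a graph: earliest activation
    times with edge travel time = edge length / local conduction velocity."""
    t = np.full(len(cv), np.inf)
    t[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        ti, i = heapq.heappop(heap)
        if ti > t[i]:
            continue  # stale heap entry
        for j, L in zip(adjacency[i], lengths[i]):
            tj = ti + L / (0.5 * (cv[i] + cv[j]))  # harmonic-free edge CV average
            if tj < t[j]:
                t[j] = tj
                heapq.heappush(heap, (tj, j))
    return t
```

Because the geometry dependence is folded into the per-node CV values, the solver itself stays as cheap as a plain eikonal model, which mirrors the paper's argument that the added cost is negligible.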
Craniosynostosis is a congenital disease characterized by the premature closure of one or multiple sutures of the infant's skull. For diagnosis, 3D photogrammetric scans are a radiation-free alternative to computed tomography. However, data is only sparsely available and the role of data augmentation for the classification of craniosynostosis has not yet been analyzed. In this work, we use a 2D distance map representation of the infants' heads with a convolutional-neural-network-based classifier and employ a generative adversarial network (GAN) for data augmentation. We simulate two data scarcity scenarios with 15 % and 10 % training data and test the influence of different degrees of added synthetic data and balancing under-represented classes. We used total accuracy and F1-score as metrics to evaluate the final classifiers. For 15 % training data, the GAN-augmented dataset showed an increased F1-score of up to 0.1 and classification accuracy of up to 3 %. For 10 % training data, both metrics decreased. We present a deep convolutional GAN capable of creating synthetic data for the classification of craniosynostosis. Using a moderate amount of synthetic data generated by a GAN showed slightly better performance, but had little effect overall. The simulated scarcity scenario of 10 % training data may have limited the model's ability to learn the underlying data distribution.
Conduction velocity (CV) slowing is associated with atrial fibrillation (AF) and reentrant ventricular tachycardia (VT). Clinical electroanatomical mapping systems used to localize AF or VT sources as ablation targets remain limited by the number of measuring electrodes and signal processing methods to generate high-density local activation time (LAT) and CV maps of heterogeneous atrial or trabeculated ventricular endocardium. The morphology and amplitude of bipolar electrograms depend on the direction of the propagating electrical wavefront, making the identification of low-amplitude signal sources, commonly associated with fibrotic areas, difficult. In comparison, unipolar electrograms are not sensitive to wavefront direction, but measurements are susceptible to distal activity. This study proposes a method for local CV calculation from optical mapping measurements, termed the circle method (CM). The local CV is obtained as a weighted sum of CV values calculated along different chords spanning a circle of predefined radius centered at a CV measurement location. As a distinct maximum in LAT differences is along the chord normal to the propagating wavefront, the method is adaptive to changes of the propagating wavefront direction and thus suitable for electrical conductivity characterization of heterogeneous myocardium. In numerical simulations, CM was validated by characterizing modeled ablated areas as zones of distinct CV slowing. Experimentally, CM was used to characterize lesions created by radiofrequency ablation (RFA) on isolated rat and guinea pig hearts and on explanted human hearts. To infer the depth of RFA-created lesions, excitation light bands of different penetration depths were used, and a beat-to-beat CV difference analysis was performed to identify CV alternans. Despite being limited to laboratory research, studies based on CM with optical mapping may lead to new translational insights into better-guided ablation therapies.
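A minimal sketch of the circle method on a gridded LAT map might look as follows. The nearest-neighbor LAT lookup and the power-law weighting of chord-wise LAT differences are illustrative choices made here for the sketch, not details taken from the paper:

```python
import numpy as np

def lat_at(lat, x, y, px, py):
    """Nearest-neighbor lookup in a local activation time (LAT) map."""
    return lat[np.argmin(np.abs(y - py)), np.argmin(np.abs(x - px))]

def circle_method_cv(lat, x, y, center, radius, n_chords=8, p=8):
    """Circle-method CV estimate at `center`: sample LAT at diametrically
    opposite points of a circle, compute the apparent CV along each chord,
    and combine them in a weighted sum. Weighting by a high power `p` of
    the LAT difference (an illustrative choice) lets the chord normal to
    the wavefront, which shows the maximal LAT difference, dominate."""
    cx, cy = center
    dts, cvs = [], []
    for a in np.linspace(0.0, np.pi, n_chords, endpoint=False):
        dx, dy = radius * np.cos(a), radius * np.sin(a)
        dt = abs(lat_at(lat, x, y, cx + dx, cy + dy)
                 - lat_at(lat, x, y, cx - dx, cy - dy))
        if dt > 0:  # chords parallel to the wavefront carry no information
            dts.append(dt)
            cvs.append(2 * radius / dt)
    w = np.asarray(dts) ** p
    return float(np.sum(w / w.sum() * np.asarray(cvs)))
```

For a planar wave, the chord aligned with the propagation direction recovers the true CV, while oblique chords overestimate it; the sharp weighting suppresses that bias.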
Aims Atrial flutter (AFlut) is a common re-entrant atrial tachycardia driven by self-sustainable mechanisms that cause excitations to propagate along pathways different from sinus rhythm. Intra-cardiac electrophysiological mapping and catheter ablation are often performed without detailed prior knowledge of the mechanism perpetuating AFlut, likely prolonging the procedure time of these invasive interventions. We sought to discriminate the AFlut location [cavotricuspid isthmus-dependent (CTI), peri-mitral, and other left atrium (LA) AFlut classes] with a machine learning-based algorithm using only the non-invasive signals from the 12-lead electrocardiogram (ECG). Methods and results A hybrid 12-lead ECG dataset of 1769 signals was used (1424 in silico ECGs and 345 clinical ECGs from 115 patients; three different ECG segments over time were extracted from each patient, corresponding to single AFlut cycles). Seventy-seven features were extracted. A decision tree classifier with a hold-out classification approach was trained, validated, and tested on the dataset randomly split after selecting the most informative features. The clinical test set comprised 38 patients (114 clinical ECGs). The classifier yielded 76.3% accuracy on the clinical test set with a sensitivity of 89.7%, 75.0%, and 64.1% and a positive predictive value of 71.4%, 75.0%, and 86.2% for the CTI, peri-mitral, and other LA class, respectively. Considering the majority vote of the three segments taken from each patient, the CTI class was correctly classified at 92%. Conclusion Our results show that a machine learning classifier relying only on non-invasive signals can potentially identify the location of AFlut mechanisms. This method could aid in planning and tailoring patient-specific AFlut treatments.
L. Krames, P. Suppa, and W. Nahm. Does the 3D Feature Descriptor Impact the Registration Accuracy in Laparoscopic Liver Surgery? In Current Directions in Biomedical Engineering, vol. 8(1), pp. 46-49, 2022
In laparoscopic liver surgery (LLS), image-guided navigation systems could support the surgeon by providing subsurface information such as the positions of tumors and vessels. For this purpose, one option is to perform a registration of preoperative 3D data and 3D surface patches reconstructed from laparoscopic images. Part of an automatic 3D registration pipeline is the feature description, which takes into account various geometric and spatial information. Since there is no leading feature descriptor in the field of LLS, two feature descriptors are compared in this paper: the Fast Point Feature Histogram (FPFH) and Triple Orthogonal Local Depth Images (TOLDI). To evaluate their performance, three perturbations were induced: varying surface patch sizes, spatial displacement, and Gaussian deformation. Registration was performed using the RANSAC algorithm. FPFH outperformed TOLDI for small surface patches and in case of Gaussian deformations in terms of registration accuracy. In contrast, TOLDI showed lower registration errors for patches with spatial displacement. While developing a 3D-3D registration pipeline, the choice of the feature descriptor is of importance; consequently, a careful choice suitable for the application in LLS is necessary.
Background: Craniosynostosis is a condition caused by the premature fusion of skull sutures, leading to irregular growth patterns of the head. Three-dimensional photogrammetry is a radiation-free alternative to the diagnosis using computed tomography. While statistical shape models have been proposed to quantify head shape, no shape-model-based classification approach has been presented yet. Methods: We present a classification pipeline that enables an automated diagnosis of three types of craniosynostosis. The pipeline is based on a statistical shape model built from photogrammetric surface scans. We made the model and pathology-specific submodels publicly available, making it the first publicly available craniosynostosis-related head model, as well as the first focusing on infants younger than 1.5 years. To the best of our knowledge, we performed the largest classification study for craniosynostosis to date. Results: Our classification approach yields an accuracy of 97.8 %, comparable to other state-of-the-art methods using both computed tomography scans and stereophotogrammetry. Regarding the statistical shape model, we demonstrate that our model performs similar to other statistical shape models of the human head. Conclusion: We present a state-of-the-art shape-model-based classification approach for a radiation-free diagnosis of craniosynostosis. Our publicly available shape model enables the assessment of craniosynostosis on realistic and synthetic data.
Objective: To investigate cardiac activation maps estimated using electrocardiographic imaging and to find methods reducing line-of-block (LoB) artifacts, while preserving real LoBs. Methods: Body surface potentials were computed for 137 simulated ventricular excitations. Subsequently, the inverse problem was solved to obtain extracellular potentials (EP) and transmembrane voltages (TMV). From these, activation times (AT) were estimated using four methods and compared to the ground truth. This process was evaluated with two cardiac mesh resolutions. Factors contributing to LoB artifacts were identified by analyzing the impact of spatial and temporal smoothing on the morphology of source signals. Results: AT estimation using a spatiotemporal derivative performed better than using a temporal derivative. Compared to deflection-based AT estimation, correlation-based methods were less prone to LoB artifacts but performed worse in identifying real LoBs. Temporal smoothing could eliminate artifacts for TMVs but not for EPs, which could be linked to their temporal morphology. TMVs led to more accurate ATs on the septum than EPs. Mesh resolution had a negligible effect on inverse reconstructions, but small distances were important for cross-correlation-based estimation of AT delays. Conclusion: LoB artifacts are mainly caused by the inherent spatial smoothing effect of the inverse reconstruction. Among the configurations evaluated, only deflection-based AT estimation in combination with TMVs and strong temporal smoothing can prevent LoB artifacts, while preserving real LoBs. Significance: Regions of slow conduction are of considerable clinical interest and LoB artifacts observed in non-invasive ATs can lead to misinterpretations. We addressed this problem by identifying factors causing such artifacts and methods to reduce them.
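The deflection-based activation time estimation discussed above can be sketched in a few lines. The sign convention (steepest upstroke for transmembrane voltages, steepest downstroke for extracellular potentials) follows the usual definitions, while the function and parameter names are illustrative:

```python
import numpy as np

def activation_time(signal, t, mode="tmv"):
    """Deflection-based activation time estimate for one source node:
    the steepest upstroke (max dV/dt) for transmembrane voltages (TMV),
    or the steepest downstroke (min dV/dt) for extracellular potentials (EP).
    """
    dv = np.gradient(signal, t)  # numerical temporal derivative
    idx = np.argmax(dv) if mode == "tmv" else np.argmin(dv)
    return t[idx]
```

Because the inverse reconstruction spatially smooths the source signals, such deflection maxima can flatten or split, which is one mechanism behind the line-of-block artifacts the paper analyzes.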
J. Sánchez, and A. Loewe. A Review of Healthy and Fibrotic Myocardium Microstructure Modeling and Corresponding Intracardiac Electrograms. In Frontiers in Physiology, vol. 13, 2022
Computational simulations of cardiac electrophysiology provide detailed information on the depolarization phenomena at different spatial and temporal scales. With the development of new hardware and software, in silico experiments have gained more importance in cardiac electrophysiology research. For plane waves in healthy tissue, in vivo and in silico electrograms at the surface of the tissue demonstrate symmetric morphology and high peak-to-peak amplitude. Simulations provided insight into the factors that alter the morphology and amplitude of the electrograms. The situation is more complex in remodeled tissue with fibrotic infiltrations. Clinically, different changes including fractionation of the signal, extended duration and reduced amplitude have been described. In silico, numerous approaches have been proposed to represent the pathological changes on different spatial and functional scales. Different modeling approaches can reproduce distinct subsets of the clinically observed electrogram phenomena. This review provides an overview of how different modeling approaches to incorporate fibrotic and structural remodeling affect the electrogram and highlights open challenges to be addressed in future research.
Cardiac resynchronization therapy is a valuable tool to restore left ventricular function in patients experiencing dyssynchronous ventricular activation. However, the non-responder rate is still as high as 40%. Recent studies suggest that left ventricular torsion or specifically the lack thereof might be a good predictor for the response of cardiac resynchronization therapy. Since left ventricular torsion is governed by the muscle fiber orientation and the heterogeneous electromechanical activation of the myocardium, understanding the relation between these components and the ability to measure them is vital. To analyze if locally altered electromechanical activation in heart failure patients affects left ventricular torsion, we conducted a simulation study on 27 personalized left ventricular models. Electroanatomical maps and late gadolinium enhanced magnetic resonance imaging data informed our in-silico model cohort. The angle of rotation was evaluated in every material point of the model and averaged values were used to classify the rotation as clockwise or counterclockwise in each segment and sector of the left ventricle. 88% of the patient models (n = 24) were classified as a wringing rotation and 12% (n = 3) as a rigid-body-type rotation. Comparison to classification based on in vivo rotational NOGA XP maps showed no correlation. Thus, isolated changes of the electromechanical activation sequence in the left ventricle are not sufficient to reproduce the rotation pattern changes observed in vivo and suggest that further patho-mechanisms are involved.
C. Nagel, M. Schaufelberger, O. Dössel, and A. Loewe. A Bi-atrial Statistical Shape Model as a Basis to Classify Left Atrial Enlargement from Simulated and Clinical 12-Lead ECGs. In Statistical Atlases and Computational Models of the Heart. Multi-Disease, Multi-View, and Multi-Center Right Ventricular Segmentation in Cardiac MRI Challenge, vol. 13131, pp. 38-47, 2022
Left atrial enlargement (LAE) is one of the risk factors for atrial fibrillation (AF). A non-invasive and automated detection of LAE with the 12-lead electrocardiogram (ECG) could therefore contribute to an improved AF risk stratification and an early detection of new-onset AF incidents. However, one major challenge when applying machine learning techniques to identify and classify cardiac diseases usually lies in the lack of large, reliably labeled and balanced clinical datasets. We therefore examined if the extension of clinical training data by simulated ECGs derived from a novel bi-atrial shape model could improve the automated detection of LAE based on P waves of the 12-lead ECG. We derived 95 volumetric geometries from the bi-atrial statistical shape model with continuously increasing left atrial volumes in the range of 30 ml to 65 ml. Electrophysiological simulations with 10 different conduction velocity settings and 2 different torso models were conducted. Extracting the P waves of the 12-lead ECG thus yielded a synthetic dataset of 1,900 signals. Besides the simulated data, 7,168 healthy and 309 LAE ECGs from a public clinical ECG database were available for training and testing of an LSTM network to identify LAE. The class imbalance of the training data could be reduced from 1:23 to 1:6 when adding simulated data to the training set. The accuracy evaluated on the test dataset comprising a subset of the clinical ECG recordings improved from 0.91 to 0.95 if simulated ECGs were included as an additional input for the training of the classifier. Our results suggest that using a bi-atrial statistical shape model as a basis for ECG simulations can help to overcome the drawbacks of clinical ECG recordings and can thus lead to an improved performance of machine learning classifiers to detect LAE based on the 12-lead ECG.
T. Zheng, L. Azzolin, J. Sánchez, O. Dössel, and A. Loewe. An automated pipeline for generating fiber orientation and region annotation in patient-specific atrial models. In Current Directions in Biomedical Engineering, vol. 7(2), pp. 136-139, 2021
A. Wachter, and W. Nahm. Workflow Augmentation of Video Data for Event Recognition with Time-Sensitive Neural Networks. In eprint, 2021
Supervised training of neural networks requires large, diverse and well annotated data sets. In the medical field, this is often difficult to achieve due to constraints in time, expert knowledge and prevalence of an event. Artificial data augmentation can help to prevent overfitting and improve the detection of rare events as well as overall performance. However, most augmentation techniques use purely spatial transformations, which are not sufficient for video data with temporal correlations. In this paper, we present a novel methodology for workflow augmentation and demonstrate its benefit for event recognition in cataract surgery. The proposed approach increases the frequency of event alternation by creating artificial videos. The original video is split into event segments and a workflow graph is extracted from the original annotations. Finally, the segments are assembled into new videos based on the workflow graph. Compared to the original videos, the frequency of event alternation in the augmented cataract surgery videos increased by 26%. Further, a 3% higher classification accuracy and a 7.8% higher precision were achieved compared to a state-of-the-art approach. Our approach is particularly helpful to increase the occurrence of rare but important events and can be applied to a large variety of use cases.
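The augmentation principle (segment the annotated video, extract a workflow graph of allowed event transitions, and reassemble segments into new sequences) can be sketched at the annotation level. The event names and helper functions below are illustrative; a real implementation would splice the corresponding video segments, not just the labels:

```python
import random
from collections import defaultdict

def build_workflow_graph(annotations):
    """Extract a workflow graph (event -> possible successor events)
    from the ordered list of per-segment event labels."""
    graph = defaultdict(set)
    for a, b in zip(annotations, annotations[1:]):
        graph[a].add(b)
    return {k: sorted(v) for k, v in graph.items()}

def augment_sequence(graph, start, length, rng):
    """Assemble a new, workflow-consistent event sequence by walking the
    graph; stops early at events with no observed successor."""
    seq = [start]
    while len(seq) < length and seq[-1] in graph:
        seq.append(rng.choice(graph[seq[-1]]))
    return seq
```

Every generated sequence only contains transitions observed in the original annotations, so the artificial videos stay plausible while event alternations become more frequent.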
BACKGROUND AND OBJECTIVE: Cardiac electrophysiology is a medical specialty with a long and rich tradition of computational modeling. Nevertheless, no community standard for cardiac electrophysiology simulation software has evolved yet. Here, we present the openCARP simulation environment as one solution that could foster the needs of large parts of this community. METHODS AND RESULTS: openCARP and the Python-based carputils framework allow developing and sharing simulation pipelines which automate in silico experiments including all modeling and simulation steps to increase reproducibility and productivity. The continuously expanding openCARP user community is supported by tailored infrastructure. Documentation and training material facilitate access to this complementary research tool for new users. After a brief historic review, this paper summarizes requirements for a high-usability electrophysiology simulator and describes how openCARP fulfills them. We introduce the openCARP modeling workflow in a multi-scale example of atrial fibrillation simulations on single cell, tissue, organ and body level and finally outline future development potential. CONCLUSION: As an open simulator, openCARP can advance the computational cardiac electrophysiology field by making state-of-the-art simulations accessible. In combination with the carputils framework, it offers a tailored software solution for the scientific community and contributes towards increasing use, transparency, standardization and reproducibility of in silico experiments.
H. Anzt, and A. Loewe. Forschungssoftware – Nachhaltige Entwicklung und Bereitstellung. In Forschung & Lehre, vol. 28(5), pp. 380-381, 2021
C. Nagel, G. Luongo, L. Azzolin, S. Schuler, O. Dössel, and A. Loewe. Non-Invasive and Quantitative Estimation of Left Atrial Fibrosis Based on P Waves of the 12-Lead ECG—A Large-Scale Computational Study Covering Anatomical Variability. In Journal of Clinical Medicine, vol. 10(8), pp. 1797, 2021
The arrhythmogenesis of atrial fibrillation is associated with the presence of fibrotic atrial tissue. Not only fibrosis but also physiological anatomical variability of the atria and the thorax are reflected in an altered morphology of the P wave in the 12-lead electrocardiogram (ECG). Distinguishing between the effects on the P wave induced by local atrial substrate changes and those caused by healthy anatomical variations is important to gauge the potential of the 12-lead ECG as a non-invasive and cost-effective tool for the early detection of fibrotic atrial cardiomyopathy to stratify atrial fibrillation propensity. In this work, we realized 54,000 combinations of different atria and thorax geometries from statistical shape models capturing anatomical variability in the general population. For each atrial model, 10 different volume fractions (0-45%) were defined as fibrotic. Electrophysiological simulations in sinus rhythm were conducted for each model combination and the respective 12-lead ECGs were computed. P wave features (duration, amplitude, dispersion, terminal force in V1) were extracted and compared between the healthy and the diseased model cohorts. All investigated feature values systematically increased or decreased with the left atrial volume fraction covered by fibrotic tissue; however, value ranges overlapped between the healthy and the diseased cohort. Using all extracted P wave features as input values, the amount of the fibrotic left atrial volume fraction was estimated by a neural network with an absolute root mean square error of 8.78%. Our simulation results suggest that although all investigated P wave features vary considerably for different anatomical properties, the combination of these features can contribute to non-invasively estimating the volume fraction of atrial fibrosis using ECG-based machine learning approaches.
AIMS: The treatment of atrial fibrillation beyond pulmonary vein isolation has remained an unsolved challenge. Targeting regions identified by different substrate mapping approaches for ablation resulted in ambiguous outcomes. With the effective refractory period being a fundamental prerequisite for the maintenance of fibrillatory conduction, this study aims at estimating the effective refractory period with clinically available measurements. METHODS AND RESULTS: A set of 240 simulations in a spherical model of the left atrium with varying model initialization, combination of cellular refractory properties, and size of a region of lowered effective refractory period was implemented to analyse the capabilities and limitations of cycle length mapping. The minimum observed cycle length and the 25% quantile were compared to the underlying effective refractory period. The density of phase singularities was used as a measure for the complexity of the excitation pattern. Finally, we employed the method in a clinical test of concept including five patients. Areas of lowered effective refractory period could be distinguished from their surroundings in simulated scenarios with successfully induced multi-wavelet re-entry. Larger areas and higher gradients in effective refractory period as well as complex activation patterns favour the method. The 25% quantile of cycle lengths in patients with persistent atrial fibrillation was found to range from 85 to 190 ms. CONCLUSION: Cycle length mapping is capable of highlighting regions of pathologic refractory properties. In combination with complementary substrate mapping approaches, the method fosters confidence to enhance the treatment of atrial fibrillation beyond pulmonary vein isolation particularly in patients with complex activation patterns.
Research software has become a central asset in academic research. It optimizes existing and enables new research methods, implements and embeds research knowledge, and constitutes an essential research product in itself. Research software must be sustainable in order to understand, replicate, reproduce, and build upon existing research or conduct new research effectively. In other words, software must be available, discoverable, usable, and adaptable to new needs, both now and in the future. Research software therefore requires an environment that supports sustainability. Hence, a change is needed in the way research software development and maintenance are currently motivated, incentivized, funded, structurally and infrastructurally supported, and legally treated. Failing to do so will threaten the quality and validity of research. In this paper, we identify challenges for research software sustainability in Germany and beyond, in terms of motivation, selection, research software engineering personnel, funding, infrastructure, and legal aspects. Besides researchers, we specifically address political and academic decision-makers to increase awareness of the importance and needs of sustainable research software practices. In particular, we recommend strategies and measures to create an environment for sustainable research software, with the ultimate goal to ensure that software-driven research is valid, reproducible and sustainable, and that software is recognized as a first class citizen in research. This paper is the outcome of two workshops run in Germany in 2019, at deRSE19 - the first International Conference of Research Software Engineers in Germany - and a dedicated DFG-supported follow-up workshop in Berlin.
OBJECTIVE: Atrial flutter (AFl) is a common arrhythmia that can be categorized according to different self-sustained electrophysiological mechanisms. The non-invasive discrimination of such mechanisms would greatly benefit ablative methods for AFl therapy as the driving mechanisms would be described prior to the invasive procedure, helping to guide ablation. In the present work, we sought to implement recurrence quantification analysis (RQA) on 12-lead ECG signals from a computational framework to discriminate different electrophysiological mechanisms sustaining AFl. METHODS: 20 different AFl mechanisms were generated in 8 atrial models and were propagated into 8 torso models via forward solution, resulting in 1,256 sets of 12-lead ECG signals. Principal component analysis was applied on the 12-lead ECGs, and six RQA-based features were extracted from the most significant principal component scores in two different approaches: individual component RQA and spatial reduced RQA. RESULTS: In both approaches, RQA-based features were significantly sensitive to the dynamic structures underlying different AFl mechanisms. A hit rate as high as 67.7% was achieved when discriminating the 20 AFl mechanisms. RQA-based features estimated for a clinical sample suggested high agreement with the results found in the computational framework. CONCLUSION: RQA has been shown to be an effective method to distinguish different AFl electrophysiological mechanisms in a non-invasive computational framework. A clinical 12-lead ECG used as proof of concept showed the value of both the simulations and the methods. SIGNIFICANCE: The non-invasive discrimination of AFl mechanisms helps to delineate the ablation strategy, reducing time and resources required to conduct invasive cardiac mapping and ablation procedures.
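The recurrence-based features used here can be illustrated with a minimal, unembedded RQA sketch on a toy periodic signal; recurrence rate and determinism are two of the standard RQA measures (this is not the paper's exact pipeline, which works on principal component scores of the 12-lead ECG):

```python
import numpy as np

def recurrence_matrix(x, eps):
    # thresholded pairwise distance matrix of a 1D signal
    d = np.abs(x[:, None] - x[None, :])
    return (d < eps).astype(int)

def recurrence_rate(R):
    # fraction of recurrent point pairs
    return R.mean()

def determinism(R, lmin=2):
    # fraction of recurrent points lying on diagonal lines of length >= lmin
    n = R.shape[0]
    runs = {}
    for k in range(-(n - 1), n):
        run = 0
        for v in np.append(np.diagonal(R, offset=k), 0):
            if v:
                run += 1
            elif run:
                runs[run] = runs.get(run, 0) + 1
                run = 0
    total = sum(l * c for l, c in runs.items())
    long_ = sum(l * c for l, c in runs.items() if l >= lmin)
    return long_ / total if total else 0.0

# a periodic signal yields highly deterministic recurrence structures
t = np.linspace(0, 4 * np.pi, 200)
R = recurrence_matrix(np.sin(t), eps=0.1)
print(recurrence_rate(R), determinism(R))
```

Chaotic or fibrillatory dynamics would show shorter diagonal structures and hence lower determinism than this regular example.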
Acute ischemic stroke is a major health problem with a high mortality rate and a high risk for permanent disabilities. Selective brain hypothermia has the neuroprotective potential to reduce cerebral harm. A recently developed catheter system makes it possible to combine endovascular blood cooling and thrombectomy using the same endovascular access. By using the penumbral perfusion via leptomeningeal collaterals, the catheter aims to enable a cold reperfusion, which mitigates the risk of a reperfusion injury. However, cerebral circulation is highly patient-specific and can vary greatly. Since direct measurement of the remaining perfusion and of the temperature decrease induced by the catheter is not possible without additional harm to the patient, computational modeling provides an alternative way to assess the resulting cerebral temperature decrease. In this work, we present a brain temperature model with a realistic division into gray and white matter and consideration of spatially resolved perfusion. Furthermore, it includes a detailed anatomy of the cerebral circulation with the possibility of personalization based on real patient anatomy. For evaluation of catheter performance in terms of cold reperfusion and to analyze its general performance, we calculated the decrease in brain temperature in the case of a large vessel occlusion in the middle cerebral artery (MCA) for different scenarios of cerebral arterial anatomy. Congenital arterial variations in the circle of Willis had a distinct influence on the cooling effect and the resulting spatial temperature distribution before vessel recanalization. Independent of the branching configurations, the model predicted a cold reperfusion due to a strong temperature decrease after recanalization (1.4-2.2 °C after 25 min of cooling, recanalization after 20 min of cooling).
Our model illustrates the effectiveness of endovascular cooling in combination with mechanical thrombectomy and its results serve as an adequate substitute for temperature measurement in a clinical setting in the absence of direct intraparenchymal temperature probes.
The contraction of the human heart is a complex process resulting from the interaction of internal and external forces. In current clinical routine, the resulting deformation can be imaged during an entire heart beat. However, the active tension development cannot be measured in vivo but may provide valuable diagnostic information. In this work, we present a novel numerical method for solving an inverse problem of cardiac biomechanics: estimating the dynamic active tension field, provided that the motion of the myocardial wall is known. This ill-posed non-linear problem is solved using second-order Tikhonov regularization in space and time. We conducted a sensitivity analysis by varying the fiber orientation in the range of measurement accuracy. To achieve RMSE <20% of the maximal tension, the fiber orientation needs to be provided with an accuracy of 10°. Also, variation was added to the deformation data in the range of segmentation accuracy. Here, imposing temporal regularization led to an eightfold decrease in the error down to 12%. Furthermore, non-contracting regions representing myocardial infarct scars were introduced in the left ventricle and could be identified accurately in the inverse solution (sensitivity >0.95). The results obtained with non-matching input data are promising and indicate directions for further improvement of the method. In the future, this method will be extended to estimate the active tension field based on motion data from clinical images, which could provide important insights in terms of a new diagnostic tool for the identification and treatment of diseased heart tissue.
Background: Hypertrophic cardiomyopathy (HCM) is typically caused by mutations in sarcomeric genes leading to cardiomyocyte disarray, replacement fibrosis, impaired contractility, and elevated filling pressures. These varying tissue properties are associated with certain strain patterns that may allow a diagnosis to be established by means of non-invasive imaging without the necessity of harmful myocardial biopsies or contrast agent application. With a numerical study, we aim to answer how the variability in each of these mechanisms contributes to altered mechanics of the left ventricle (LV) and whether the deformation obtained in in-silico experiments is comparable to values reported from clinical measurements. Methods: We conducted an in-silico sensitivity study on physiological and pathological mechanisms potentially underlying the clinical HCM phenotype. The deformation of the four-chamber heart models was simulated using a finite-element mechanical solver with a sliding boundary condition to mimic the tissue surrounding the heart. Furthermore, a closed-loop circulatory model delivered the pressure values acting on the endocardium. Deformation measures and mechanical behavior of the heart models were evaluated globally and regionally. Results: Hypertrophy of the LV affected the course of strain, strain rate, and wall thickening: the root-mean-squared difference of the wall thickening between control (mean thickness 10 mm) and hypertrophic geometries (17 mm) was >10%. A reduction of active force development by 40% led to less overall deformation: maximal radial strain reduced from 26 to 21%. A fivefold increase in tissue stiffness caused a more homogeneous distribution of the strain values among 17 heart segments. Fiber disarray led to minor changes in the circumferential and radial strain. A combination of pathological mechanisms led to reduced and slower deformation of the LV and halved the longitudinal shortening of the LA.
Conclusions: This study uses a computer model to determine the changes in LV deformation caused by pathological mechanisms that are presumed to underlie HCM. This knowledge can complement imaging-derived information to obtain a more accurate diagnosis of HCM.
The term "In Silico Trial" indicates the use of computer modelling and simulation to evaluate the safety and efficacy of a medical product, whether a drug, a medical device, a diagnostic product or an advanced therapy medicinal product. Predictive models are positioned as new methodologies for the development and the regulatory evaluation of medical products. New methodologies are qualified by regulators such as FDA and EMA through formal processes, where a first step is the definition of the Context of Use (CoU), which is a concise description of how the new methodology is intended to be used in the development and regulatory assessment process. As In Silico Trials are a disruptively innovative class of new methodologies, it is important to have a list of possible CoUs highlighting potential applications for the development of the relative regulatory science. This review paper presents the result of a consensus process that took place in the InSilicoWorld Community of Practice, an online forum for experts in in silico medicine. The experts involved identified 46 descriptions of possible CoUs which were organised into a candidate taxonomy of nine CoU categories. Examples of 31 CoUs were identified in the available literature; the remaining 15 should, for now, be considered speculative.
L. Azzolin, S. Schuler, O. Dössel, and A. Loewe. A Reproducible Protocol to Assess Arrhythmia Vulnerability: Pacing at the End of the Effective Refractory Period. In Frontiers in Physiology, vol. 12, pp. 656411, 2021
In both clinical and computational studies, different pacing protocols are used to induce arrhythmia, and non-inducibility is often considered as the endpoint of treatment. The need for a standardized methodology is urgent since the choice of the protocol used to induce arrhythmia could lead to contrasting results, e.g., in assessing atrial fibrillation (AF) vulnerability. Therefore, we propose a novel method, pacing at the end of the effective refractory period (PEERP), and compare it to state-of-the-art protocols, such as phase singularity distribution (PSD) and rapid pacing (RP), in a computational study. All methods were tested by pacing from evenly distributed endocardial points at 1 cm inter-point distance in two bi-atrial geometries. Seven different atrial models were implemented: five cases without specific AF-induced remodeling but with decreasing global conduction velocity and two persistent AF cases with an increasing amount of fibrosis resembling different substrate remodeling stages. Compared with PSD and RP, PEERP induced a larger variety of arrhythmia complexity requiring, on average, only 2.7 extra-stimuli and 3 s of simulation time to initiate reentry. Moreover, PEERP and PSD were the protocols which unveiled a larger number of areas vulnerable to sustain stable long living reentries compared to RP. Finally, PEERP can foster standardization and reproducibility, since, in contrast to the other protocols, it is a parameter-free method. Furthermore, we discuss its clinical applicability. We conclude that the choice of the inducing protocol has an influence on both initiation and maintenance of AF and we propose and provide PEERP as a reproducible method to assess arrhythmia vulnerability.
A. Wachter, J. Kost, and W. Nahm. Simulation-Based Estimation of the Number of Cameras Required for 3D Reconstruction in a Narrow-Baseline Multi-Camera Setup. In Journal of Imaging, vol. 7(5) , pp. 87, 2021
Graphical visualization systems are a common clinical tool for displaying digital images and three-dimensional volumetric data. These systems provide a broad spectrum of information to support physicians in their clinical routine. For example, the field of radiology enjoys unrestricted options for interaction with the data, since information is pre-recorded and available entirely in digital form. However, some fields, such as microsurgery, do not benefit from this yet. Microscopes, endoscopes, and laparoscopes show the surgical site as it is. To allow free data manipulation and information fusion, 3D digitization of surgical sites is required. We aimed to find the number of cameras needed to add this functionality to surgical microscopes. For this, we performed in silico simulations of the 3D reconstruction of representative models of microsurgical sites with different numbers of cameras in narrow-baseline setups. Our results show that eight independent camera views are preferable, while at least four are necessary for a digital surgical site. In most cases, eight cameras allow the reconstruction of over 99% of the visible part. With four cameras, still over 95% can be achieved. This answers one of the key questions for the development of a prototype microscope. In the future, such a system can provide functionality that is unattainable today.
Background: Rate-varying S1S2 stimulation protocols can be used for restitution studies to characterize atrial substrate, ionic remodeling, and atrial fibrillation risk. Clinical restitution studies with numerous patients create large amounts of these data. Thus, an automated pipeline to evaluate clinically acquired S1S2 stimulation protocol data necessitates consistent, robust, reproducible, and precise evaluation of local activation times, electrogram amplitude, and conduction velocity. Here, we present the CVAR-Seg pipeline, developed focusing on three challenges: (i) No previous knowledge of the stimulation parameters is available, thus, arbitrary protocols are supported. (ii) The pipeline remains robust under different noise conditions. (iii) The pipeline supports segmentation of atrial activities in close temporal proximity to the stimulation artifact, which is challenging due to larger amplitude and slope of the stimulus compared to the atrial activity. Methods and Results: The S1 basic cycle length was estimated by time interval detection. Stimulation time windows were segmented by detecting synchronous peaks in different channels surpassing an amplitude threshold and identifying time intervals between detected stimuli. Elimination of the stimulation artifact by a matched filter allowed detection of local activation times in temporal proximity. A non-linear signal energy operator was used to segment periods of atrial activity. Geodesic and Euclidean inter electrode distances allowed approximation of conduction velocity. The automatic segmentation performance of the CVAR-Seg pipeline was evaluated on 37 synthetic datasets with decreasing signal-to-noise ratios. Noise was modeled by reconstructing the frequency spectrum of clinical noise. The pipeline retained a median local activation time error below a single sample (1 ms) for signal-to-noise ratios as low as 0 dB representing a high clinical noise level. 
As a proof of concept, the pipeline was tested on a CARTO case of a paroxysmal atrial fibrillation patient and yielded plausible restitution curves for conduction speed and amplitude. Conclusion: The proposed openly available CVAR-Seg pipeline promises fast, fully automated, robust, and accurate evaluations of atrial signals even with low signal-to-noise ratios. This is achieved by solving the proximity problem of stimulation and atrial activity to enable standardized evaluation without introducing human bias for large data sets.
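A minimal sketch of the activity-segmentation idea with a nonlinear energy operator, here the Teager-Kaiser variant applied to a synthetic burst on a flat baseline (the pipeline's exact operator, filtering, and thresholds may differ):

```python
import numpy as np

def nleo(x):
    # Teager-Kaiser nonlinear energy operator, one common NLEO variant:
    # psi[n] = x[n]^2 - x[n-1]*x[n+1]; it emphasizes high-amplitude,
    # high-frequency segments such as atrial deflections
    psi = np.zeros_like(x, dtype=float)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

fs = 1000  # Hz, assumed sampling rate
t = np.arange(0, 1, 1 / fs)
# flat baseline with a short oscillatory "atrial activity" burst
x = np.where((t > 0.4) & (t < 0.45), np.sin(2 * np.pi * 100 * t), 0.0)

psi = nleo(x)
active = psi > 0.1 * psi.max()   # simple threshold to segment the activity
print(t[active].min(), t[active].max())
```

The detected active window coincides with the inserted burst; in practice the operator output would be smoothed before thresholding.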
A. Naber, M. Reiß, and W. Nahm. Transit Time Measurement in Indicator Dilution Curves: Overcoming the Missing Ground Truth and Quantifying the Error. In Frontiers in Physiology, vol. 12, pp. 1-16, 2021
The vascular function of a vessel can be qualitatively and intraoperatively checked by recording the blood dynamics inside the vessel via fluorescence angiography (FA). Although FA is the state of the art in proving the existence of blood flow during interventions such as bypass surgery, it still lacks a quantitative blood flow measurement that could decrease the recurrence rate and postsurgical mortality. Previous approaches show that the measured flow has a significant deviation compared to the gold standard reference (ultrasonic flow meter). In order to systematically address the possible sources of error, we investigated the error in transit time measurement of an indicator. Obtaining in vivo indicator dilution curves with a known ground truth is complex and often not possible. Further, the error in transit time measurement should be quantified and reduced. To tackle both issues, we first computed many diverse indicator dilution curves using an in silico simulation of the indicator's flow. Second, we post-processed these curves to mimic measured signals. Finally, we fitted mathematical models (parabola, gamma variate, local density random walk, and mono-exponential model) to re-continualize the obtained discrete indicator dilution curves and calculate the time delay of two analytical functions. This re-continualization showed an increase in the temporal accuracy up to a sub-sample accuracy. The Local Density Random Walk (LDRW) model performed best, using the cross-correlation of the first derivatives of both indicator curves after cutting the data at 40% of the peak intensity. The error in frames depends on the noise level; for a signal-to-noise ratio (SNR) of 20 dB and a sampling rate of f_s = 60 Hz, it is 0.25 (±0.18) · 1/f_s, so this error is smaller than the distance between two consecutive samples.
The accurate determination of the transit time and the quantification of the error allow the calculation of the error propagation onto the flow measurement. Both can assist surgeons as an intraoperative quality check and thereby reduce the recurrence rate and post-surgical mortality.
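Sub-sample transit time estimation can be sketched with cross-correlation plus parabolic peak interpolation, a simple stand-in for the model-based re-continualization described above (synthetic Gaussian dilution curves with invented parameters):

```python
import numpy as np

def subsample_delay(a, b, fs):
    # cross-correlate and refine the integer-lag peak with a parabolic
    # fit to reach sub-sample accuracy
    c = np.correlate(b, a, mode="full")
    k = int(np.argmax(c))
    if 0 < k < len(c) - 1:
        denom = c[k - 1] - 2 * c[k] + c[k + 1]
        frac = 0.5 * (c[k - 1] - c[k + 1]) / denom if denom else 0.0
    else:
        frac = 0.0
    lag = k - (len(a) - 1) + frac    # lag of b relative to a, in samples
    return lag / fs                  # delay in seconds

fs = 60.0  # Hz, sampling rate as in the study
t = np.arange(0, 5, 1 / fs)
proximal = np.exp(-((t - 2.0) ** 2) / 0.1)   # upstream dilution curve
distal = np.exp(-((t - 2.5) ** 2) / 0.1)     # downstream curve, 0.5 s later
print(subsample_delay(proximal, distal, fs))
```

The recovered delay matches the imposed 0.5 s shift; with noisy, asymmetric curves the model-fitting approach of the paper becomes important.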
Mathematical models of the human heart are evolving to become a cornerstone of precision medicine and support clinical decision making by providing a powerful tool to understand the mechanisms underlying pathophysiological conditions. In this study, we present a detailed mathematical description of a fully coupled multi-scale model of the human heart, including electrophysiology, mechanics, and a closed-loop model of circulation. State-of-the-art models based on human physiology are used to describe membrane kinetics, excitation-contraction coupling and active tension generation in the atria and the ventricles. Furthermore, we highlight ways to adapt this framework to patient-specific measurements to build digital twins. The validity of the model is demonstrated through simulations on a personalized whole heart geometry based on magnetic resonance imaging data of a healthy volunteer. Additionally, the fully coupled model was employed to evaluate the effects of a typical atrial ablation scar on the cardiovascular system. With this work, we provide an adaptable multi-scale model that allows a comprehensive personalization from ion channels to the organ level, enabling digital twin modeling.
In patients with atrial fibrillation, intracardiac electrogram signal amplitude is known to decrease with increased structural tissue remodeling, referred to as fibrosis. In addition to the isolation of the pulmonary veins, fibrotic sites are considered a suitable target for catheter ablation. However, it remains an open challenge to find fibrotic areas and to differentiate their density and transmurality. This study aims to identify the volume fraction and transmurality of fibrosis in the atrial substrate. Simulated cardiac electrograms, combined with a generalized model of clinical noise, reproduce clinically measured signals. Our hybrid dataset approach combines simulated and clinical electrograms to train a decision tree classifier to characterize the fibrotic atrial substrate. This approach captures different dynamics of the electrical propagation reflected in healthy electrogram morphology and synergistically combines it with synthetic fibrotic electrograms from experiments. The machine learning algorithm was tested on five patients and compared against clinical voltage maps as a proof of concept, distinguishing non-fibrotic from fibrotic tissue and characterizing the patient's fibrotic tissue in terms of density and transmurality. The proposed approach can be used to overcome a single voltage cut-off value to identify fibrotic tissue and guide ablation targeting fibrotic areas.
Aims Atrial cardiomyopathy (ACM) is associated with new-onset atrial fibrillation, arrhythmia recurrence after pulmonary vein isolation (PVI) and increased risk for stroke. At present, diagnosis of ACM is feasible by endocardial contact mapping of left atrial (LA) low-voltage substrate (LVS) or late gadolinium-enhanced magnetic resonance imaging, but their complexity limits a widespread use. The aim of this study was to assess non-invasive body surface electrocardiographic imaging (ECGI) as a novel clinical tool for diagnosis of ACM compared with endocardial mapping. Methods and results Thirty-nine consecutive patients (66 ± 9 years, 85% male) presenting for their first PVI for persistent atrial fibrillation underwent ECGI in sinus rhythm using a 252-electrode-array mapping system. Subsequently, high-density LA voltage and biatrial activation maps (mean 2090 ± 488 sites) were acquired in sinus rhythm prior to PVI. Freedom from arrhythmia recurrence was assessed within 12 months follow-up. Increased duration of total atrial conduction time (TACT) in ECGI was associated with both increased atrial activation time and extent of LA-LVS in endocardial contact mapping (r = 0.77 and r = 0.66, P < 0.0001 respectively). Atrial cardiomyopathy was found in 23 (59%) patients. A TACT value of 148 ms identified ACM with 91.3% sensitivity and 93.7% specificity. Arrhythmia recurrence occurred in 15 (38%) patients during a follow-up of 389 ± 55 days. Freedom from arrhythmia was significantly higher in patients with a TACT <148 ms compared with patients with a TACT ≥148 ms (82.4% vs. 45.5%, P = 0.019). Conclusion Analysis of TACT in non-invasive ECGI allows diagnosis of patients with ACM, which is associated with a significantly increased risk for arrhythmia recurrence following PVI.
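The reported cutoff behaviour can be reproduced schematically; the sketch below computes sensitivity and specificity of a TACT threshold on invented values (not the study data):

```python
def sens_spec(values, labels, cutoff):
    # classify as ACM-positive when TACT >= cutoff (in ms);
    # labels mark the endocardial-mapping reference diagnosis
    tp = sum(v >= cutoff and l for v, l in zip(values, labels))
    fn = sum(v < cutoff and l for v, l in zip(values, labels))
    tn = sum(v < cutoff and not l for v, l in zip(values, labels))
    fp = sum(v >= cutoff and not l for v, l in zip(values, labels))
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical TACT measurements (ms) and reference ACM labels
tact = [120, 135, 150, 160, 170, 140, 155, 130]
acm = [False, False, True, True, True, False, True, False]
print(sens_spec(tact, acm, cutoff=148))   # (sensitivity, specificity)
```

On these toy values the 148 ms cutoff separates the groups perfectly; the study's 91.3%/93.7% figures reflect the overlap present in real patients.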
S. Schuler, N. Pilia, D. Potyagaylo, and A. Loewe. Cobiveco: Consistent biventricular coordinates for precise and intuitive description of position in the heart – with MATLAB implementation. In Medical Image Analysis, vol. 74, pp. 102247, 2021
Ventricular coordinates are widely used as a versatile tool for various applications that benefit from a description of local position within the heart. However, the practical usefulness of ventricular coordinates is determined by their ability to meet application-specific requirements. For regression-based estimation of biventricular position, for example, a symmetric definition of coordinate directions in both ventricles is important. For the transfer of data between different hearts as another use case, the consistency of coordinate values across different geometries is particularly relevant. To meet these requirements, we compare different approaches to compute coordinates and present Cobiveco, a symmetric, consistent and intuitive biventricular coordinate system that builds upon existing coordinate systems, but overcomes some of their limitations. A novel one-way transfer error is introduced to assess the consistency of the coordinates. Normalized distances along bijective trajectories between two boundaries were found to be superior to solutions of Laplace’s equation for defining coordinate values, as they show better linearity in space. Evaluation of transfer and linearity errors on 36 patient geometries revealed a more than 4-fold improvement compared to a state-of-the-art method. Finally, we show two application examples underlining the relevance for cardiac data processing. Cobiveco MATLAB code is available under a permissive open-source license.
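The coordinate definition preferred here, normalized distance along a trajectory between two boundaries, can be sketched in a few lines (a simplified 2D stand-in for the mesh-based implementation):

```python
import numpy as np

def trajectory_coordinate(points):
    # normalized arclength along a sampled trajectory between two
    # boundaries: 0 at the first boundary, 1 at the second; linear in
    # space along the trajectory by construction, unlike a raw Laplace
    # solution between the same boundaries
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    s = np.concatenate(([0.0], np.cumsum(seg)))
    return s / s[-1]

# straight trajectory sampled at 5 points between two boundaries
traj = np.array([[0, 0], [1, 0], [2, 0], [3, 0], [4, 0]], float)
print(trajectory_coordinate(traj))   # 0, 0.25, 0.5, 0.75, 1
```

On real geometries the trajectories are curved and bijective between the two boundaries, but the normalization step is the same.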
The ECG is one of the most commonly used non-invasive tools to gain insights into the electrical functioning of the heart. It has been crucial as a foundation in the creation and validation of in silico models describing the underlying electrophysiological processes. However, so far, the contraction of the heart and its influences on the ECG have mainly been overlooked in in silico models. As the heart contracts and moves, so do the electrical sources within the heart responsible for the signal on the body surface, thus potentially altering the ECG. To illuminate these aspects, we developed a human 4-chamber electro-mechanically coupled whole heart in silico model and embedded it within a torso model. Our model faithfully reproduces measured 12-lead ECG traces, circulatory characteristics, as well as physiological ventricular rotation and atrioventricular valve plane displacement. We compare our dynamic model to three non-deforming ones in terms of standard clinically used ECG leads (Einthoven and Wilson) and body surface potential maps (BSPM). The non-deforming models consider the heart at its ventricular end-diastatic, end-diastolic and end-systolic states. The standard leads show negligible differences during P-Wave and QRS-Complex, yet during T-Wave the leads closest to the heart show prominent differences in amplitude. When looking at the BSPM, there are no notable differences during the P-Wave, but effects of cardiac motion can be observed already during the QRS-Complex, increasing further during the T-Wave. We conclude that for the modeling of activation (P-Wave/QRS-Complex), the associated effort of simulating a complete electro-mechanical approach is not worth the computational cost. But when looking at ventricular repolarization (T-Wave) in standard leads as well as BSPM, there are areas where the signal can be influenced by cardiac motion of the heart to an extent that should not be ignored.
We aim to provide a critical appraisal of basic concepts underlying signal recording and processing technologies applied for (i) atrial fibrillation (AF) mapping to unravel AF mechanisms and/or identify target sites for AF therapy and (ii) AF detection, to optimize usage of technologies, stimulate research aimed at closing knowledge gaps, and develop ideal AF recording and processing technologies. Recording and processing techniques for assessment of electrical activity during AF essential for diagnosis and guiding ablative therapy, including body surface electrocardiograms (ECG) and endo- or epicardial electrograms (EGM), are evaluated. Discussion of (i) differences in uni-, bi-, and multi-polar (omnipolar/Laplacian) recording modes, (ii) impact of recording technologies on EGM morphology, (iii) global or local mapping using various types of EGM involving signal processing techniques including isochronal-, voltage-, fractionation-, dipole density-, and rotor mapping, enabling derivation of parameters like atrial rate, entropy, conduction velocity/direction, (iv) value of epicardial and optical mapping, (v) AF detection by cardiac implantable electronic devices containing various detection algorithms applicable to stored EGMs, (vi) contribution of machine learning (ML) to further improvement of signal processing technologies. Recording and processing of EGM (or ECG) are the cornerstones of (body surface) mapping of AF. Currently available AF recording and processing technologies are mainly restricted to specific applications or have technological limitations. Improving AF mapping by obtaining the highest-fidelity source signals (e.g. catheter–electrode combinations) for signal processing (e.g. filtering, digitization, and noise elimination) is of utmost importance.
Novel acquisition instruments (multi-polar catheters combined with improved physical modelling and ML techniques) will enable enhanced and automated interpretation of EGM recordings in the near future.
Atrial flutter (AFL) is a common atrial arrhythmia typically characterized by electrical activity propagating around specific anatomical regions. It is usually treated with catheter ablation. However, the identification of rotational activities is not straightforward, and requires an intense effort during the first phase of the electrophysiological (EP) study, i.e., the mapping phase, in which an anatomical 3D model is built and electrograms (EGMs) are recorded. In this study, we modeled the electrical propagation pattern of AFL (measured during mapping) using network theory (NT), a well-known field of research from the computer science domain. The main advantage of NT is the large number of available algorithms that can efficiently analyze the network. Using directed network mapping, we employed a cycle-finding algorithm to detect all cycles in the network, resembling the main propagation pattern of AFL. The method was tested on two subjects in sinus rhythm, six in an experimental model of in-silico simulations, and 10 subjects diagnosed with AFL who underwent a catheter ablation. The algorithm correctly detected the electrical propagation of both sinus rhythm cases and in-silico simulations. Regarding the AFL cases, arrhythmia mechanisms were either totally or partially identified in most of the cases (8 out of 10), i.e., cycles around the mitral valve, tricuspid valve and figure-of-eight reentries. The other two cases presented a poor mapping quality or a major complexity related to previous ablations, large areas of fibrotic tissue, etc. Directed network mapping represents an innovative tool that showed promising results in identifying AFL mechanisms in an automatic fashion. Further investigations are needed to assess the reliability of the method in different clinical scenarios.
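Cycle detection on a directed activation network can be sketched with a naive DFS cycle enumeration (Johnson's algorithm, as available in graph libraries, scales better; the toy network below is invented):

```python
def find_cycles(graph):
    # enumerate simple cycles in a directed graph by DFS; each cycle is
    # reported once, rotated to start at its smallest node
    cycles = set()

    def dfs(start, node, path, visited):
        for nxt in graph.get(node, []):
            if nxt == start and len(path) > 1:
                i = path.index(min(path))
                cycles.add(tuple(path[i:] + path[:i]))
            elif nxt not in visited and nxt > start:
                # only extend with nodes larger than the start node, so
                # every cycle is found exactly once (from its minimum)
                dfs(start, nxt, path + [nxt], visited | {nxt})

    for v in graph:
        dfs(v, v, [v], {v})
    return cycles

# toy activation network: nodes are mapping sites, edges follow the
# direction of measured activation (hypothetical example)
net = {1: [2], 2: [3], 3: [1, 4], 4: [5], 5: [4]}
print(find_cycles(net))   # two re-entrant loops: (1, 2, 3) and (4, 5)
```

In directed network mapping, such cycles correspond to candidate re-entrant circuits, e.g. loops around the mitral or tricuspid valve.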
The human heart is a masterpiece of the highest complexity coordinating multi-physics aspects on a multi-scale range. Thus, modeling the cardiac function to reproduce physiological characteristics and diseases remains challenging. In particular, the complex simulation of the blood's hemodynamics and its interaction with the myocardial tissue requires a high accuracy of the underlying computational models and solvers. These demanding aspects make whole-heart fully-coupled simulations computationally highly expensive and call for simpler but still accurate models. While the mechanical deformation during the heart cycle drives the blood flow, less is known about the feedback of the blood flow onto the myocardial tissue. To solve the fluid-structure interaction problem, we suggest a cycle-to-cycle coupling of the structural deformation and the fluid dynamics. In a first step, the displacement of the endocardial wall in the mechanical simulation serves as a unidirectional boundary condition for the fluid simulation. After a complete heart cycle of fluid simulation, a spatially resolved pressure factor (PF) is extracted and returned to the next iteration of the solid mechanical simulation, closing the loop of the iterative coupling procedure. All simulations were performed on an individualized whole heart geometry. The effect of the sequential coupling was assessed by global measures such as the change in deformation and, as an example of diagnostically relevant information, the particle residence time. The mechanical displacement was up to 2 mm after the first iteration. In the second iteration, the deviation was in the sub-millimeter range, implying that already one iteration of the proposed cycle-to-cycle coupling is sufficient to converge to a coupled limit cycle. Cycle-to-cycle coupling between cardiac mechanics and fluid dynamics can be a promising approach to account for fluid-structure interaction with low computational effort.
In an individualized healthy whole-heart model, one iteration sufficed to obtain converged and physiologically plausible results.
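A minimal sketch of the cycle-to-cycle coupling loop described above; the two solver callables and the convergence numbers are placeholders, not the actual mechanics and fluid solvers:

```python
def solid_cycle(pressure_factor):
    """Hypothetical mechanics cycle: returns endocardial wall displacement (mm)."""
    return 10.0 + 2.0 * (pressure_factor - 1.0)

def fluid_cycle(displacement):
    """Hypothetical fluid cycle: returns a spatially averaged pressure factor."""
    return 1.0 + 0.05 * (displacement - 10.0)

pf, prev_disp = 1.2, None
for it in range(1, 6):
    disp = solid_cycle(pf)   # mechanics driven by the previous cycle's PF
    pf = fluid_cycle(disp)   # fluid driven by this cycle's wall motion
    if prev_disp is not None and abs(disp - prev_disp) < 0.1:
        break                # sub-0.1 mm change: coupled limit cycle reached
    prev_disp = disp
print(it, round(disp, 3))    # converges after a few iterations
```

With these toy response functions the displacement change drops below the threshold after very few cycles, mirroring the fast convergence reported above.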
Electrical impedance tomography is clinically used to trace ventilation related changes in electrical conductivity of lung tissue. Estimating regional pulmonary perfusion using electrical impedance tomography is still a matter of research. To support clinical decision making, reliable bedside information of pulmonary perfusion is needed. We introduce a method to robustly detect pulmonary perfusion based on indicator-enhanced electrical impedance tomography and validate it by dynamic multidetector computed tomography in two experimental models of acute respiratory distress syndrome. The acute injury was induced in a sublobar segment of the right lung by saline lavage or endotoxin instillation in eight anesthetized mechanically ventilated pigs. For electrical impedance tomography measurements, a conductive bolus (10% saline solution) was injected into the right ventricle during breath hold. Electrical impedance tomography perfusion images were reconstructed by linear and normalized Gauss-Newton reconstruction on a finite element mesh with subsequent element-wise signal and feature analysis. An iodinated contrast agent was used to compute pulmonary blood flow via dynamic multidetector computed tomography. Spatial perfusion was estimated based on first-pass indicator dilution for both electrical impedance and multidetector computed tomography and compared by Pearson correlation and Bland-Altman analysis. Strong correlation was found in dorsoventral (r = 0.92) and in right-to-left directions (r = 0.85) with good limits of agreement of 8.74% in eight lung segments. With a robust electrical impedance tomography perfusion estimation method, we found strong agreement between multidetector computed and electrical impedance tomography perfusion in healthy and regionally injured lungs and demonstrated feasibility of electrical impedance tomography perfusion imaging.
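The agreement analysis described above (Pearson correlation plus Bland-Altman bias and 95% limits of agreement) can be illustrated with made-up per-segment perfusion percentages; the numbers below are not the study's data:

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5

eit = [10.0, 14.0, 18.0, 22.0, 26.0, 30.0, 34.0, 38.0]  # % perfusion per segment (EIT)
ct  = [11.0, 13.0, 19.0, 21.0, 27.0, 29.0, 35.0, 37.0]  # % perfusion per segment (MDCT)

r = pearson(eit, ct)

# Bland-Altman: mean difference (bias) and 95% limits of agreement
diffs = [a - b for a, b in zip(eit, ct)]
bias = sum(diffs) / len(diffs)
sd = (sum((d - bias) ** 2 for d in diffs) / (len(diffs) - 1)) ** 0.5
loa = (bias - 1.96 * sd, bias + 1.96 * sd)
print(round(r, 3), round(bias, 2), tuple(round(x, 2) for x in loa))
```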
The electrocardiogram (ECG) is a standard cost-efficient and non-invasive tool for the early detection of various cardiac diseases. Quantifying different timing and amplitude features of and in between the single ECG waveforms can reveal important information about the underlying (dys-)function of the heart. Determining these features requires the detection of fiducial points that mark the onset, offset, and peak of each ECG waveform (P wave, QRS complex, T wave). Manually setting these points is time-consuming and requires a physician's expert knowledge. Therefore, the highly modular ECGdeli toolbox for MATLAB was developed, which is capable of filtering clinically recorded 12-lead ECG signals and detecting the fiducial points, a process also called delineation. It is one of the few open toolboxes offering ECG delineation for P waves, T waves, and QRS complexes. The algorithms provided were evaluated with the QT database, an ECG database comprising 105 signals with fiducial points annotated by clinicians. The median difference between the fiducial points set by the boundary detection algorithm and the clinical annotations serving as a ground truth is less than 4 samples (16 ms) for the P wave and the QRS complex markers.
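The reported benchmark, i.e., the median difference in samples between detected and annotated fiducial points, can be illustrated as follows. The QT database is sampled at 250 Hz (one sample equals 4 ms, so 4 samples correspond to 16 ms); the sample indices below are invented:

```python
import statistics as st

FS = 250  # QT database sampling rate in Hz -> one sample = 4 ms

annotated = [112, 350, 598, 843, 1090]  # expert-set P-wave onsets (sample indices)
detected  = [110, 352, 599, 841, 1093]  # toy delineation output (sample indices)

errs = [abs(d - a) for d, a in zip(detected, annotated)]
median_samples = st.median(errs)
median_ms = median_samples * 1000 / FS
print(median_samples, median_ms)  # → 2 8.0
```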
Computer modeling of the electrophysiology of the heart has undergone significant progress. A healthy heart can be modeled starting from the ion channels via the spread of a depolarization wave on a realistic geometry of the human heart up to the potentials on the body surface and the ECG. Research is advancing regarding modeling diseases of the heart. This article reviews progress in calculating and analyzing the corresponding electrocardiogram (ECG) from simulated depolarization and repolarization waves. First, we describe modeling of the P-wave, the QRS complex and the T-wave of a healthy heart. Then, both the modeling and the corresponding ECGs of several important diseases and arrhythmias are delineated: ischemia and infarction, ectopic beats and extrasystoles, ventricular tachycardia, bundle branch blocks, atrial tachycardia, flutter and fibrillation, genetic diseases and channelopathies, imbalance of electrolytes and drug-induced changes. Finally, we outline the potential impact of computer modeling on ECG interpretation. Computer modeling can contribute to a better comprehension of the relation between features in the ECG and the underlying cardiac condition and disease. It can pave the way for a quantitative analysis of the ECG and can support the cardiologist in identifying events or non-invasively localizing diseased areas. Finally, it can deliver very large databases of reliably labeled ECGs as training data for machine learning.
C. Nagel, S. Schuler, O. Dössel, and A. Loewe. A bi-atrial statistical shape model for large-scale in silico studies of human atria: model development and application to ECG simulations. In Medical Image Analysis, vol. 74, pp. 102210, 2021
Large-scale electrophysiological simulations to obtain electrocardiograms (ECG) carry the potential to produce extensive datasets for training of machine learning classifiers to, e.g., discriminate between different cardiac pathologies. The adoption of simulations for these purposes is limited due to a lack of ready-to-use models covering atrial anatomical variability. We built a bi-atrial statistical shape model (SSM) of the endocardial wall based on 47 segmented human CT and MRI datasets using Gaussian process morphable models. Generalization, specificity, and compactness metrics were evaluated. The SSM was applied to simulate atrial ECGs in 100 random volumetric instances. The first eigenmode of our SSM reflects a change of the total volume of both atria, the second the asymmetry between left vs. right atrial volume, the third a change in the prominence of the atrial appendages. The SSM is capable of generalizing well to unseen geometries and 95% of the total shape variance is covered by its first 24 eigenvectors. The P waves in the 12-lead ECG of 100 random instances showed a duration of 109.7 ± 12.2 ms in accordance with large cohort studies. The novel bi-atrial SSM itself as well as 100 exemplary instances with rule-based augmentation of atrial wall thickness, fiber orientation, inter-atrial bridges and tags for anatomical structures have been made publicly available. This novel, openly available bi-atrial SSM can in future be employed to generate large sets of realistic atrial geometries as a basis for in silico big data approaches.
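Two standard SSM operations mentioned above, drawing a random instance from the eigenmodes and counting how many eigenvectors cover 95% of the variance, can be sketched with a tiny made-up model; the mean shape, eigenvectors, and eigenvalues below are stand-ins, not the published bi-atrial model:

```python
import numpy as np

rng = np.random.default_rng(0)

mean_shape = np.zeros(6)                          # flattened (x, y, z) coordinates
eigvals = np.array([9.0, 4.0, 1.0, 0.5])          # variance captured per mode
Q = np.linalg.qr(rng.standard_normal((6, 4)))[0]  # orthonormal eigenvector columns

# Random instance: mean + sum_i sqrt(lambda_i) * z_i * v_i with z ~ N(0, 1)
z = rng.standard_normal(4)
instance = mean_shape + Q @ (np.sqrt(eigvals) * z)

# Number of leading modes needed to cover 95% of the total shape variance
frac = np.cumsum(eigvals) / eigvals.sum()
n95 = int(np.searchsorted(frac, 0.95) + 1)
print(instance.shape, n95)
```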
Background In acute respiratory distress syndrome (ARDS), non-ventilated perfused regions coexist with non-perfused ventilated regions within lungs. The number of unmatched regions might reflect ARDS severity and affect the risk of ventilation-induced lung injury. Despite pathophysiological relevance, unmatched ventilation and perfusion are not routinely assessed at the bedside. The aims of this study were to quantify unmatched ventilation and perfusion at the bedside by electrical impedance tomography (EIT) investigating their association with mortality in patients with ARDS and to explore the effects of positive end-expiratory pressure (PEEP) on unmatched ventilation and perfusion in subgroups of patients with different ARDS severity based on PaO2/FiO2 and compliance. Methods Prospective observational study in 50 patients with mild (36%), moderate (46%), and severe (18%) ARDS under clinical ventilation settings. EIT was applied to measure the regional distribution of ventilation and perfusion using a central venous bolus of 5% saline during end-inspiratory pause. We defined unmatched units as the percentage of only ventilated units plus the percentage of only perfused units. Results The percentage of unmatched units was significantly higher in non-survivors compared to survivors (32 [27-47]% vs. 21 [17-27]%, p < 0.001). The percentage of unmatched units was an independent predictor of mortality (OR 1.22, 95% CI 1.07-1.39, p = 0.004) with an area under the ROC curve of 0.88 (95% CI 0.79-0.97, p < 0.001). The percentage of ventilation to the ventral region of the lung was higher than the percentage of ventilation to the dorsal region (32 [27-38]% vs. 18 [13-21]%, p < 0.001), while the opposite was true for perfusion (28 [22-38]% vs. 36 [32-44]%, p < 0.001). A higher percentage of only perfused units was correlated with lower dorsal ventilation (r = -0.486, p < 0.001) and with lower PaO2/FiO2 ratio (r = -0.293, p = 0.039).
Conclusions EIT allows bedside assessment of unmatched ventilation and perfusion in mechanically ventilated patients with ARDS. Measurement of unmatched units could identify patients at higher risk of death and could guide personalized treatment.
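The unmatched-units metric can be illustrated on toy binary ventilation and perfusion maps; normalizing by the number of active pixels is an assumption made for this sketch, not necessarily the study's exact definition:

```python
# 1 = active, 0 = inactive, per EIT pixel (invented toy data)
ventilated = [1, 1, 1, 0, 0, 1, 0, 0, 1, 1]
perfused   = [1, 1, 0, 1, 1, 1, 0, 1, 0, 0]

only_v = sum(v and not p for v, p in zip(ventilated, perfused))  # only ventilated
only_p = sum(p and not v for v, p in zip(ventilated, perfused))  # only perfused
active = sum(v or p for v, p in zip(ventilated, perfused))       # any activity

unmatched_pct = 100.0 * (only_v + only_p) / active
print(round(unmatched_pct, 1))
```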
Cranio-maxillofacial surgery often alters the aesthetics of the face which can be a heavy burden for patients to decide whether or not to undergo surgery. Today, physicians can predict the post-operative face using surgery planning tools to support the patient's decision-making. While these planning tools allow a simulation of the post-operative face, the facial texture must usually be captured by another 3D texture scan and subsequently mapped on the simulated face. This approach often results in face predictions that do not appear realistic or lifelike and are therefore ill-suited to guide the patient's decision-making. Instead, we propose a method using a generative adversarial network to modify a facial image according to a 3D soft-tissue estimation of the post-operative face. To circumvent the lack of available data pairs between pre- and post-operative measurements we propose a semi-supervised training strategy using cycle losses that only requires paired open-source data of images and 3D surfaces of the face's shape. After training on "in-the-wild" images we show that our model can realistically manipulate local regions of a face in a 2D image based on a modified 3D shape. We then test our model on four clinical examples where we predict the post-operative face according to a 3D soft-tissue prediction of surgery outcome, which was simulated by a surgery planning tool. As a result, we demonstrate the potential of our approach to predict realistic post-operative images of faces without the need of paired clinical data, physical models, or 3D texture scans.
J. Sánchez, B. Trenor, J. Saiz, O. Dössel, and A. Loewe. Fibrotic Remodeling during Persistent Atrial Fibrillation: In Silico Investigation of the Role of Calcium for Human Atrial Myofibroblast Electrophysiology. In Cells, vol. 10(11) , pp. 2852, 2021
During atrial fibrillation, cardiac tissue undergoes different remodeling processes at different scales from the molecular level to the tissue level. One central player that contributes to both electrical and structural remodeling is the myofibroblast. Based on recent experimental evidence on myofibroblasts’ ability to contract, we extended a biophysical myofibroblast model with Ca2+ handling components and studied the effect on cellular and tissue electrophysiology. Using genetic algorithms, we fitted the myofibroblast model parameters to the existing in vitro data. In silico experiments showed that Ca2+ currents can explain the experimentally observed variability regarding the myofibroblast resting membrane potential. The presence of an L-type Ca2+ current can trigger automaticity in the myofibroblast with a cycle length of 799.9 ms. Myocyte action potentials were prolonged when coupled to myofibroblasts with Ca2+ handling machinery. Different spatial myofibroblast distribution patterns increased the vulnerable window to induce arrhythmia from 12 ms in non-fibrotic tissue to 22 ± 2.5 ms and altered the reentry dynamics. Our findings suggest that Ca2+ handling can considerably affect myofibroblast electrophysiology and alter the electrical propagation in atrial tissue composed of myocytes coupled with myofibroblasts. These findings can inform experimental validation experiments to further elucidate the role of myofibroblast Ca2+ handling in atrial arrhythmogenesis.
This contribution is part of a project concerning the creation of an artificial dataset comprising 3D head scans of craniosynostosis patients for a deep-learning-based classification. To conform to real data, both head and neck are required in the 3D scans. However, during patient recording, the neck is often covered by medical staff. Simply pasting an arbitrary neck leaves large gaps in the 3D mesh. We therefore use a publicly available statistical shape model (SSM) for neck reconstruction. However, most SSMs of the head are constructed using healthy subjects, so the full head reconstruction loses the craniosynostosis-specific head shape. We propose a method to recover the neck while keeping the pathological head shape intact, using a Laplace-Beltrami-based refinement step to deform the posterior mean shape of the full head model towards the pathological head. The artificial neck is created using the publicly available Liverpool-York-Model. We apply our method to construct artificial necks for head scans of 50 scaphocephaly patients. Our method reduces the mean vertex correspondence error by approximately 1.3 mm compared to the ordinary posterior mean shape, preserves the pathological head shape, and creates a continuous transition between neck and head. The presented method showed good results for reconstructing a plausible neck for craniosynostosis patients. Easily generalized, it might also be applicable to other pathological shapes.
Book Chapters (5)
J. Steyer, L. P. M. Diaz, L. A. Unger, and A. Loewe. Simulated Excitation Patterns in the Atria and Their Corresponding Electrograms. In Functional Imaging and Modeling of the Heart, Springer Nature Switzerland, Cham, pp. 204-212, 2023
Cases of vaccine breakthrough, especially in variants of concern (VOCs) infections, are emerging in coronavirus disease (COVID-19). Due to mutations of structural proteins (SPs) (e.g., Spike proteins), increased transmissibility and risk of escaping from vaccine-induced immunity have been reported amongst the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Remdesivir was the first to be granted emergency use authorization but showed little impact on survival in patients with severe COVID-19. Remdesivir is a prodrug of the nucleoside analogue GS-441524 which is converted into the active nucleotide triphosphate to disrupt the viral genome of the conserved non-structural proteins (NSPs) and thus block viral replication. GS-441524 exerts a number of pharmacological advantages over Remdesivir: (1) it needs fewer conversions for bioactivation to nucleotide triphosphate; (2) it requires only nucleoside kinase, while Remdesivir requires several hepato-renal enzymes, for bioactivation; (3) it is a smaller molecule and has a potency for aerosol and oral administration; (4) it is less toxic allowing higher pulmonary concentrations; (5) it is easier to synthesize. The current article focuses on the discussion of interactions between GS-441524 and NSPs of VOCs to suggest potential application of GS-441524 in breakthrough SARS-CoV-2 infections.
A. Loewe, G. Luongo, and J. Sánchez. Machine Learning for Clinical Electrophysiology. In Innovative Treatment Strategies for Clinical Electrophysiology, Springer Nature Singapore, Singapore, pp. 93-109, 2022