The term "In Silico Trial" indicates the use of computer modelling and simulation to evaluate the safety and efficacy of a medical product, whether a drug, a medical device, a diagnostic product or an advanced therapy medicinal product. Predictive models are positioned as new methodologies for the development and the regulatory evaluation of medical products. New methodologies are qualified by regulators such as the FDA and EMA through formal processes, in which a first step is the definition of the Context of Use (CoU), a concise description of how the new methodology is intended to be used in the development and regulatory assessment process. As In Silico Trials are a disruptively innovative class of new methodologies, it is important to have a list of possible CoUs highlighting potential applications for the development of the related regulatory science. This review paper presents the result of a consensus process that took place in the InSilicoWorld Community of Practice, an online forum for experts in in silico medicine. The experts involved identified 46 descriptions of possible CoUs, which were organised into a candidate taxonomy of nine CoU categories. Examples of 31 CoUs were identified in the available literature; the remaining 15 should, for now, be considered speculative.
C. Nagel, S. Schuler, O. Dössel, and A. Loewe. A bi-atrial statistical shape model for large-scale in silico studies of human atria: model development and application to ECG simulations. In arXiv, 2021
Large-scale electrophysiological simulations to obtain electrocardiograms (ECG) carry the potential to produce extensive datasets for training of machine learning classifiers to, e.g., discriminate between different cardiac pathologies. The adoption of simulations for these purposes is limited due to a lack of ready-to-use models covering atrial anatomical variability. We built a bi-atrial statistical shape model (SSM) of the endocardial wall based on 47 segmented human CT and MRI datasets using Gaussian process morphable models. Generalization, specificity, and compactness metrics were evaluated. The SSM was applied to simulate atrial ECGs in 100 random volumetric instances. The first eigenmode of our SSM reflects a change of the total volume of both atria, the second the asymmetry between left vs. right atrial volume, the third a change in the prominence of the atrial appendages. The SSM is capable of generalizing well to unseen geometries, and 95% of the total shape variance is covered by its first 23 eigenvectors. The P waves in the 12-lead ECG of 100 random instances showed a duration of 104 ms, in accordance with large cohort studies. The novel bi-atrial SSM itself as well as 100 exemplary instances with rule-based augmentation of atrial wall thickness, fiber orientation, inter-atrial bridges and tags for anatomical structures have been made publicly available. The novel, openly available bi-atrial SSM can in the future be employed to generate large sets of realistic atrial geometries as a basis for in silico big data approaches.
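The sampling step of such a PCA-based statistical shape model can be sketched in a few lines: training shapes are reduced to eigenmodes, and new instances are drawn as the mean shape plus eigenvalue-weighted random combinations of the modes. The toy data, dimensions and the 95%-variance compactness check below are illustrative assumptions, not the published model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training set: 47 "shapes", each a flattened vector of 3-D landmark
# coordinates (here 30 landmarks -> 90 values). Real SSMs use registered
# surface meshes; all numbers are illustrative.
n_shapes, n_coords = 47, 90
base = rng.normal(size=n_coords)
shapes = base + 0.1 * rng.normal(size=(n_shapes, n_coords))

# PCA of the shape matrix: eigenmodes of variation around the mean shape.
mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
eigvals = s**2 / (n_shapes - 1)   # variance captured per eigenmode
modes = Vt                        # rows are eigenvectors (shape modes)

# Compactness: how many modes cover 95% of the total shape variance?
cum_var = np.cumsum(eigvals) / eigvals.sum()
n_95 = int(np.searchsorted(cum_var, 0.95) + 1)

# Sample a new random instance: mean + weighted sum of eigenmodes, with
# weights drawn from N(0, eigenvalue) as in a Gaussian shape model.
b = rng.normal(size=n_95) * np.sqrt(eigvals[:n_95])
new_shape = mean_shape + b @ modes[:n_95]

print(int(n_95))
```

For the real SSM the same mechanics apply, only with mesh vertices instead of toy landmarks and with generalization/specificity evaluated on held-out geometries.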
The contraction of the human heart is a complex process resulting from the interaction of internal and external forces. In current clinical routine, the resulting deformation can be imaged during an entire heart cycle. However, the active tension development cannot be measured in vivo but may provide valuable diagnostic information. In this work, we present a novel numerical method for solving an inverse problem of cardiac biomechanics: estimating the dynamic active tension field, provided the motion of the myocardial wall is known. This ill-posed non-linear problem is solved using second-order Tikhonov regularization in space and time. We conducted a sensitivity analysis by varying the fiber orientation in the range of measurement accuracy. To achieve an RMSE < 20% of the maximal tension, the fiber orientation needs to be provided with an accuracy of 10°. Also, variation was added to the deformation data in the range of segmentation accuracy. Here, imposing temporal regularization led to an eightfold decrease in the error, down to 12%. Furthermore, non-contracting regions representing myocardial infarct scars were introduced in the left ventricle and could be identified accurately in the inverse solution (sensitivity > 0.95). The results obtained with non-matching input data are promising and indicate directions for further improvement of the method. In the future, this method will be extended to estimate the active tension field based on motion data from clinical images, which could provide important insights towards a new diagnostic tool for the identification and treatment of diseased heart tissue.
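As a rough illustration of the regularization strategy (not the authors' actual solver), the sketch below stabilizes a small linear ill-posed problem with a second-order Tikhonov penalty on curvature; the forward operator, noise level and regularization weight are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(1)

# Ill-posed toy inverse problem y = A @ x: a smoothing forward operator
# (a stand-in for the biomechanics model) blurs the true "tension" profile.
n = 60
t = np.linspace(0, 1, n)
x_true = np.exp(-((t - 0.5) / 0.1) ** 2)                # smooth tension bump
A = np.exp(-((t[:, None] - t[None, :]) / 0.05) ** 2)    # blurring kernel
y = A @ x_true + 1e-3 * rng.normal(size=n)              # noisy "measurements"

# Second-order Tikhonov: minimize ||A x - y||^2 + lam^2 ||L x||^2, where L
# is the second-difference operator, i.e. the penalty acts on curvature.
L = np.diff(np.eye(n), 2, axis=0)                       # (n-2) x n
lam = 1e-2
x_hat = np.linalg.solve(A.T @ A + lam**2 * L.T @ L, A.T @ y)

# Normalized RMSE of the reconstruction against the ground truth
rmse = np.sqrt(np.mean((x_hat - x_true) ** 2)) / x_true.max()
print(round(float(rmse), 3))
```

In the paper the penalty is applied in space and time and the problem is non-linear; the closed-form normal-equations solve above only conveys the role of the regularizer.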
Ventricular coordinates are widely used as a versatile tool for various applications that benefit from a description of local position within the heart. However, the practical usefulness of ventricular coordinates is determined by their ability to meet application-specific requirements. For regression-based estimation of biventricular position, for example, a consistent definition of coordinate directions in both ventricles is important. For the transfer of data between different hearts as another use case, the coordinate values are required to be consistent across different geometries. Existing ventricular coordinate systems do not meet these requirements. We first compare different approaches to compute coordinates and then present Cobiveco, a consistent and intuitive biventricular coordinate system to overcome these drawbacks. A novel one-way mapping error is introduced to assess the consistency of the coordinates. Evaluation of mapping and linearity errors on 36 patient geometries showed a more than 4-fold improvement compared to a state-of-the-art method. Finally, we show two application examples underlining the relevance for cardiac data processing. Cobiveco MATLAB code is available under a permissive open-source license.
L. Azzolin, S. Schuler, O. Dössel, and A. Loewe. A Reproducible Protocol to Assess Arrhythmia Vulnerability: Pacing at the End of the Effective Refractory Period. In Frontiers in physiology, vol. 12, pp. 656411, 2021
In both clinical and computational studies, different pacing protocols are used to induce arrhythmia, and non-inducibility is often considered the endpoint of treatment. The need for a standardized methodology is urgent, since the choice of the protocol used to induce arrhythmia could lead to contrasting results, e.g., in assessing atrial fibrillation (AF) vulnerability. Therefore, we propose a novel method, pacing at the end of the effective refractory period (PEERP), and compare it to state-of-the-art protocols, such as phase singularity distribution (PSD) and rapid pacing (RP), in a computational study. All methods were tested by pacing from evenly distributed endocardial points at 1 cm inter-point distance in two bi-atrial geometries. Seven different atrial models were implemented: five cases without specific AF-induced remodeling but with decreasing global conduction velocity, and two persistent AF cases with an increasing amount of fibrosis resembling different substrate remodeling stages. Compared with PSD and RP, PEERP induced a larger variety of arrhythmia complexity, requiring, on average, only 2.7 extra-stimuli and 3 s of simulation time to initiate reentry. Moreover, PEERP and PSD were the protocols which unveiled a larger number of areas vulnerable to sustaining stable, long-living reentries compared to RP. Finally, PEERP can foster standardization and reproducibility, since, in contrast to the other protocols, it is a parameter-free method. Furthermore, we discuss its clinical applicability. We conclude that the choice of the inducing protocol has an influence on both initiation and maintenance of AF, and we propose and provide PEERP as a reproducible method to assess arrhythmia vulnerability.
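The core idea of pacing at the end of the effective refractory period can be sketched with a toy capture model: the ERP is narrowed down by bisecting the coupling interval (a stand-in for a clinical or simulated S1-S2 decrement protocol), and the next stimulus is then placed right at its end. `excitable()`, `find_erp()` and all numbers are hypothetical, not the published implementation:

```python
# Hedged sketch of the PEERP idea with a toy tissue model.

ERP_TRUE_MS = 212.0          # hidden "tissue" property (illustrative)

def excitable(coupling_ms: float) -> bool:
    """Toy tissue model: a stimulus captures iff it falls outside the ERP."""
    return coupling_ms > ERP_TRUE_MS

def find_erp(lo=100.0, hi=400.0, tol=1.0):
    """Bisection on the coupling interval: 'hi' always captures,
    'lo' always falls into the refractory period."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if excitable(mid):
            hi = mid             # captured -> ERP is shorter than mid
        else:
            lo = mid             # refractory -> ERP is at least mid
    return hi                    # earliest coupling interval that captures

erp = find_erp()
# The next extra-stimulus would be delivered at this coupling interval,
# i.e. at the end of the effective refractory period.
print(abs(erp - ERP_TRUE_MS) <= 1.0)
```

This is what makes the protocol parameter-free: the pacing interval is derived from the tissue response itself rather than chosen a priori.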
OBJECTIVE: Atrial flutter (AFl) is a common arrhythmia that can be categorized according to different self-sustained electrophysiological mechanisms. The non-invasive discrimination of such mechanisms would greatly benefit ablative methods for AFl therapy, as the driving mechanisms would be described prior to the invasive procedure, helping to guide ablation. In the present work, we sought to implement recurrence quantification analysis (RQA) on 12-lead ECG signals from a computational framework to discriminate different electrophysiological mechanisms sustaining AFl. METHODS: 20 different AFl mechanisms were generated in 8 atrial models and propagated into 8 torso models via forward solution, resulting in 1,256 sets of 12-lead ECG signals. Principal component analysis was applied to the 12-lead ECGs, and six RQA-based features were extracted from the most significant principal component scores in two different approaches: individual component RQA and spatially reduced RQA. RESULTS: In both approaches, RQA-based features were significantly sensitive to the dynamic structures underlying different AFl mechanisms. A hit rate as high as 67.7% was achieved when discriminating the 20 AFl mechanisms. RQA-based features estimated for a clinical sample suggested high agreement with the results found in the computational framework. CONCLUSION: RQA has been shown to be an effective method to distinguish different AFl electrophysiological mechanisms in a non-invasive computational framework. A clinical 12-lead ECG used as proof of concept showed the value of both the simulations and the methods. SIGNIFICANCE: The non-invasive discrimination of AFl mechanisms helps to delineate the ablation strategy, reducing the time and resources required to conduct invasive cardiac mapping and ablation procedures.
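A minimal sketch of the recurrence-quantification step, assuming a generic distance-quantile threshold and a simplified determinism proxy (the published work extracts six RQA features from principal component scores; the embedding parameters here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

def recurrence_features(x, dim=3, delay=5, eps_q=0.1):
    """Recurrence rate and a crude determinism proxy for a 1-D signal
    (e.g. a principal-component score of the 12-lead ECG).

    dim/delay: time-delay embedding parameters; eps_q: distance quantile
    used as the recurrence threshold. Simplified stand-in for full RQA."""
    n = len(x) - (dim - 1) * delay
    emb = np.column_stack([x[i * delay: i * delay + n] for i in range(dim)])
    d = np.linalg.norm(emb[:, None] - emb[None, :], axis=-1)
    eps = np.quantile(d, eps_q)
    rp = d <= eps                      # binary recurrence plot
    rec_rate = rp.mean()
    # Fraction of recurrences whose diagonal neighbour is also recurrent;
    # real RQA determinism counts diagonal line lengths instead.
    det = (rp[:-1, :-1] & rp[1:, 1:]).sum() / max(rp.sum(), 1)
    return rec_rate, det

t = np.linspace(0, 4 * np.pi, 400)
rr_p, det_p = recurrence_features(np.sin(t))            # regular dynamics
rr_n, det_n = recurrence_features(rng.normal(size=400)) # irregular dynamics
print(det_p > det_n)
```

The intuition carried over to the paper: regular, macro-reentrant dynamics produce long diagonal structures in the recurrence plot, while disorganized activity does not, which is what makes the features discriminative.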
Acute ischemic stroke is a major health problem with a high mortality rate and a high risk of permanent disability. Selective brain hypothermia has the neuroprotective potential to possibly lower cerebral harm. A recently developed catheter system makes it possible to combine endovascular blood cooling and thrombectomy using the same endovascular access. By using the penumbral perfusion via leptomeningeal collaterals, the catheter aims at enabling cold reperfusion, which mitigates the risk of reperfusion injury. However, cerebral circulation is highly patient-specific and can vary greatly. Since direct measurement of the remaining perfusion and of the temperature decrease induced by the catheter is not possible without additional harm to the patient, computational modeling provides an alternative to gain knowledge about the resulting cerebral temperature decrease. In this work, we present a brain temperature model with a realistic division into gray and white matter and consideration of spatially resolved perfusion. Furthermore, it includes a detailed anatomy of the cerebral circulation with the possibility of personalization based on real patient anatomy. To evaluate the catheter performance in terms of cold reperfusion and to analyze its general performance, we calculated the decrease in brain temperature in case of a large vessel occlusion in the middle cerebral artery (MCA) for different scenarios of cerebral arterial anatomy. Congenital arterial variations in the circle of Willis had a distinct influence on the cooling effect and the resulting spatial temperature distribution before vessel recanalization. Independent of the branching configuration, the model predicted a cold reperfusion due to a strong temperature decrease after recanalization (1.4-2.2 °C after 25 min of cooling, recanalization after 20 min of cooling).
Our model illustrates the effectiveness of endovascular cooling in combination with mechanical thrombectomy and its results serve as an adequate substitute for temperature measurement in a clinical setting in the absence of direct intraparenchymal temperature probes.
C. Nagel, G. Luongo, L. Azzolin, S. Schuler, O. Dössel, and A. Loewe. Non-Invasive and Quantitative Estimation of Left Atrial Fibrosis Based on P Waves of the 12-Lead ECG—A Large-Scale Computational Study Covering Anatomical Variability. In Journal of Clinical Medicine, vol. 10(8) , pp. 1797, 2021
Electrical impedance tomography is clinically used to trace ventilation-related changes in the electrical conductivity of lung tissue. Estimating regional pulmonary perfusion using electrical impedance tomography is still a matter of research. To support clinical decision making, reliable bedside information on pulmonary perfusion is needed. We introduce a method to robustly detect pulmonary perfusion based on indicator-enhanced electrical impedance tomography and validate it against dynamic multidetector computed tomography in two experimental models of acute respiratory distress syndrome. The acute injury was induced in a sublobar segment of the right lung by saline lavage or endotoxin instillation in eight anesthetized, mechanically ventilated pigs. For electrical impedance tomography measurements, a conductive bolus (10% saline solution) was injected into the right ventricle during breath hold. Electrical impedance tomography perfusion images were reconstructed by linear and normalized Gauss-Newton reconstruction on a finite element mesh with subsequent element-wise signal and feature analysis. An iodinated contrast agent was used to compute pulmonary blood flow via dynamic multidetector computed tomography. Spatial perfusion was estimated based on first-pass indicator dilution for both electrical impedance and multidetector computed tomography and compared by Pearson correlation and Bland-Altman analysis. Strong correlation was found in the dorsoventral (r = 0.92) and right-to-left directions (r = 0.85), with good limits of agreement of 8.74% in eight lung segments. With this robust electrical impedance tomography perfusion estimation method, we found strong agreement between multidetector computed tomography and electrical impedance tomography perfusion in healthy and regionally injured lungs and demonstrated the feasibility of electrical impedance tomography perfusion imaging.
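The first-pass indicator dilution principle used for both modalities can be illustrated with toy gamma-variate bolus curves: per region, the indicator time curve is integrated and normalized across regions to yield relative regional perfusion. The `bolus()` model and all parameters below are illustrative assumptions:

```python
import numpy as np

t = np.linspace(0, 20, 400)                       # seconds

def bolus(t, t0, amp, alpha=3.0, beta=1.5):
    """Gamma-variate model of a first-pass indicator curve (toy)."""
    s = np.clip(t - t0, 0, None)
    return amp * s**alpha * np.exp(-s / beta)

# Two image regions: a well-perfused one and a poorly perfused one,
# e.g. an injured segment receiving less of the indicator bolus.
well_perfused = bolus(t, t0=2.0, amp=1.0)
poorly_perfused = bolus(t, t0=2.0, amp=0.3)

# Relative perfusion from the area under each first-pass curve
dt = t[1] - t[0]
auc = np.array([c.sum() for c in (well_perfused, poorly_perfused)]) * dt
rel_perfusion = auc / auc.sum() * 100             # percent of total flow
print(np.round(rel_perfusion, 1))                 # -> [76.9 23.1]
```

In the study the same normalization is applied per lung segment for both the impedance and the CT indicator curves, which is what makes the two modalities directly comparable.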
Diseases caused by alterations of ionic concentrations are frequently observed and play an important role in clinical practice. The clinically established method for the diagnosis of electrolyte concentration imbalance is blood testing. A rapid and non-invasive point-of-care method is, however, still needed. The electrocardiogram (ECG) could meet this need and become an established diagnostic tool allowing home monitoring of electrolyte concentrations, also by wearable devices. In this review, we present the current state of potassium and calcium concentration monitoring using the ECG and summarize results from previous work. Selected clinical studies are presented, supporting or questioning the use of the ECG for the monitoring of electrolyte concentration imbalances. Differences in the findings from automatic monitoring studies are discussed, and current studies utilizing machine learning are presented, demonstrating the potential of the deep learning approach. Furthermore, we demonstrate the potential of computational modeling approaches to gain insight into the mechanisms of relevant clinical findings and as a tool to obtain synthetic data for methodical improvements of monitoring approaches.
Identification of atrial sites that perpetuate atrial fibrillation (AF) and whose ablation terminates AF is challenging. We hypothesized that specific electrogram (EGM) characteristics identify AF-termination sites (AFTS). Twenty-one patients in whom low-voltage-guided ablation after pulmonary vein isolation terminated clinical persistent AF were included. Patients were included if short RF-delivery for < 8 s at a given atrial site was associated with acute termination of clinical persistent AF. EGM characteristics at 21 AFTS, 105 targeted sites without termination and 105 non-targeted control sites were analyzed. Alteration of EGM characteristics by local fibrosis was evaluated in a three-dimensional, high-resolution (100 µm) computational AF model. AFTS demonstrated lower EGM voltage, higher EGM cycle-length coverage, shorter AF cycle length and higher pattern consistency than control sites (0.49 ± 0.39 mV vs. 0.83 ± 0.76 mV, p < 0.0001; 79 ± 16% vs. 59 ± 22%, p = 0.0022; 173 ± 49 ms vs. 198 ± 34 ms, p = 0.047; 80% vs. 30%, p < 0.01). Among targeted sites, AFTS had higher EGM cycle-length coverage, shorter local AF cycle length and higher pattern consistency than targeted sites without AF termination (79 ± 16% vs. 63 ± 23%, p = 0.02; 173 ± 49 ms vs. 210 ± 44 ms, p = 0.002; 80% vs. 40%, p = 0.01). Low-voltage (0.52 ± 0.3 mV) fractionated EGMs (79 ± 24 ms) with delayed components in sinus rhythm ('atrial late potentials', ALP) were observed at 71% of AFTS. EGMs recorded from fibrotic areas in computational models demonstrated comparable EGM characteristics both in simulated AF and sinus rhythm. AFTS may therefore be identified by locally consistent, fractionated low-voltage EGMs with high cycle-length coverage and rapid activity in AF, and by low-voltage, fractionated EGMs with delayed components ('atrial late potentials') persisting in sinus rhythm.
The solution of the inverse problem of electrocardiology allows the reconstruction of the spatial distribution of the electrical activity of the heart from the body surface electrocardiogram (electrocardiographic imaging, ECGI). ECGI using the equivalent dipole layer (EDL) model has been shown to be accurate for cardiac activation times. However, validation of this method for determining repolarization times is lacking. In the present study, we determined the accuracy of the EDL model in reconstructing cardiac repolarization times and assessed the robustness of the method under less ideal conditions (addition of noise and errors in tissue conductivity). A monodomain model was used to determine the transmembrane potentials in three different excitation-repolarization patterns (sinus beat and ventricular ectopic beats) as the gold standard. These were used to calculate the body surface ECGs using a finite element model. The resulting body surface electrograms (ECGs) were used as input for the EDL-based inverse reconstruction of repolarization times. The reconstructed repolarization times correlated well (COR > 0.85) with the gold standard, with almost no decrease in correlation after adding errors in tissue conductivity of the model or noise to the body surface ECG. Therefore, ECGI using the EDL model allows adequate reconstruction of cardiac repolarization times. Graphical abstract: Validation of electrocardiographic imaging for repolarization using forward-calculated body surface ECGs from simulated activation-repolarization sequences.
BACKGROUND: Using cardiovascular magnetic resonance imaging (CMR), it is possible to detect diffuse fibrosis of the left ventricle (LV) in patients with atrial fibrillation (AF), which may be independently associated with recurrence of AF after ablation. Combining CMR with clinical, electrophysiological and biomarker assessment, we investigated LV myocardial fibrosis in patients undergoing AF ablation. METHODS: LV fibrosis was assessed by T1 mapping in 31 patients undergoing percutaneous ablation for AF. Galectin-3, coronary sinus type I collagen C terminal telopeptide (ICTP), and type III procollagen N terminal peptide were measured with ELISA. Comparison was made between groups above and below the median for LV extracellular volume fraction (ECV), followed by regression analysis. RESULTS: On linear regression analysis, LV ECV had significant associations with invasive left atrial pressure (beta 0.49, p = 0.008) and coronary sinus ICTP (beta 0.75, p < 0.001), which remained significant on multivariable regression. CONCLUSION: LV fibrosis in patients with AF is associated with left atrial pressure and invasively measured levels of the ICTP turnover biomarker.
Research software has become a central asset in academic research. It optimizes existing and enables new research methods, implements and embeds research knowledge, and constitutes an essential research product in itself. Research software must be sustainable in order to understand, replicate, reproduce, and build upon existing research or conduct new research effectively. In other words, software must be available, discoverable, usable, and adaptable to new needs, both now and in the future. Research software therefore requires an environment that supports sustainability. Hence, a change is needed in the way research software development and maintenance are currently motivated, incentivized, funded, structurally and infrastructurally supported, and legally treated. Failing to do so will threaten the quality and validity of research. In this paper, we identify challenges for research software sustainability in Germany and beyond, in terms of motivation, selection, research software engineering personnel, funding, infrastructure, and legal aspects. Besides researchers, we specifically address political and academic decision-makers to increase awareness of the importance and needs of sustainable research software practices. In particular, we recommend strategies and measures to create an environment for sustainable research software, with the ultimate goal to ensure that software-driven research is valid, reproducible and sustainable, and that software is recognized as a first class citizen in research. This paper is the outcome of two workshops run in Germany in 2019, at deRSE19 - the first International Conference of Research Software Engineers in Germany - and a dedicated DFG-supported follow-up workshop in Berlin.
L. Azzolin, L. Dedè, A. Gerbi, and A. Quarteroni. Effect of fibre orientation and bulk modulus on the electromechanical modelling of human ventricles. In Mathematics in Engineering, vol. 2(4) , pp. 614-638, 2020
This work concerns the mathematical and numerical modeling of the heart. The aim is to enhance the understanding of the cardiac function in both physiological and pathological conditions. Along this road, a challenge arises from the multi-scale and multi-physics nature of the mathematical problem at hand. In this paper, we propose an electromechanical model that, in bi-ventricular geometries, combines the monodomain equation, the Bueno-Orovio minimal ionic model, and the Holzapfel-Ogden strain energy function for the passive myocardial tissue modelling, together with the active strain approach and a model for the transmurally heterogeneous thickening of the myocardium. Since the distribution of the electric signal depends on the fibre orientation of the ventricles, we use a Laplace-Dirichlet rule-based algorithm to determine the myocardial fibre and sheet configuration in the whole bi-ventricle. In this paper, we study the influence of different fibre directions and of the treatment of incompressibility (exact constraint vs. penalization of the compressibility of the material via the bulk modulus) on the pressure-volume relation while simulating a full heart beat. The coupled electromechanical problem is addressed by means of a fully segregated scheme. The numerical discretization is based on the Finite Element Method for the spatial discretization and on Backward Differentiation Formulas for the time discretization. The non-linear algebraic system arising from the application of the implicit scheme is solved by the Newton method. Numerical simulations are carried out in a patient-specific bi-ventricle geometry to highlight the most relevant results of both electrophysiology and mechanics and to compare them with physiological data and measurements. We show how various fibre configurations and bulk moduli modify relevant clinical quantities such as stroke volume, ejection fraction and ventricle contractility.
The electrophysiological mechanism of sinus node automaticity was previously considered to be exclusively regulated by the so-called "funny current". However, parallel investigations increasingly emphasized the importance of Ca homeostasis and the Na/Ca exchanger (NCX). Recently, increasing experimental evidence, as well as insight through mechanistic modeling, has demonstrated the crucial role of the exchanger in sinus node pacemaking. NCX had a key role in the exciting story of the discovery of sinus node pacemaking mechanisms, which recently settled with a consensus on the coupled-clock mechanism after decades of debate. This review focuses on the role of the Na/Ca exchanger from the early results and concepts to recent advances, and attempts to give a balanced summary of the characteristics of the local, spontaneous, and rhythmic Ca releases, the molecular control of the NCX and its role in the fight-or-flight response. Transgenic animal models and pharmacological manipulation of the intracellular Ca concentration and/or NCX demonstrate the pivotal function of the exchanger in sinus node automaticity. We also highlight where specific hypotheses regarding NCX function have been derived from computational modeling and require experimental validation. The non-selectivity of NCX inhibitors and the complex interplay of processes involved in Ca handling render the design and interpretation of these experiments challenging.
A. Naber, D. Berwanger, and W. Nahm. Geodesic Length Measurement in Medical Images: Effect of the Discretization by the Camera Chip and Quantitative Assessment of Error Reduction Methods. In Photonics, vol. 7(3) , pp. 1-16, 2020
After interventions such as bypass surgeries, the vascular function is checked qualitatively and remotely by observing the blood dynamics inside the vessel via fluorescence angiography. This state-of-the-art method has to be improved by introducing a quantitatively measured blood flow. Previous approaches show that the measured blood flow cannot be easily calibrated against a gold standard reference. In order to systematically address the possible sources of error, we investigated the error in geodesic length measurement caused by spatial discretization on the camera chip. We used an in silico vessel segmentation model based on mathematical functions as a ground truth for the length of vessel-like anatomical structures in continuous space. Discretization errors for the chosen models were typically in the magnitude of 6%. Since this length error would propagate to an unacceptable error in blood flow measurement, counteractions need to be developed. Therefore, different methods for centerline extraction and spatial interpolation were tested and compared with regard to their performance in reducing the discretization error in length measurement by re-continualization. In conclusion, re-continualization of the centerline reduces the discretization error to an acceptable range. The discretization error depends on the complexity of the centerline, and this dependency is also reduced. Centerline extraction by erosion combined with piecewise Bézier curve fitting performed best, reducing the error to 2.7% within acceptable computational time.
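The effect described above can be reproduced in miniature: snapping a continuous centerline to integer pixel rows inflates its measured length, and a simple re-continualization (a moving average here, standing in for the paper's Bézier-based fitting) recovers most of it. The curve, grid spacing and window size are arbitrary illustrative choices:

```python
import numpy as np

xs = np.arange(0, 121)                       # pixel columns
f = lambda x: 20.0 * np.sin(x / 20.0)        # continuous centerline y(x)

# Ground-truth geodesic length from a dense sampling of the curve
xd = np.linspace(0, 120, 20000)
pts = np.column_stack([xd, f(xd)])
true_len = np.linalg.norm(np.diff(pts, axis=0), axis=1).sum()

# Discretization: y snapped to integer pixel rows (camera-chip effect);
# the staircase polyline systematically overestimates the length.
y_pix = np.round(f(xs))
disc_len = np.sqrt(1.0 + np.diff(y_pix) ** 2).sum()

# Re-continualization: moving average with edge-aware windows
k = 2                                        # half-window size
y_sm = np.array([y_pix[max(0, i - k): i + k + 1].mean()
                 for i in range(len(xs))])
smooth_len = np.sqrt(1.0 + np.diff(y_sm) ** 2).sum()

err_disc = abs(disc_len - true_len) / true_len
err_smooth = abs(smooth_len - true_len) / true_len
print(err_smooth < err_disc)
```

A piecewise polynomial or Bézier fit plays the same role as the moving average here, but additionally yields a subpixel-smooth parametric curve whose length depends less on the centerline's complexity.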
S. Pollnow, G. Schwaderlapp, A. Loewe, and O. Dössel. Monitoring the dynamics of acute radiofrequency ablation lesion formation in thin-walled atria – a simultaneous optical and electrical mapping study. In Biomedical Engineering / Biomedizinische Technik, vol. 65(3) , pp. 327-341, 2020
Background: Radiofrequency ablation (RFA) is a common approach to treat cardiac arrhythmias. During this intervention, numerous strategies are applied to indirectly estimate lesion formation. However, the assessment of the spatial extent of these acute injuries needs to be improved in order to create well-defined and durable ablation lesions. Methods: We investigated the electrophysiological characteristics of rat atrial myocardium during an ex vivo RFA procedure with fluorescence-optical and electrical mapping. By analyzing optical data, the temporal growth of punctiform ablation lesions was reconstructed after stepwise RFA sequences. Unipolar electrograms (EGMs) were simultaneously recorded by a multielectrode array (MEA) before and after each RFA sequence. Based on the optical results, we searched for electrical features to delineate these lesions from healthy myocardium. Results: Several unipolar EGM parameters were monotonically decreasing when distances between the electrode and lesion boundary were smaller than 2 mm. The negative component of the unipolar EGM (negative peak amplitude, Aneg) vanished for distances less than 0.4 mm to the lesion boundary. The median peak-to-peak amplitude (Vpp) was decreased by 75% compared to baseline. Conclusion: Aneg and Vpp are excellent parameters to discriminate the growing lesion area from healthy myocardium. The experimental setup opens new opportunities to investigate EGM characteristics of more complex ablation lesions.
From migrating ions through cell tissue to the recording of an ECG: software simulation of the human heart enables tailored therapies. Researchers at the Karlsruhe Institute of Technology (KIT) have developed a computer model of the human heart. The scientists have even advanced to the point where they can tailor the model to the individual characteristics of a single patient. This simulation offers considerable benefits for medical practice.
O. Dössel. Elektrophysiologie der Atrien - Computermodelle liefern Antworten. In Cardio News, vol. 23(4) , pp. 24-25, 2020
OBJECTIVE: Unipolar intracardiac electrograms (uEGMs) measured inside the atria during electro-anatomical mapping contain diagnostic information about cardiac excitation and tissue properties. The ventricular far field (VFF) caused by ventricular depolarization compromises these signals. Current signal processing techniques require several seconds of local uEGMs to remove the VFF component and thus prolong the clinical mapping procedure. We developed an approach to remove the VFF component using data obtained during initial anatomy acquisition. METHODS: We developed two models which can approximate the spatio-temporal distribution of the VFF component based on acquired EGM data: a polynomial fit and a dipole fit. Both were benchmarked based on simulated cardiac excitation in two models of the human heart and applied to clinical data. RESULTS: VFF data acquired in one atrium were used to estimate the model parameters. Under realistic noise conditions, a dipole model approximated the VFF with a median deviation of 0.029 mV, yielding a median VFF attenuation factor of 142. In a different setup, only VFF data acquired at distances of more than 5 mm to the atrial endocardium were used to estimate the model parameters. The VFF component was then extrapolated for a layer of 5 mm thickness lining the endocardial tissue. A median deviation of 0.082 mV (median VFF attenuation factor of 49) was achieved under realistic noise conditions. CONCLUSION: It is feasible to model the VFF component in a personalized way and effectively remove it from uEGMs. SIGNIFICANCE: Application of our novel, simple and computationally inexpensive methods allows immediate diagnostic assessment of uEGM data without prolonging data acquisition.
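A hedged sketch of the dipole-fit idea: with the source location held fixed, the dipole moment that best explains the far-field potentials at the measurement sites follows from linear least squares, and the fitted field can then be subtracted from the signals. Geometry, conductivity and amplitudes below are invented for illustration (the published method also handles the temporal dimension and a polynomial alternative):

```python
import numpy as np

rng = np.random.default_rng(3)
SIGMA = 0.2  # assumed tissue conductivity in S/m (illustrative)

def dipole_potential(p, r0, obs):
    """Potential of a current dipole p at r0, observed at points obs,
    in an infinite homogeneous conductor."""
    d = obs - r0
    r3 = np.linalg.norm(d, axis=1) ** 3
    return (d @ p) / (4 * np.pi * SIGMA * r3)

# "Measured" far-field component at 200 catheter positions: a ventricular
# dipole plus measurement noise (all numbers illustrative)
obs = rng.uniform(-0.04, 0.04, size=(200, 3)) + np.array([0, 0, 0.08])
r0 = np.zeros(3)                             # assumed ventricular source
p_true = np.array([2e-6, -1e-6, 4e-6])       # A*m
phi = dipole_potential(p_true, r0, obs) + 1e-6 * rng.normal(size=200)

# Linear least-squares fit of the dipole moment (r0 held fixed here;
# fitting the location as well would make the problem non-linear)
d = obs - r0
G = d / (4 * np.pi * SIGMA * np.linalg.norm(d, axis=1, keepdims=True) ** 3)
p_hat, *_ = np.linalg.lstsq(G, phi, rcond=None)

# Subtracting the fitted far field leaves only the residual (noise floor)
residual = phi - G @ p_hat
atten = np.linalg.norm(phi) / np.linalg.norm(residual)
print(atten > 10)
```

Because the fit is linear in the dipole moment, the subtraction is cheap enough to run during mapping, which is the point made in the SIGNIFICANCE statement above.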
Background and Purpose: The exact mechanism of spontaneous pacemaking is not fully understood. Recent results suggest tight cooperation between intracellular Ca handling and sarcolemmal ion channels. An important player in this crosstalk is the Na/Ca exchanger (NCX); however, direct pharmacological evidence was unavailable so far because of the lack of a selective inhibitor. We investigated the role of the NCX current in pacemaking and analyzed the functional consequences of the If-NCX coupling by applying the novel selective NCX inhibitor ORM-10962 on the sinus node (SAN). Experimental Approach: Currents were measured by patch-clamp, Ca-transients were monitored by a fluorescent optical method in rabbit SAN cells. Action potentials (AP) were recorded from rabbit SAN tissue preparations. Mechanistic computational data were obtained using the Yaniv et al. SAN model. Key Results: ORM-10962 (ORM) marginally reduced the SAN pacemaking cycle length with a marked increase in the diastolic Ca level as well as the transient amplitude. The bradycardic effect of NCX inhibition was augmented when the funny current (If) was previously inhibited and, vice versa, the effect of If inhibition was augmented when the Ca handling was suppressed. Conclusion and Implications: We confirmed the contribution of the NCX current to cardiac pacemaking using a novel NCX inhibitor. Our experimental and modeling data support a close cooperation between If and NCX, providing an important functional consequence: these currents together establish a strong depolarization capacity, providing an important safety factor for stable pacemaking. Thus, after individual inhibition of If or NCX, excessive bradycardia or instability cannot be expected, because each of these currents may compensate for the reduction of the other, providing safe and rhythmic SAN pacemaking.
Atrial flutter (AFl) is a common heart rhythm disorder driven by different self-sustaining electrophysiological atrial mechanisms. In the present work, we sought to discriminate which mechanism is sustaining the arrhythmia in an individual patient using non-invasive 12-lead electrocardiogram (ECG) signals. Specifically, we analyse the influence of atrial and torso geometries on the success of such discrimination. 2,512 ECGs were simulated and 151 features were extracted from the signals. Three classification scenarios were investigated: random set classification; leave-one-atrium-out (LOAO); and leave-one-torso-out (LOTO). A radial basis neural network classifier achieved test accuracies of 89.84%, 88.98%, and 59.82% for the random set classification, LOTO, and LOAO, respectively. The most discriminative single feature was the F-wave duration (74% test accuracy). Our results show that a machine learning approach can potentially identify a high number of different AFl mechanisms using the 12-lead ECG. More than the 8 atrial models used in this work should be included during training due to the significant influence that the atrial geometry has on the ECG signals and thus on the resulting classification. This non-invasive classification can help to identify the optimal ablation strategy, reducing the time and resources required to conduct invasive cardiac mapping and ablation procedures.
Therapeutic hypothermia (TH) is an approved neuroprotective treatment to reduce neurological morbidity and mortality after hypoxic-ischemic damage related to cardiac arrest and neonatal asphyxia. Also in the treatment of acute ischemic stroke (AIS), which in Western countries still shows a very high mortality rate of about 25%, selective mild TH by means of Targeted Temperature Management (TTM) could potentially decrease the final infarct volume. In this respect, a novel intracarotid blood cooling catheter system has recently been developed, which allows for combined carotid blood cooling and mechanical thrombectomy (MT) and aims at selective mild TH in the affected ischemic brain (core and penumbra). Unfortunately, direct measurement and control of the cooled cerebral temperature so far requires invasive or elaborate MRI-assisted measurements. Computational modeling, on the other hand, provides unique opportunities to predict the resulting cerebral temperatures. In this work, a simplified 3D brain model was generated and coupled with a 1D hemodynamics model to predict spatio-temporal cerebral temperature profiles using finite element modeling. Cerebral blood and tissue temperatures as well as the systemic temperature were analyzed for physiological conditions as well as for a middle cerebral artery (MCA) M1 occlusion. Furthermore, vessel recanalization and its effect on cerebral temperature were analyzed. The results show a significant influence of collateral flow on the cooling effect and are in accordance with experimental data in animals. Our model predicted a possible neuroprotective temperature decrease of 2.5 °C for the territory of MCA perfusion after 60 min of blood cooling, which underlines the potential of the new device and the use of TTM in case of AIS.
C. Nagel, N. Pilia, A. Loewe, and O. Dössel. Quantification of Interpatient 12-lead ECG Variabilities within a Healthy Cohort. In Current Directions in Biomedical Engineering, vol. 6(3) , pp. 493-496, 2020
The morphology of the electrocardiogram (ECG) varies among different healthy subjects due to anatomical and structural reasons, such as for example the shape of the heart geometry or the position and size of surrounding organs in the torso. Knowledge about these ECG morphology changes could be used to parameterize electrophysiological simulations of the human heart. In this work, we detected the boundaries of ECG waveforms, i.e. the P-wave, the QRS-complex and the T-wave, in 12-lead ECGs from 918 healthy subjects in the Physionet Computing in Cardiology Challenge 2020 Database with the IBT openECG toolbox. Subsequently, we obtained the onset, the peak and the offset of each P-wave, QRS-complex and T-wave in the signal. In this way, the duration of the P-wave, the QRS-complex and the T-wave, the PQ-, RR- and QT-intervals as well as the amplitudes of the P-wave, the Q-, R- and S-peaks and the T-wave in each lead were extracted from the 918 healthy ECGs. Their statistical distributions and correlations with each other were assessed. The highest variabilities among the 918 healthy subjects were found for the RR interval and the amplitudes of the QRS-complex. The highest correlation was observed for feature pairs that represent the same feature in different leads. Especially the R-peak amplitudes showed a strong correlation across different leads. The calculated feature distributions can be used to optimize the parameters of populations of cardiac electrophysiological models. In this way, realistic in-silico generated surface ECGs can be simulated at large scale and could be used as input data for machine learning algorithms for a classification of cardiovascular diseases.
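Once wave boundaries and peaks are available, interval features like the ones listed above reduce to simple index arithmetic. A minimal sketch, assuming R-peak detection (e.g. by the openECG toolbox) has already been done; the indices and sampling rate are hypothetical:

```python
import numpy as np

def rr_intervals_ms(r_peaks, fs):
    """Convert R-peak sample indices at sampling rate fs (Hz) to RR intervals in ms."""
    return np.diff(r_peaks) / fs * 1000.0

# Hypothetical R-peak indices from one lead sampled at 500 Hz
r_peaks = np.array([120, 620, 1130, 1640])
rr = rr_intervals_ms(r_peaks, fs=500)
print(rr.tolist())  # → [1000.0, 1020.0, 1020.0]
```

P-wave, QRS and T-wave durations follow the same pattern, as differences between the detected offset and onset indices of each wave.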
End-stage chronic kidney disease (CKD) patients face a 30% higher risk of lethal cardiac events (LCE) compared to non-CKD patients. At the same time, these patients undergoing dialysis experience shifts in their potassium concentrations. The increased risk of LCE paired with the concentration changes suggests a connection between LCE and concentration disbalances. To prove this link, a continuous monitoring device for the ionic concentrations, e.g. via the ECG, is needed. In this work, we want to answer whether an optimised signal processing chain can improve the estimation result and quantify the influence of an imbalanced training dataset on the final estimation result. The study was performed on a dataset consisting of 12-lead ECGs recorded during dialysis sessions of 32 patients. We selected three features to find a mapping from ECG features to [K+]o: T-wave ascending slope, T-wave descending slope and T-wave amplitude. A polynomial model of 3rd order was used to reconstruct the concentrations from these features. We solved a regularised weighted least squares problem with a weighting matrix dependent on the frequency of each concentration in the dataset (frequent concentrations weighted less). By doing so, we tried to generate a model suitable for the whole range of the concentrations. With weighting, errors increase for the whole dataset. For the data partition with [K+]o<5 mmol/l, errors increase; for [K+]o≥5 mmol/l, errors decrease. However, apart from the exact reconstruction results, we can conclude that a model valid for all patients and not only the majority needs to be learned with a more homogeneous dataset. This can be achieved by leaving out data points or by weighting the errors during the model fitting. With increasing weighting, we increase the performance on the part of the [K+]o range that is less frequent, which was desired in our case.
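The inverse-frequency weighting idea can be sketched as follows: bin the observed concentrations, weight each sample inversely to its bin count so that rare values are not drowned out by frequent ones, and solve the regularised weighted normal equations. Binning scheme, regularisation strength, and all names are illustrative assumptions, not the study's code.

```python
import numpy as np

def fit_weighted_poly(features, k_conc, lam=1e-3, order=3, n_bins=10):
    """Regularised weighted least squares: polynomial of `features` -> k_conc."""
    cols = [features**p for p in range(1, order + 1)]
    A = np.column_stack([np.ones_like(k_conc)] + cols)   # design matrix [1, f, f^2, f^3]
    counts, edges = np.histogram(k_conc, bins=n_bins)
    counts = np.maximum(counts, 1)                       # guard against empty bins
    bin_idx = np.clip(np.digitize(k_conc, edges[1:-1]), 0, n_bins - 1)
    w = 1.0 / counts[bin_idx]                            # rare concentrations weighted more
    W = np.diag(w / w.sum())
    # (A' W A + lam*I) c = A' W y
    lhs = A.T @ W @ A + lam * np.eye(A.shape[1])
    rhs = A.T @ W @ k_conc
    return np.linalg.solve(lhs, rhs), A

# Toy data: one feature (e.g. T-wave amplitude), skewed concentration distribution
rng = np.random.default_rng(1)
k = np.concatenate([rng.normal(4.0, 0.3, 200), rng.normal(6.0, 0.3, 20)])
feat = 0.5 * k + rng.normal(0, 0.05, k.size)             # synthetic monotone relation
coef, A = fit_weighted_poly(feat, k)
pred = A @ coef
```

With the weights, the few high-potassium samples contribute as much to the fit as the many normal-range ones, which mirrors the trade-off reported above: errors on the frequent part grow while errors on the rare part shrink.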
This article analyzes the tools and methods used in the analysis of emotions from text for the purpose of managing society. It illustrates the influence of emotions on the management of people, society and business processes, their importance, and the changes that occur over time in the processes of managing society. It also examines the role that social networks play in managing society and the methods they use. We researched different websites and software and observed their mechanisms for data collection, its analysis and its use for the purpose of managing society. In general, we analyze the most common methods, the factors that affect the choice of a particular method, and their influence and, based on our analysis, give recommendations for improving the process of analyzing emotions for the purpose of managing society.
BACKGROUND: Electrical impedance tomography (EIT) with indicator dilution may be clinically useful to measure relative lung perfusion, but there is limited information on the performance of this technique. METHODS: Thirteen pigs (50-66 kg) were anaesthetised and mechanically ventilated. Sequential changes in ventilation were made: (i) right-lung ventilation with left-lung collapse, (ii) two-lung ventilation with optimised PEEP, (iii) two-lung ventilation with zero PEEP after saline lung lavage, (iv) two-lung ventilation with maximum PEEP (20/25 cm H2O to achieve peak airway pressure 45 cm H2O), and (v) two-lung ventilation under unilateral pulmonary artery occlusion. Relative lung perfusion was assessed with EIT and central venous injection of saline 3%, 5%, and 10% (10 ml) during breath holds. Relative perfusion was determined by positron emission tomography (PET) using Gallium-labelled microspheres. EIT and PET were compared in eight regions of equal ventro-dorsal height (ventral, mid-ventral, mid-dorsal, and dorsal, each divided into right and left), and directional changes in regional perfusion were determined. RESULTS: Differences between methods were relatively small (95% of values differed by less than 8.7%, 8.9%, and 9.5% for saline 10%, 5%, and 3%, respectively). Compared with PET, EIT underestimated relative perfusion in dependent, and overestimated it in non-dependent, regions. EIT and PET detected the same direction of change in relative lung perfusion in 68.9-95.9% of measurements. CONCLUSIONS: The agreement between EIT and PET for measuring and tracking changes of relative lung perfusion was satisfactory for clinical purposes. Indicator-based EIT may prove useful for measuring pulmonary perfusion at the bedside.
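The two agreement measures reported above (the span containing 95% of the between-method differences and the fraction of matching directional changes) can be sketched on paired regional perfusion values. The data below are synthetic and the function names are illustrative, not the study's analysis code.

```python
import numpy as np

def agreement_stats(eit, pet):
    """Mean bias and the bound below which 95% of |EIT - PET| differences fall."""
    diff = eit - pet
    span95 = np.percentile(np.abs(diff), 95)
    return diff.mean(), span95

def direction_concordance(eit_series, pet_series):
    """Fraction of condition-to-condition changes with matching sign."""
    d_eit = np.sign(np.diff(eit_series))
    d_pet = np.sign(np.diff(pet_series))
    return np.mean(d_eit == d_pet)

rng = np.random.default_rng(2)
pet = rng.uniform(5, 25, size=(8, 5))           # 8 regions x 5 conditions, in % of total
eit = pet + rng.normal(0, 2, size=pet.shape)    # EIT = reference + measurement error
bias, span = agreement_stats(eit.ravel(), pet.ravel())
conc = direction_concordance(eit, pet)          # per-region changes along conditions
```

The same pattern applies per injected saline concentration, which is how the 8.7%/8.9%/9.5% bounds in the abstract are stratified.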
Y. Lutz, A. Loewe, S. Meckel, O. Dössel, and G. Cattaneo. Combined local hypothermia and recanalization therapy for acute ischemic stroke: Estimation of brain and systemic temperature using an energetic numerical model. In Journal of Thermal Biology, vol. 84, pp. 316-322, 2019
Local brain hypothermia is an attractive method for providing cerebral neuroprotection for ischemic stroke patients and at the same time reducing systemic side effects of cooling. In acute ischemic stroke patients with large vessel occlusion, combination with endovascular mechanical recanalization treatment could potentially allow for an alleviation of inflammatory and apoptotic pathways in the critical phase of reperfusion. The direct cooling of arterial blood by means of an intra-carotid heat exchange catheter compatible with recanalization systems is a novel promising approach. Focusing on the concept of "cold reperfusion", we developed an energetic model to calculate the rate of temperature decrease during intra-carotid cooling in case of physiological as well as decreased perfusion. Additionally, we discussed and considered the effect and biological significance of the temperature decrease on the resulting brain perfusion. Our model predicted a 2 °C brain temperature decrease in 8.3, 11.8 and 26.2 min at perfusion rates of 50, 30 and 10 ml/(100 g·min), respectively. The systemic temperature decrease, caused by the venous blood return to the main circulation, was limited to 0.5 °C in 60 min. Our results underline the potential of catheter-assisted, intracarotid blood cooling to provide a fast and selective brain temperature decrease in the phase of vessel recanalization. This method can potentially allow for tissue hypothermia during the restoration of the physiological flow and thus a "cold reperfusion" in the setting of mechanical recanalization.
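The qualitative behaviour reported above (faster cooling at higher perfusion) follows from a first-order heat balance. A deliberately toy lumped-parameter sketch: treat the perfused territory as one compartment cooled by inflowing blood at catheter temperature, relaxing exponentially with a time constant inversely proportional to perfusion. All parameter values and the scaling are illustrative assumptions and neglect metabolic heat; this is not the paper's energetic model.

```python
import numpy as np

def brain_temperature(t_min, perfusion, t_blood=33.0, t0=37.0):
    """Compartment temperature after t_min minutes of cooling.

    perfusion in ml/(100 g·min); first-order model
    T(t) = Tb + (T0 - Tb) * exp(-t / tau), with tau ~ 1/perfusion.
    """
    tau = 100.0 / perfusion            # minutes; illustrative scaling only
    return t_blood + (t0 - t_blood) * np.exp(-t_min / tau)

# Higher perfusion -> faster approach to the cooled blood temperature
print(brain_temperature(10.0, 50.0) < brain_temperature(10.0, 10.0))  # → True
```

This captures why the predicted time to a 2 °C decrease grows from 8.3 to 26.2 min as perfusion drops from 50 to 10 ml/(100 g·min), even though the real model resolves the heat exchange in far more detail.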
M. Hernández Mesa, N. Pilia, O. Dössel, and A. Loewe. Influence of ECG Lead Reduction Techniques for Extracellular Potassium and Calcium Concentration Estimation. In Current Directions in Biomedical Engineering, vol. 5(1) , pp. 69-72, 2019
Chronic kidney disease (CKD) affects 13% of the worldwide population, and end-stage patients often receive haemodialysis treatment to control their electrolyte concentrations. The cardiovascular death rate is 10%-30% higher in dialysis patients than in the general population. To analyse possible links between electrolyte concentration variations and cardiovascular diseases, a continuous non-invasive monitoring tool enabling the estimation of potassium and calcium concentrations from features of the ECG is desired. Although the ECG was shown to be capable of being used for this purpose, the method still needs improvement. In this study, we examine the influence of lead reduction techniques on the estimation results for serum calcium and potassium concentrations. We used simulated 12-lead ECG signals obtained using an adapted Himeno et al. model. Aiming at a precise estimation of the electrolyte concentrations, we compared the estimation based on standard ECG leads with the estimation using linearly transformed fusion signals. The transformed signals were extracted using two lead reduction techniques: principal component analysis (PCA) and maximum amplitude transformation (MaxAmp). Five features describing the electrolyte changes were calculated from the signals. To reconstruct the ionic concentrations, we applied a first and a third order polynomial regression connecting the calculated features and concentration values. Furthermore, we added 30 dB white Gaussian noise to the ECGs to imitate clinically measured signals. For the noise-free case, the smallest estimation error was achieved with a specific single lead from the standard 12-lead ECG. For example, for a first order polynomial regression, the error was 0.0003±0.0767 mmol/l (mean±standard deviation) for potassium and -0.0036±0.1710 mmol/l for calcium (Wilson lead V1).
For the noisy case, the PCA signal showed the best estimation performance with an error of -0.003±0.2005 mmol/l for potassium and -0.0002±0.2040 mmol/l for calcium (both first order fit). Our results show that PCA as ECG lead reduction technique is more robust against noise than MaxAmp and standard ECG leads for ionic concentration reconstruction.
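The PCA lead-reduction step can be sketched in a few lines: centre the 12 leads, take the leading right singular vector as the principal direction, and use the projection as the fused signal for feature extraction. The synthetic leads below are noisy scalings of one shared waveform; the actual clinical pipeline is more involved.

```python
import numpy as np

def pca_fuse(ecg_leads, n_components=1):
    """ecg_leads: (n_samples, n_leads). Returns fused signal (n_samples, n_components)."""
    centered = ecg_leads - ecg_leads.mean(axis=0)
    # SVD of the centered data; rows of vt are the principal directions
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

# Toy example: 12 leads that are noisy scalings of one underlying waveform
t = np.linspace(0, 1, 500)
source = np.exp(-((t - 0.5) / 0.05) ** 2)          # T-wave-like bump
rng = np.random.default_rng(3)
gains = rng.uniform(0.5, 1.5, 12)
leads = source[:, None] * gains[None, :] + rng.normal(0, 0.01, (500, 12))
fused = pca_fuse(leads)
# The first principal component should track the shared source closely
corr = np.corrcoef(fused[:, 0], source)[0, 1]
print(abs(corr) > 0.99)  # → True
```

Because the first component captures the direction of largest shared variance, uncorrelated noise spread across leads is partially averaged out, which is consistent with the noise robustness of PCA reported above.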
Changes of serum and extracellular ion concentrations occur regularly in patients with chronic kidney disease (CKD). Recently, hypocalcemia, i.e. a decrease of the extracellular calcium concentration [Ca2+]o, has been suggested as a potential pathomechanism contributing to the unexplained high rate of sudden cardiac death (SCD) in CKD patients. In particular, there is a hypothesis that hypocalcaemia could slow down natural pacemaking in the human sinus node to fatal degrees. Here, we address the question whether there are inter-species differences in the response of cellular sinus node pacemaking to changes of [Ca2+]o. Towards this end, we employed computational models of mouse, rabbit and human sinus node cells. The Fabbri et al. human model was updated to consider changes of intracellular ion concentrations. We identified crucial inter-species differences in the response of cellular pacemaking in the sinus node to changes of [Ca2+]o, with little change of cycle length in the mouse and rabbit models (<83 ms) in contrast to a pronounced bradycardic effect in the human model (up to >1000 ms). Our results suggest that experiments with human sinus node cells are required to investigate the potential mechanism of hypocalcaemia-induced bradycardic SCD in CKD patients and that small animal models are not well suited.
Each heartbeat is initiated by cyclic spontaneous depolarization of cardiomyocytes in the sinus node forming the primary natural pacemaker. In patients with end-stage renal disease undergoing hemodialysis, it was recently shown that the heart rate drops to very low values before they suffer from sudden cardiac death with an unexplained high incidence. We hypothesize that the electrolyte changes commonly occurring in these patients affect sinus node beating rate and could be responsible for severe bradycardia. To test this hypothesis, we extended the Fabbri et al. computational model of human sinus node cells to account for the dynamic intracellular balance of ion concentrations. Using this model, we systematically tested the effect of altered extracellular potassium, calcium, and sodium concentrations. Although sodium changes had negligible (0.15 bpm/mM) and potassium changes mild effects (8 bpm/mM), calcium changes markedly affected the beating rate (46 bpm/mM ionized calcium without autonomic control). This pronounced bradycardic effect of hypocalcemia was mediated primarily by attenuation of the L-type calcium current I_CaL due to the reduced driving force, particularly during late depolarization. This, in turn, caused a secondary reduction of the calcium concentration in the intracellular compartments and subsequent attenuation of the inward NCX current I_NaCa and a reduction of intracellular sodium. Our in silico findings are complemented and substantiated by an empirical database study comprising 22,501 pairs of blood samples and in vivo heart rate measurements in hemodialysis patients and healthy individuals. A reduction of extracellular calcium was correlated with a decrease of heart rate by 9.9 bpm/mM total serum calcium (p < 0.001) with intact autonomic control in the cross-sectional population.
In conclusion, we present mechanistic in silico and empirical in vivo data supporting the so far neglected but experimentally testable and potentially important mechanism of hypocalcemia-induced bradycardia and asystole, potentially responsible for the highly increased and so far unexplained risk of sudden cardiac death in the hemodialysis patient population.
Atypical atrial flutter (AFlut) is a reentrant arrhythmia which patients frequently develop after ablation for atrial fibrillation (AF). Indeed, substrate modifications during AF ablation can increase the likelihood to develop AFlut, and it is clinically not feasible to reliably and sensitively test whether a patient is vulnerable to AFlut. Here, we present a novel method based on personalized computational models to identify pathways along which AFlut can be sustained in an individual patient. We build a personalized model of atrial excitation propagation considering the anatomy as well as the spatial distribution of anisotropic conduction velocity and repolarization characteristics based on a combination of a priori knowledge on the population level and information derived from measurements performed in the individual patient. The fast marching scheme is employed to compute activation times for stimuli from all parts of the atria. Potential flutter pathways are then identified by tracing loops from wave front collision sites and constricting them using a geometric snake approach under consideration of the heterogeneous wavelength condition. In this way, all pathways along which AFlut can be sustained are identified. Flutter pathways can be instantiated by using an eikonal-diffusion phase extrapolation approach and a dynamic multifront fast marching simulation. In these dynamic simulations, the initial pattern eventually turns into the one driven by the dominant pathway, which is the only pathway that can be observed clinically. We assessed the sensitivity of the flutter pathway maps with respect to conduction velocity and its anisotropy. Moreover, we demonstrate the application of tailored models considering disease-specific repolarization properties (healthy, AF-remodeled, potassium channel mutations) as well as the applicability to a clinical dataset.
Finally, we tested how the AFlut vulnerability of these substrates is modulated by exemplary antiarrhythmic drugs (amiodarone, dronedarone). Our novel method allows assessing the vulnerability of an individual patient to develop AFlut based on their personal anatomical, electrophysiological, and pharmacological characteristics. In contrast to clinical electrophysiological studies, our computational approach provides the means to identify all possible AFlut pathways and not just the currently dominant one. This makes it possible to consider all relevant AFlut pathways when tailoring clinical ablation therapy in order to reduce the development and recurrence of AFlut.
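The activation-time computation at the core of this method can be illustrated with a toy stand-in: on a graph approximation of the atrial surface, earliest activation from a stimulus node is a shortest-path problem (edge travel time = distance / conduction velocity), which Dijkstra's algorithm solves in the same spirit as the fast marching scheme used in the paper. The mesh and values below are illustrative.

```python
import heapq

def activation_times(adjacency, source, cv=1.0):
    """adjacency: {node: [(neighbour, distance_mm), ...]}; returns activation times in ms."""
    times = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        t, node = heapq.heappop(heap)
        if t > times.get(node, float("inf")):
            continue                          # stale heap entry
        for nb, dist in adjacency[node]:
            t_new = t + dist / cv             # travel time across the edge
            if t_new < times.get(nb, float("inf")):
                times[nb] = t_new
                heapq.heappush(heap, (t_new, nb))
    return times

# 4-node toy mesh: a short path 0→1→3 competes with a longer direct edge 0→3
mesh = {0: [(1, 1.0), (2, 2.0), (3, 5.0)],
        1: [(0, 1.0), (3, 1.0)],
        2: [(0, 2.0), (3, 1.0)],
        3: [(0, 5.0), (1, 1.0), (2, 1.0)]}
times = activation_times(mesh, source=0, cv=0.5)  # cv in mm/ms
print(times[3])  # → 4.0 (path 0→1→3: 2 mm at 0.5 mm/ms)
```

Running this from every candidate stimulus site yields the activation maps from which wave front collision sites, and hence candidate flutter loops, can be traced; the paper's fast marching operates on triangulated surfaces with anisotropic, spatially varying conduction velocity rather than a fixed scalar `cv`.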
O. Dössel. Elektrophysiologie des Vorhofs im Fokus. In Cardio News, vol. 22(11) , pp. 21-22, 2019
Atrial fibrillation (AF) is the most prevalent form of cardiac arrhythmia. The atrial wall thickness (AWT) can potentially improve our understanding of the mechanisms underlying the atrial structure that drives AF and provides important clinical information. However, most existing studies for estimating AWT rely on ruler-based measurements performed on only a few selected locations in 2D or 3D using digital calipers. Only a few studies have developed automatic approaches to estimate the AWT in the left atrium, and there are currently no methods to robustly estimate the AWT of both atrial chambers. Therefore, we have developed a computational pipeline to automatically calculate the 3D AWT across bi-atrial chambers and extensively validated our pipeline on both ex vivo and in vivo human atria data. The atrial geometry was first obtained by segmenting the atrial wall from the MRIs using a novel machine learning approach. The epicardial and endocardial surfaces were then separated using a multi-planar convex hull approach to define boundary conditions, from which a Laplace equation was solved numerically to automatically separate bi-atrial chambers. To robustly estimate the AWT in each atrial chamber, coupled partial differential equations, combining the Laplace solution with two surface trajectory functions, were formulated and solved. Our pipeline enabled the reconstruction and visualization of the 3D AWT for bi-atrial chambers with a relative error of 8% and outperformed existing algorithms by >7%. Our approach can potentially lead to improved clinical diagnosis, patient stratification, and clinical guidance during ablation treatment for patients with AF.
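The Laplace step in such thickness pipelines can be illustrated in one dimension: solve the Laplace equation on the wall with fixed values on the two bounding surfaces; the solution varies smoothly from endocardium to epicardium, and its gradient defines the trajectories along which thickness is measured. The Jacobi iteration and 1D slab below are a minimal sketch, not the paper's 3D solver.

```python
import numpy as np

def laplace_1d(n, iters=2000):
    """Solve the 1D Laplace equation with phi=0 (endocardium) and phi=1 (epicardium)."""
    phi = np.zeros(n)
    phi[-1] = 1.0                                  # epicardial boundary condition
    for _ in range(iters):
        phi[1:-1] = 0.5 * (phi[:-2] + phi[2:])     # Jacobi update; endpoints stay fixed
    return phi

phi = laplace_1d(11)
# For a 1D slab the exact solution is linear between the two boundaries
print(np.allclose(phi, np.linspace(0, 1, 11), atol=1e-4))  # → True
```

In 3D the same harmonic field, with the chambers' surfaces as boundaries, yields non-intersecting level sets between endocardium and epicardium, which is what makes it suitable both for separating the chambers and for defining thickness trajectories.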
Experts agree that learning systems, or machine learning, will also gain great importance in medicine and medical technology in the future, with benefits but also risks for patients, companies and professionals. This raises a wide range of challenges in dealing with machine-learning systems, among others for practical treatment situations, for quality control, for safety in emergency situations, and for the assessment of diagnoses and therapy paths proposed by the computer. This acatech POSITION is the result of a working group of scientists from medicine and engineering. The project group provides an overview of current applications of machine learning in medical technology and highlights important future fields of application. It also focuses on ethical, legal and regulatory aspects as well as critical questions of data protection and possible changes in the physician-patient relationship. In addition to proposals for building large medical databases, this position paper also provides recommendations for action for physicians, research funding institutions, and policymakers.