Abstract
Reference intervals are essential for the interpretation of laboratory test results in medicine. We propose a novel indirect approach to estimate reference intervals from real-world data as an alternative to direct methods, which require samples from healthy individuals. The presented refineR algorithm separates the non-pathological distribution from the pathological distribution of observed test results using an inverse approach and identifies the model that best explains the non-pathological distribution. To evaluate its performance, we simulated test results for six common laboratory analytes with varying location and fraction of pathological test results. Estimated reference intervals were compared to the ground truth, an alternative indirect method (kosmic), and the direct method (N = 120 and N = 400 samples). Overall, refineR achieved the lowest mean percentage error of all methods (2.77%). Considering the proportion of reference intervals within ± 1 total error deviation from the ground truth, refineR (82.5%) was inferior to the direct method with N = 400 samples (90.1%) but outperformed kosmic (70.8%) and the direct method with N = 120 samples (67.4%). Additionally, reference intervals estimated from pediatric data were comparable to published direct method studies. In conclusion, the refineR algorithm enables precise estimation of reference intervals from real-world data and represents a viable complement to the direct method.
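To make these evaluation metrics concrete, here is a minimal sketch (not the study's code) of the percentage error of an estimated reference limit and the check against a ± 1 total error band; the analyte, values, and total-error budget are hypothetical.

```python
# Sketch (not the authors' code) of the two evaluation metrics named above,
# for a single reference limit. The analyte, values, and total-error budget
# below are hypothetical.

def percentage_error(estimated: float, true: float) -> float:
    """Absolute percentage deviation of an estimated reference limit."""
    return abs(estimated - true) / true * 100.0

def within_total_error(estimated: float, true: float, te_pct: float) -> bool:
    """True if the estimate lies within +/- 1 total error of the ground truth."""
    return abs(estimated - true) <= te_pct / 100.0 * true

# Hypothetical hemoglobin upper reference limit in g/dL:
true_url, est_url = 16.0, 16.3
print(percentage_error(est_url, true_url))         # 1.875 %
print(within_total_error(est_url, true_url, 4.0))  # True for a 4 % TE budget
```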
Clinical trials (CTs) are foundational to the advancement of evidence-based medicine, and recruiting a sufficient number of participants is one of the crucial steps to their successful conduct. Yet poor recruitment remains the most frequent reason for the premature discontinuation or costly extension of clinical trials.
We designed and implemented a novel, open-source software system to support the recruitment process in clinical trials by generating automatic recruitment recommendations. The development is guided by modern, cloud-native design principles and based on Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR) as an interoperability standard, with the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM) used as the source of patient data. We evaluated usability with the System Usability Scale (SUS) after deploying the application for use by study personnel.
The implementation is based on the OMOP CDM as a repository of patient data that is continuously queried for possible trial candidates according to the given clinical trial eligibility criteria. A web-based screening list can be used to display the candidates, and email notifications about possible new trial participants can be sent automatically. All interactions between services use HL7 FHIR as the communication standard. The system can be installed using standard container technology and supports more sophisticated deployments on Kubernetes clusters. End-users (n = 19) rated the system with a SUS score of 79.9/100.
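As an illustration of the screening step, the following sketch shows the kind of candidate query such a system might run against an OMOP CDM. The table and column names follow the OMOP CDM, but the concept ID, birth-year criterion, and SQLite driver are hypothetical stand-ins, not recruIT's actual configuration.

```python
# Sketch of the kind of candidate query run against an OMOP CDM. Table and
# column names follow the OMOP CDM; the concept ID, birth-year criterion,
# and SQLite driver are hypothetical stand-ins for the real setup.
import sqlite3

CANDIDATE_QUERY = """
SELECT DISTINCT p.person_id
FROM person p
JOIN condition_occurrence co ON co.person_id = p.person_id
WHERE co.condition_concept_id = :condition_concept_id
  AND p.year_of_birth <= :max_birth_year
"""

def find_candidates(conn: sqlite3.Connection,
                    condition_concept_id: int,
                    max_birth_year: int) -> list[int]:
    """Return person_ids matching a diagnosis and a minimum-age criterion."""
    cur = conn.execute(CANDIDATE_QUERY, {
        "condition_concept_id": condition_concept_id,
        "max_birth_year": max_birth_year,
    })
    return [row[0] for row in cur.fetchall()]
```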
We contribute a novel, open-source implementation to support the patient recruitment process in clinical trials that can be deployed using state-of-the art technologies. According to the SUS score, the system provides good usability.
• recruIT is an open-source trial recruitment support tool based on FHIR and OMOP.
• Uses modern development and deployment practices based on containers and Kubernetes.
• Usability evaluation shows good usability with a SUS score of 79.9/100.
A scoping review of cloud computing in healthcare
Griebel, Lena; Prokosch, Hans-Ulrich; Köpcke, Felix ...
BMC Medical Informatics and Decision Making, 03/2015, Volume 15, Issue 1
Journal Article · Peer-reviewed · Open access
Cloud computing is a recent and fast-growing area of development in healthcare. Ubiquitous, on-demand access to virtually endless resources in combination with a pay-per-use model allows for new ways of developing, delivering and using services. Cloud computing is often used in an "OMICS context", e.g. for computing in genomics, proteomics and molecular medicine, while other fields of application still seem to be underrepresented. Thus, the objective of this scoping review was to identify the current state and hot topics in research on cloud computing in healthcare beyond this traditional domain.
MEDLINE was searched in July 2013 and in December 2014 for publications containing the terms "cloud computing" and "cloud-based". Each journal and conference article was categorized and summarized independently by two researchers who consolidated their findings.
In total, 102 publications were analyzed and six main topics were identified: telemedicine/teleconsultation, medical imaging, public health and patient self-management, hospital management and information systems, therapy, and secondary use of data. Commonly used features are broad network access for sharing and accessing data and rapid elasticity to dynamically adapt to computing demands. Eight articles favor the pay-per-use characteristics of cloud-based services, which avoid upfront investments. Nevertheless, while 22 articles present very general potentials of cloud computing in the medical domain and 66 articles describe conceptual or prototypic projects, only 14 articles report on successful implementations. Furthermore, many articles treat cloud computing as an analogy to internet- or web-based data sharing without really illustrating the characteristics of the particular cloud computing approach.
Even though cloud computing in healthcare is of growing interest, only a few successful implementations exist so far, and many papers use the term "cloud" merely as a synonym for "using virtual machines" or "web-based", with no described benefit of the cloud paradigm. The biggest threat to adoption in the healthcare domain arises from involving external cloud partners: many issues of data safety and security remain to be solved. Until then, cloud computing is favored more for singular, individual features such as elasticity, pay-per-use and broad network access than for the cloud paradigm as a whole.
Appropriate reference intervals are essential when using laboratory test results to guide medical decisions. Conventional approaches for the establishment of reference intervals rely on large samples from healthy and homogeneous reference populations. However, this approach is associated with substantial financial and logistic challenges, subject to ethical restrictions in children, and limited in older individuals due to the high prevalence of chronic morbidities and medication. We implemented an indirect method for reference interval estimation, which uses mixed physiological and abnormal test results from clinical information systems, to overcome these restrictions. The algorithm minimizes the difference between an estimated parametric distribution and a truncated part of the observed distribution, specifically, the Kolmogorov-Smirnov distance between a hypothetical Gaussian distribution and the observed distribution of test results after Box-Cox transformation. Simulations of common laboratory tests with increasing proportions of abnormal test results show reliable reference interval estimation even in challenging simulation scenarios, as long as <20% of test results are abnormal. Additionally, reference intervals generated using samples from a university hospital's laboratory information system with a gradually increasing proportion of abnormal test results remained stable, even when samples from units with a substantial prevalence of pathologies were included. A high-performance open-source C++ implementation is available at https://gitlab.miracum.org/kosmic.
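A heavily simplified sketch of this inverse approach follows. Unlike kosmic, which optimizes the truncation interval and accounts for truncation when fitting, this sketch fixes a central truncation window and fits naively; it merely illustrates the Box-Cox transformation, the Kolmogorov-Smirnov distance, and the back-transformation of the central 95% into a reference interval.

```python
# Simplified illustration of the inverse approach; NOT the kosmic algorithm.
# kosmic searches over truncation intervals and corrects for truncation when
# estimating the Gaussian parameters, which this sketch deliberately skips.
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

def sketch_reference_interval(values, trunc=(0.10, 0.90)):
    """Estimate a 95% reference interval from mixed routine results."""
    values = np.asarray(values, dtype=float)           # must be strictly positive
    transformed, lmbda = stats.boxcox(values)          # Box-Cox transformation
    lo, hi = np.quantile(transformed, trunc)           # fixed truncation window
    core = transformed[(transformed >= lo) & (transformed <= hi)]
    mu, sd = core.mean(), core.std(ddof=1)             # naive Gaussian fit
    ks = stats.kstest(core, "norm", args=(mu, sd)).statistic  # KS distance
    limits = stats.norm.ppf([0.025, 0.975], loc=mu, scale=sd)
    return inv_boxcox(limits, lmbda), ks               # interval on original scale
```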
Medical progress depends on the evaluation of new diagnostic and therapeutic interventions within clinical trials. Clinical trial recruitment support systems (CTRSS) aim to improve the recruitment process in terms of effectiveness and efficiency.
The goals were to (1) create an overview of all CTRSS reported until the end of 2013, (2) find and describe similarities in design, (3) theorize on the reasons for different approaches, and (4) examine whether projects were able to illustrate the impact of CTRSS.
We searched PubMed titles, abstracts, and keywords for terms related to CTRSS research. Query results were classified according to clinical context, workflow integration, knowledge and data sources, reasoning algorithm, and outcome.
A total of 101 papers on 79 different systems were found. Most lacked details in one or more categories. Three different CTRSS designs dominated: (1) systems for the retrospective identification of trial participants based on existing clinical data, typically through Structured Query Language (SQL) queries on relational databases; (2) systems that monitored for a key event in an existing health information technology component, where the occurrence of the event triggered a comprehensive eligibility test for a patient or was directly communicated to the researcher; and (3) independent systems that required a user to enter patient data into an interface to trigger an eligibility assessment. Although older systems required the treating physician to act on behalf of the patient, it is now becoming increasingly popular to offer this possibility directly to the patient.
Many CTRSS are designed to fit the existing infrastructure of a clinical care provider or the particularities of a trial. We conclude that the success of a CTRSS depends more on its successful workflow integration than on sophisticated reasoning and data processing algorithms. Furthermore, some of the most recent literature suggests that an increase in recruited patients and improvements in recruitment efficiency can be expected, although the former will depend on the error rate of the recruitment process being replaced. Finally, to increase the quality of future CTRSS reports, we propose a checklist of items that should be included.
Currently, there is no curative treatment for dementia, so the implementation of preventive measures is of great importance. It is therefore necessary to identify and address individual and modifiable risk factors. Social isolation, defined through social networks, is a factor that may influence the onset and progression of the disease. The networks of older people are mostly composed of either family or friends. The aim of this study is to examine the influence of social isolation and network composition on cognition over the course of 12 months in people with cognitive impairment. The data basis is the multicentre, prospective, longitudinal register study 'Digital Dementia Registry Bavaria (digiDEM Bayern)'. The degree of social isolation was assessed using the Lubben Social Network Scale-Revised (LSNS-R) and the degree of cognitive impairment using the Mini-Mental State Examination (MMSE), conducted at baseline and after 12 months. Data were analysed using a pre-post ANCOVA, adjusted for baseline MMSE, age, gender, education, living situation and Barthel Index. 106 subjects (78.9 ± 8.2 years; 66% female) were included in the analysis. The mean MMSE score at baseline was 24.3 (SD = 3.6). Within the friendship subscore, risk for social isolation was highly prevalent (42.5%). However, after adjusting for common risk factors of cognitive decline, there was no difference over time between individuals with a higher and a lower risk of social isolation within the friendship network, F(1, 98) = 0.046, p = .831, partial η² = .000. The results of this study show that the risk of social isolation from friends is very high among people with cognitive impairment. However, social isolation does not appear to have a substantial influence on the course of cognition. Nevertheless, it is important for people with cognitive impairment to promote and maintain close social contacts with friends.
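For illustration only, a pre-post ANCOVA of this kind could be set up as in the following sketch; the data file and column names are hypothetical placeholders, not the actual digiDEM variables.

```python
# Sketch of a pre-post ANCOVA with statsmodels; all names are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("digidem_sample.csv")  # hypothetical export of the register data

# 12-month MMSE as outcome, isolation-risk group as factor of interest,
# adjusted for baseline MMSE and the covariates named in the abstract.
model = smf.ols(
    "mmse_12m ~ C(isolation_group) + mmse_baseline + age + C(gender)"
    " + C(education) + C(living_situation) + barthel_index",
    data=df,
).fit()
print(sm.stats.anova_lm(model, typ=2))  # F-test for the isolation effect
```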
Abstract
Reference intervals are essential for interpreting laboratory test results. Continuous reference intervals precisely capture the physiological age-specific dynamics that occur throughout life and thus have the potential to improve clinical decision-making. However, established approaches for estimating continuous reference intervals require samples from healthy individuals and are therefore substantially restricted. Indirect methods operating on routine measurements enable the estimation of one-dimensional reference intervals; however, no automated approach exists that integrates the dependency on a continuous covariate such as age. We propose an integrated pipeline for the fully automated estimation of continuous reference intervals, expressed as a generalized additive model for location, scale and shape (GAMLSS), based on discrete model estimates from an indirect method (refineR). The results are free of subjective user input, enable conversion of test results into z-scores and can be integrated into laboratory information systems. Comparison of our results to established and validated reference intervals from the CALIPER and PEDREF studies and manufacturers' package inserts shows good agreement of reference limits, indicating that the proposed pipeline generates high-quality results. In conclusion, the developed pipeline enables the generation of high-precision percentile charts and continuous reference intervals. It represents the first parameter-less and fully automated solution for the indirect estimation of continuous reference intervals.
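As an illustration of the z-score conversion, the following sketch assumes an LMS-type model (the Box-Cox Cole-Green family commonly used for GAMLSS percentile charts); the L, M and S values are invented, and real values would come from the fitted model at the patient's age.

```python
# Z-score under an LMS-type (Box-Cox Cole-Green) model; values are invented.
import math

def lms_z_score(y: float, L: float, M: float, S: float) -> float:
    """Cole's LMS transformation: z-score of measurement y given L, M, S."""
    if abs(L) < 1e-12:                      # limiting case L -> 0
        return math.log(y / M) / S
    return ((y / M) ** L - 1.0) / (L * S)

# Hypothetical example: hemoglobin 14.0 g/dL against L=0.5, M=13.0, S=0.07
print(lms_z_score(14.0, L=0.5, M=13.0, S=0.07))  # roughly z = +1.1
```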
Quantification of DNA methylation in neoplastic cells is crucial from both mechanistic and diagnostic perspectives. However, such measurements are prone to different experimental biases. Polymerase chain reaction (PCR) bias results in an unequal recovery of methylated and unmethylated alleles at the sample preparation step. Post-PCR biases are introduced additionally by the readout processes. Correcting the biases is more practicable than optimising experimental conditions, as demonstrated previously. However, use of our previously developed algorithm strongly necessitates automation. Here, we present two R packages: rBiasCorrection, the core algorithms to correct biases; and BiasCorrector, its web-based graphical user interface frontend. The software detects and analyses experimental biases in calibration DNA samples at single-base resolution using cubic polynomial and hyperbolic regression. The correction coefficients from the better-fitting regression type are employed to compensate for the bias. Three common technologies, bisulphite pyrosequencing, next-generation sequencing and oligonucleotide microarrays, were used to comprehensively test BiasCorrector. We demonstrate the accuracy of BiasCorrector's performance and reveal technology-specific PCR and post-PCR biases. BiasCorrector effectively eliminates biases regardless of their nature, locus, the number of interrogated methylation sites and the detection method, thus representing a user-friendly tool for producing accurate epigenetic results.
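The following sketch illustrates the calibration-curve idea only (it is not the rBiasCorrection algorithm): fit a cubic polynomial and a hyperbolic curve to observed versus true methylation in calibration samples, then invert the fitted curve numerically to correct a new measurement. The calibration data are invented.

```python
# Calibration-curve sketch; data and curve choices are illustrative only.
import numpy as np
from scipy.optimize import curve_fit, brentq

true_m = np.array([0.0, 12.5, 25.0, 50.0, 75.0, 100.0])  # known methylation (%)
obs_m = np.array([0.0, 22.0, 38.0, 63.0, 84.0, 100.0])   # hypothetical biased readout

def hyperbolic(x, a, b, d):
    return a * x / (b + x) + d

poly = np.polynomial.Polynomial.fit(true_m, obs_m, deg=3)  # cubic polynomial fit
hy_params, _ = curve_fit(hyperbolic, true_m, obs_m, p0=(100.0, 25.0, 0.0))

def correct(observed, fitted_curve):
    """Numerically invert the fitted curve to recover the true methylation."""
    return brentq(lambda x: fitted_curve(x) - observed, 0.0, 100.0)

print(correct(50.0, poly))                                  # corrected value (%)
print(correct(50.0, lambda x: hyperbolic(x, *hy_params)))
```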
Indirect methods leverage real-world data for the estimation of reference intervals. These constitute an active field of research, and several methods have been developed recently. So far, no standardized tool for the evaluation and comparison of indirect methods exists.
We provide RIbench, a benchmarking suite for the quantitative evaluation of any existing or novel indirect method. The benchmark contains simulated test sets for 10 biomarkers mimicking routine measurements of a mixed distribution of non-pathological (reference) values and pathological values. The non-pathological distributions represent 4 common distribution types: normal, skewed, heavily skewed, and skewed-and-shifted. To identify strengths and weaknesses of indirect methods, test sets have varying sample sizes, and the pathological distributions differ in location, extent of overlap, and fraction. For performance evaluation, we use an overall benchmark score and sub-scores derived from absolute z-score deviations between estimated and true reference limits. We illustrate the application of RIbench by evaluating and comparing the Hoffmann method and 4 modern indirect methods (TML (Truncated Maximum Likelihood), kosmic, TMC (Truncated Minimum Chi-Square), and refineR) against one another and against a nonparametric direct method (n = 120).
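One plausible formulation of such an absolute z-score deviation is sketched below, assuming a normal non-pathological distribution; RIbench's exact scoring may differ in detail.

```python
# Sketch of an absolute z-score deviation between reference limits, assuming
# the true non-pathological distribution is N(mu, sigma). RIbench's exact
# definition may differ.
def z_score_deviation(est_limit: float, true_limit: float,
                      mu: float, sigma: float) -> float:
    """Deviation of the estimated limit in z-units of the true distribution."""
    z_est = (est_limit - mu) / sigma
    z_true = (true_limit - mu) / sigma
    return abs(z_est - z_true)

# e.g., true distribution N(100, 10); true upper limit 119.6 (= mu + 1.96*sigma)
print(z_score_deviation(121.0, 119.6, mu=100.0, sigma=10.0))  # 0.14
```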
For the modern indirect methods, pathological fraction and sample size had a strong influence on the results: With a pathological fraction up to 20% and a minimum sample size of 5000, most methods achieved results comparable or superior to the direct method.
We present RIbench, an open-source R-package, for the systematic evaluation of existing and novel indirect methods. RIbench can serve as a tool for enhancement of indirect methods, improving the estimation of reference intervals.
BACKGROUND: Medication prescription is a complex process that could benefit from current research and development in machine learning through decision support systems. Pediatricians in particular are forced to prescribe medications "off-label" because children are still underrepresented in clinical studies, which leads to a high risk of incorrect doses and adverse drug effects.
METHODS: PubMed, IEEE Xplore and PROSPERO were searched for relevant studies that developed and evaluated well-performing machine learning algorithms, following the PRISMA statement. Quality assessment was conducted in accordance with the IJMEDI checklist. Identified studies were reviewed in detail, including the variables required for predicting the correct dose, especially for pediatric medication prescription.
RESULTS: The search identified 656 studies, of which 64 were reviewed in detail and 36 met the inclusion criteria. According to the IJMEDI checklist, five studies were considered to be of high quality. Nineteen of the 36 studies dealt with the active substance warfarin. Overall, machine learning algorithms based on decision trees or regression methods performed better in terms of predictive power than algorithms based on neural networks, support vector machines or other methods. The use of ensemble methods like bagging or boosting generally enhanced the accuracy of the dose predictions. The required input and output variables of the algorithms were considerably heterogeneous and differed strongly among the respective substances.
CONCLUSIONS: By using machine learning algorithms, the prescription process could be simplified and dosing correctness could be enhanced. Despite the heterogeneous results among the different substances and cases and the lack of pediatric use cases, the identified approaches and required variables can serve as an excellent starting point for the further development of algorithms predicting drug doses, particularly for children. The combination of physiologically-based pharmacokinetic models with machine learning algorithms in particular represents a great opportunity to enhance the predictive power and accuracy of the developed algorithms.
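To illustrate the ensemble approach highlighted in the results, the following sketch trains a gradient-boosted regressor on synthetic dose data. The features are loosely inspired by typical warfarin predictors (age, weight, genotype category); nothing here reproduces any of the reviewed models.

```python
# Illustrative ensemble (gradient boosting) dose-prediction sketch on
# synthetic data; features and dose formula are invented for demonstration.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(20, 90, n),     # age (years)
    rng.uniform(45, 120, n),    # weight (kg)
    rng.integers(0, 3, n),      # hypothetical genotype category
])
# Synthetic dose with noise, only for demonstration purposes.
y = 0.05 * X[:, 1] - 0.02 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out data:", model.score(X_te, y_te))
```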