A hierarchical approach to learning from visual field data was adopted to identify glaucomatous visual field defect patterns and to detect glaucomatous progression. The analysis pipeline included three stages: clustering, glaucoma boundary limit detection, and glaucoma progression detection testing. First, cross-sectional visual field tests collected from each subject were clustered using a mixture of Gaussians, with model parameters estimated by expectation maximization. Each visual field cluster was then decomposed into several axes to recognize glaucomatous visual field defect patterns, and the defect patterns along each axis were identified. To derive a definition of progression, the longitudinal visual fields of stable glaucoma eyes were projected onto the abnormal cluster axes and the slope was approximated using linear regression (LR) to determine the confidence limit of each axis. For glaucoma progression detection, the longitudinal visual fields of each eye were likewise projected onto the abnormal cluster axes and the slope was approximated by LR. Progression was assigned if the progression rate was greater than the boundary limit of the stable eyes; otherwise, stability was assumed. The proposed method was compared to a recently developed progression detection method and to clinically available glaucoma progression detection software. The clinical accuracy of the proposed pipeline was as good as or better than that of the currently available methods.
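The projection-and-slope test described above can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation: the axis, visual-field vectors, and boundary limit below are toy values.

```python
import numpy as np

def progression_rate(fields, axis, times):
    """Project each visual field onto a defect axis and fit a slope by linear regression."""
    fields = np.asarray(fields, dtype=float)
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)       # unit-length cluster axis
    scores = fields @ axis                   # projection of each field onto the axis
    slope, _ = np.polyfit(times, scores, 1)  # least-squares slope over time
    return slope

def is_progressing(fields, axis, times, boundary_limit):
    """Flag progression when the fitted rate exceeds the stable-eye boundary limit."""
    return progression_rate(fields, axis, times) > boundary_limit

# Toy example: defect depth along the axis worsens by about 1 unit per year.
t = np.array([0.0, 1.0, 2.0, 3.0])
axis = np.array([1.0, 0.0, 0.0])
fields = np.array([[0.0, 5, 5], [1.1, 5, 5], [2.0, 5, 5], [2.9, 5, 5]])
print(is_progressing(fields, axis, t, boundary_limit=0.5))  # True
```

In this sketch the boundary limit plays the role of the confidence limit derived from stable eyes; any eye whose fitted rate exceeds it is labeled progressing.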
Background: The aim of this study was to evaluate the effect of bupivacaine with dexmedetomidine, compared with bupivacaine alone, during ultrasound-guided supraclavicular block on the hemodynamics of patients undergoing upper limb orthopedic surgery. Methods: Eighty patients (40 per group) who were candidates for upper limb orthopedic surgery randomly received 30 ml of bupivacaine alone (group 1) or 30 ml of bupivacaine with 20 μg of dexmedetomidine (group 2). Supraclavicular nerve block was performed under ultrasound guidance. Patients' hemodynamic data (including mean arterial blood pressure, heart rate per minute, respiration rate per minute, and peripheral blood oxygen saturation), onset of action, and duration of sensory-motor block were compared between the two groups. Results: The mean arterial blood pressure during surgery in group 2 was lower than in group 1, but the differences were not statistically significant. The onset of sensory and motor block in group 2 was significantly shorter than in group 1 (P = 0.0001). The duration of sensory and motor block in group 2 was significantly longer than in group 1 (P = 0.0001). During the study, none of the patients had hemodynamic disturbance or surgical complications. Conclusion: In addition to maintaining hemodynamic stability during surgery, adding dexmedetomidine to bupivacaine during supraclavicular block increases the duration of sensory and motor block.
Full text
Available for:
IZUM, KILJ, NUK, PILJ, PNG, SAZU, UL, UM, UPUK
Detection of early clinical keratoconus (KCN) is a challenging task, even for expert clinicians. In this study, we propose a deep learning (DL) model to address this challenge. We first used the Xception and InceptionResNetV2 DL architectures to extract features from three different corneal maps collected from 1371 eyes examined in an eye clinic in Egypt. We then fused the features from Xception and InceptionResNetV2 to detect subclinical forms of KCN more accurately and robustly. We obtained an area under the receiver operating characteristic curve (AUC) of 0.99 and an accuracy range of 97-100% in distinguishing normal eyes from eyes with subclinical and established KCN. We further validated the model on an independent dataset of 213 eyes examined in Iraq and obtained AUCs of 0.91-0.92 and an accuracy range of 88-92%. The proposed model is a step toward improving the detection of clinical and subclinical forms of KCN.
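Feature-level fusion of two backbones, as described above, amounts to concatenating the per-eye feature vectors before classification. The sketch below is a hedged illustration: the arrays stand in for Xception and InceptionResNetV2 pooled outputs and are random placeholders, not real network activations, and the feature dimensions are assumptions.

```python
import numpy as np

def fuse_features(feats_a, feats_b):
    """Concatenate per-eye feature vectors from two extractors along the feature axis."""
    return np.concatenate([feats_a, feats_b], axis=1)

rng = np.random.default_rng(0)
n_eyes = 4
xception_feats = rng.normal(size=(n_eyes, 2048))   # placeholder for Xception pooled features
inception_feats = rng.normal(size=(n_eyes, 1536))  # placeholder for InceptionResNetV2 features
fused = fuse_features(xception_feats, inception_feats)
print(fused.shape)  # (4, 3584)
```

A classifier head trained on the fused vectors can then exploit complementary information from both architectures.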
To assess the performance of convolutional neural networks (CNNs) for automated detection of keratoconus (KC) in standalone Scheimpflug-based dynamic corneal deformation videos.
Retrospective cohort study.
We retrospectively analyzed datasets with records of 734 nonconsecutive refractive surgery candidates and patients with unilateral or bilateral KC.
We first developed a video preprocessing pipeline to translate dynamic corneal deformation videos into 3-dimensional pseudoimage representations and then trained a CNN to directly identify KC from pseudoimages. We calculated the model's KC probability score cut-off and evaluated the performance by subjective and objective accuracy metrics using 2 independent datasets.
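The preprocessing step above, translating a deformation video into a single 3-dimensional pseudoimage, can be sketched by sampling frames and stacking them along the channel axis. This is an illustrative sketch only; the frame counts and image sizes below are arbitrary assumptions, not the study's settings.

```python
import numpy as np

def video_to_pseudoimage(frames, n_samples=3):
    """Sample frames evenly across the video and stack them as image channels."""
    frames = np.asarray(frames, dtype=float)
    idx = np.linspace(0, len(frames) - 1, n_samples).astype(int)
    return np.stack([frames[i] for i in idx], axis=-1)  # shape (H, W, n_samples)

video = np.zeros((100, 64, 64))        # toy video: 100 frames of 64x64 pixels
pseudo = video_to_pseudoimage(video)
print(pseudo.shape)  # (64, 64, 3)
```

A standard image CNN can then consume the pseudoimage directly, which is what makes the video problem tractable with 2-D architectures.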
Area under the receiver operating characteristics curve (AUC), accuracy, specificity, sensitivity, and KC probability score.
The model accuracy on the test subset was 0.89 with AUC of 0.94. Based on the external validation dataset, the AUC and accuracy of the CNN model for detecting KC were 0.93 and 0.88, respectively.
Our deep learning-based approach was highly sensitive and specific in separating normal from keratoconic eyes using dynamic corneal deformation videos at levels that may prove useful in clinical practice.
Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
Keratoconus (KCN) is an eye condition that affects the cornea. The main objective of this study was to evaluate the accuracy of keratoconus detection from corneal parameters, including elevation, topography, and pachymetry, using machine learning algorithms. We developed several machine learning models to detect keratoconus from corneal elevation, topography, and pachymetry parameters obtained from 5881 eyes of 2800 patients in Brazil using a Pentacam Scheimpflug instrument. Elevation parameters provided the highest area under the curve (AUC) of 0.99 in distinguishing normal from keratoconus cases and an AUC of 0.88 in detecting different severity levels when using only the three most promising corneal parameters: minimum curvature radius, eccentricity of the cornea, and asphericity of the cornea. The developed algorithm can distinguish early KCN eyes from healthy eyes with high accuracy, obtaining an AUC of 0.97. From a clinical point of view, the detection of early KCN is very important because KCN patients are often misdiagnosed when early symptoms are subtle. Results suggest that elevation parameters may retain more useful information for detecting keratoconus than historically believed.
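To make the reported AUC figures concrete: the AUC equals the probability that a randomly chosen keratoconus eye receives a higher classifier score than a randomly chosen normal eye (the Mann-Whitney statistic). The sketch below computes it directly; the scores are toy values, not the study's classifier outputs.

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """AUC as the Mann-Whitney win fraction of positive over negative scores."""
    pos = np.asarray(scores_pos, dtype=float)[:, None]
    neg = np.asarray(scores_neg, dtype=float)[None, :]
    wins = (pos > neg).sum() + 0.5 * (pos == neg).sum()  # ties count half
    return wins / (pos.size * neg.size)

kcn_scores = [0.9, 0.8, 0.7, 0.6]     # toy classifier scores for keratoconus eyes
normal_scores = [0.1, 0.2, 0.3, 0.8]  # toy classifier scores for normal eyes
print(auc(kcn_scores, normal_scores))  # 0.84375
```

An AUC of 0.99, as reported for the elevation parameters, means a keratoconus eye outscores a normal eye in 99% of such pairings.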
Objective
To assess the accuracy of probabilistic deep learning models to discriminate normal eyes and eyes with glaucoma from fundus photographs and visual fields.
Design
Algorithm development for discriminating normal and glaucoma eyes using data from a multicenter, cross-sectional, case-control study.
Subjects and participants
Fundus photograph and visual field data from 1,655 eyes of 929 normal and glaucoma subjects to develop and test deep learning models and an independent group of 196 eyes of 98 normal and glaucoma patients to validate deep learning models.
Main outcome measures
Accuracy and area under the receiver-operating characteristic curve (AUC).
Methods
Fundus photographs and OCT images were carefully examined by clinicians to identify glaucomatous optic neuropathy (GON). When GON was detected by the reader, the finding was further evaluated by another clinician. Three probabilistic deep convolutional neural network (CNN) models were developed using 1,655 fundus photographs, 1,655 visual fields, and 1,655 pairs of fundus photographs and visual fields collected from Compass instruments. Deep learning models were trained and tested using 80% of the fundus photographs and visual fields as the training set and the remaining 20% as the testing set. Models were further validated using an independent validation dataset. The performance of the probabilistic deep learning models was compared with that of the corresponding deterministic CNN models.
Results
The AUCs of the deep learning model in detecting glaucoma from fundus photographs, visual fields, and combined modalities using the development dataset were 0.90 (95% confidence interval: 0.89–0.92), 0.89 (0.88–0.91), and 0.94 (0.92–0.96), respectively. The AUCs using the independent validation dataset were 0.94 (0.92–0.95), 0.98 (0.98–0.99), and 0.98 (0.98–0.99), respectively, and the AUCs using an early glaucoma subset were 0.90 (0.88–0.91), 0.74 (0.73–0.75), and 0.91 (0.89–0.93), respectively. Eyes that were misclassified had significantly higher uncertainty in the likelihood of diagnosis than eyes that were classified correctly. The uncertainty level of correctly classified eyes was much lower in the combined model than in the model based on visual fields only. The AUCs of the deterministic CNN model using fundus images, visual fields, and combined modalities were 0.87 (0.85–0.90), 0.88 (0.84–0.91), and 0.91 (0.89–0.94) on the development dataset; 0.91 (0.89–0.93), 0.97 (0.95–0.99), and 0.97 (0.96–0.99) on the independent validation dataset; and 0.88 (0.86–0.91), 0.75 (0.73–0.77), and 0.92 (0.89–0.95) on the early glaucoma subset, respectively.
Conclusion and relevance
Probabilistic deep learning models can detect glaucoma from multi-modal data with high accuracy. Our findings suggest that models based on combined visual field and fundus photograph modalities detect glaucoma with higher accuracy. While probabilistic and deterministic CNN models provided similar performance, probabilistic models also generate a certainty level for the outcome, providing an additional measure of confidence in decision making.
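One common way a probabilistic classifier yields the kind of certainty level discussed above is to run several stochastic forward passes (e.g. Monte Carlo dropout) and summarize the spread of the predicted probabilities. The sketch below uses the predictive entropy of the mean probability as the uncertainty measure; the probabilities are simulated, not drawn from the study's models.

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy (in nats) of the mean predicted probability, binary case."""
    p = float(np.mean(probs))
    p = min(max(p, 1e-12), 1 - 1e-12)  # guard against log(0)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

confident_passes = [0.97, 0.96, 0.98, 0.97]  # stochastic passes agree: high probability
uncertain_passes = [0.35, 0.62, 0.48, 0.55]  # stochastic passes disagree: near 0.5
print(predictive_entropy(confident_passes) < predictive_entropy(uncertain_passes))  # True
```

Eyes whose passes disagree land near the maximum-entropy point of 0.5 and would be flagged for closer clinical review, matching the observation that misclassified eyes carry higher uncertainty.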
In this paper, we present an automated method for article classification, leveraging the power of large language models (LLMs).
The aim of this study is to evaluate the applicability of various LLMs based on the textual content of scientific ophthalmology papers.
We developed a model based on natural language processing techniques, including advanced LLMs, to process and analyze the textual content of scientific papers. Specifically, we used zero-shot learning LLMs and compared Bidirectional and Auto-Regressive Transformers (BART) and its variants with Bidirectional Encoder Representations from Transformers (BERT) and its variants, such as distilBERT, SciBERT, PubmedBERT, and BioBERT. To evaluate the LLMs, we compiled a data set (retinal diseases, RenD) of 1000 ocular disease-related articles, which were expertly annotated by a panel of 6 specialists into 19 distinct categories. In addition to classifying the articles, we analyzed the classified groups to identify patterns and trends in the field.
The classification results demonstrate the effectiveness of LLMs in categorizing a large number of ophthalmology papers without human intervention. The model achieved a mean accuracy of 0.86 and a mean F-score of 0.85 on the RenD data set.
The proposed framework achieves notable improvements in both accuracy and efficiency. Its application in the domain of ophthalmology showcases its potential for knowledge organization and retrieval. We performed a trend analysis that enables researchers and clinicians to easily categorize and retrieve relevant papers, saving time and effort in literature review and information gathering as well as identification of emerging scientific trends within different disciplines. Moreover, the extendibility of the model to other scientific fields broadens its impact in facilitating research and trend analysis across diverse disciplines.
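The zero-shot setup used above can be illustrated by scoring an abstract against each candidate category in a shared vector space and picking the most similar label. The sketch below is a hedged stand-in: the bag-of-words embedding, vocabulary, and labels are toy assumptions, not the study's LLM encoders or the 19 RenD categories.

```python
import numpy as np

def embed(text, vocab):
    """Toy embedding: word-count vector over a fixed vocabulary (LLM stand-in)."""
    words = text.lower().split()
    return np.array([words.count(w) for w in vocab], dtype=float)

def zero_shot_classify(abstract, labels, vocab):
    """Return the label whose embedding is most cosine-similar to the abstract's."""
    a = embed(abstract, vocab)
    sims = []
    for label in labels:
        l = embed(label, vocab)
        denom = np.linalg.norm(a) * np.linalg.norm(l)
        sims.append(a @ l / denom if denom else 0.0)
    return labels[int(np.argmax(sims))]

vocab = ["retina", "glaucoma", "cornea", "macular"]
labels = ["retina disease", "glaucoma", "cornea disease"]
abstract = "macular degeneration affects the retina"
print(zero_shot_classify(abstract, labels, vocab))  # retina disease
```

No labeled training examples are needed: the category names themselves act as the classifier, which is what makes the zero-shot approach attractive for large unannotated corpora.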
The variational Bayesian independent component analysis-mixture model (VIM), an unsupervised machine-learning classifier, was used to automatically separate Matrix Frequency Doubling Technology (FDT) perimetry data into clusters of healthy and glaucomatous eyes, and to identify axes representing statistically independent patterns of defect in the glaucoma clusters.
FDT measurements were obtained from 1,190 eyes with normal FDT results and 786 eyes with abnormal FDT results from the UCSD-based Diagnostic Innovations in Glaucoma Study (DIGS) and African Descent and Glaucoma Evaluation Study (ADAGES). For all eyes, VIM input was 52 threshold test points from the 24-2 test pattern, plus age.
FDT mean deviation was -1.00 dB (S.D. = 2.80 dB) and -5.57 dB (S.D. = 5.09 dB) in FDT-normal eyes and FDT-abnormal eyes, respectively (p<0.001). VIM identified meaningful clusters of FDT data and positioned a set of statistically independent axes through the mean of each cluster. The optimal VIM model separated the FDT fields into 3 clusters. Cluster N contained primarily normal fields (1109/1190, specificity 93.1%), and clusters G1 and G2 combined contained primarily abnormal fields (651/786, sensitivity 82.8%). For clusters G1 and G2, the optimal numbers of axes were 2 and 5, respectively. Patterns automatically generated along axes within the glaucoma clusters were similar to those known to be indicative of glaucoma. Fields located farther from the normal mean on each glaucoma axis showed increasing field defect severity.
VIM successfully separated FDT fields from healthy and glaucoma eyes without a priori information about class membership, and identified familiar glaucomatous patterns of loss.
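The observation that fields farther from the normal mean along a defect axis are more severe can be shown with a minimal numpy sketch. This is an illustration of the projection idea only, not the VIM model; the 3-point "fields", axis, and normal mean below are toy values.

```python
import numpy as np

def axis_severity(field, normal_mean, axis):
    """Signed distance of a field from the normal mean along a unit defect axis."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    return float((np.asarray(field, dtype=float) - normal_mean) @ axis)

normal_mean = np.array([30.0, 30.0, 30.0])  # toy "normal" threshold values (dB)
axis = np.array([-1.0, -1.0, 0.0])          # toy defect pattern: loss in first two points
mild = np.array([27.0, 27.0, 30.0])         # shallow defect matching the pattern
severe = np.array([15.0, 15.0, 30.0])       # deep defect matching the pattern
print(axis_severity(mild, normal_mean, axis) < axis_severity(severe, normal_mean, axis))  # True
```

Each independent axis thus doubles as a severity scale for one recognizable pattern of loss.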
Machine learning models have recently shown great promise in the diagnosis of several ophthalmic disorders, including keratoconus (KCN). Keratoconus, a noninflammatory ectatic corneal disorder characterized by progressive corneal thinning, is challenging to detect as signs may be subtle. Several machine learning models have been proposed to detect KCN; however, most of these models are supervised and thus require large, well-annotated data. This paper proposes a new unsupervised model to detect KCN, based on an adapted flower pollination algorithm (FPA) and the k-means algorithm. We evaluated the proposed models using corneal data collected from 5430 eyes at different stages of KCN severity (1520 healthy, 331 KCN1, 1319 KCN2, 1699 KCN3, and 579 KCN4) from the Department of Ophthalmology and Visual Sciences, Paulista Medical School, Federal University of São Paulo, São Paulo, Brazil, and 1531 eyes (healthy = 400, KCN1 = 378, KCN2 = 285, KCN3 = 200, KCN4 = 88) from the Department of Ophthalmology, Jichi Medical University, Tochigi, Japan, using several accuracy metrics including precision, recall, F-score, and purity. We compared the proposed method with three other standard unsupervised algorithms: k-means, k-medoids, and spectral clustering. On both independent datasets, the proposed model outperformed the other algorithms and thus could provide improved identification of the corneal status of patients with keratoconus.
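As a baseline for the comparison above, plain k-means together with the purity metric can be sketched in numpy. This is a minimal illustration under toy 2-D data; the FPA-seeded variant proposed in the paper is not reproduced here.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means: random initial centers, alternate assignment and mean update."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):                 # skip empty clusters
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def purity(labels, truth):
    """Fraction of samples falling in the majority true class of their cluster."""
    total = 0
    for c in np.unique(labels):
        members = truth[labels == c]
        total += np.bincount(members).max()
    return total / len(truth)

# Two well-separated toy groups in a 2-D corneal-parameter space.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
truth = np.array([0, 0, 0, 1, 1, 1])
labels = kmeans(X, 2)
print(purity(labels, truth))  # 1.0
```

Purity is label-permutation-invariant, which is why it suits unsupervised clustering: only the composition of each cluster matters, not which cluster index it received.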