In recent years, metabarcoding has become the method of choice for investigating the composition and assembly of microbial eukaryotic communities. The number of published environmental data sets has increased very rapidly. Although unprocessed sequence files are often publicly available, processed data, in particular clustered sequences, are rarely available in a usable format. Clustered sequences are reported as operational taxonomic units (OTUs) at different similarity levels or, more recently, as amplicon sequence variants (ASVs). This hampers comparative studies between different environments and data sets, for example examining the biogeographical patterns of specific groups/species, as well as analysing the genetic microdiversity within these groups. Here, we present a newly assembled database of processed 18S rRNA metabarcodes that are annotated with the PR2 reference sequence database. This database, called metaPR2, contains 41 data sets corresponding to more than 4,000 samples and 90,000 ASVs. The database, which is accessible through both a web-based interface and an R package, should prove very useful to all researchers working on protist diversity in a variety of systems.
Geometric morphometric (GM) tools are essential for meaningfully quantifying and understanding patterns of variation in complex traits like shape. In this field, the breadth of answerable questions has grown dramatically in recent years through the development of new analyses and increased computational efficiency.
In this note, we describe the ways in which geomorph, a widely used R package for quantifying and analysing GM data, has grown with the field.
We present geomorph v4.0 and describe the ways in which this version has dramatically improved upon previous versions. We also present a new graphical user interface for easy implementation, gmShiny.
These contributions position geomorph to be the primary tool for GM analyses, particularly those employing a phylogenetic comparative approach.
Released 4 years ago, the Wallace EcoMod application (R package wallace) provided an open‐source and interactive platform for modeling species niches and distributions that served as a reproducible toolbox and educational resource. wallace harnesses R package tools documented in the literature and makes them available via a graphical user interface that runs analyses and returns code to document and reproduce them. Since its release, feedback from users and partners helped identify key areas for advancement, leading to the development of wallace 2. Following the vision of growth by community expansion, the core development team engaged with collaborators and undertook a major restructuring of the application to enable: simplified addition of custom modules to expand methodological options, analyses for multiple species in the same session, improved metadata features, new database connections, and saving/loading sessions. wallace 2 features nine new modules and added functionalities that facilitate data acquisition from climate‐simulation, botanical, and paleontological databases; custom data inputs; model metadata tracking; and citations for R packages used (to promote documentation and give credit to developers). Three of these modules compose a new component for environmental space analyses (e.g., niche overlap). This expansion was paired with outreach to the biogeography and biodiversity communities, including international presentations and workshops that take advantage of the software's extensive guidance text. Additionally, the advances extend accessibility with a cloud‐computing implementation and include a suite of comprehensive unit tests. The features in wallace 2 greatly improve its expandability, breadth of analyses, and reproducibility options, including the use of emerging metadata standards.
The new architecture serves as an example for other modular software, especially those developed using the rapidly proliferating R package shiny, by showcasing straightforward module ingestion and unit testing. Importantly, wallace 2 sets the stage for future expansions, including those enabling biodiversity estimation and threat assessments for conservation.
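The modular design described above (a registry into which custom analysis modules are ingested) can be sketched language-agnostically. This is a minimal illustration of the pattern, not wallace's actual R/shiny module API; all names here are hypothetical.

```python
# Minimal sketch of a module-registry architecture: custom analysis
# functions are "ingested" under a named component, mirroring the idea
# of adding custom modules without touching the core application.
MODULES = {}

def register_module(component, name):
    """Decorator that registers an analysis function under a component
    (e.g. an environmental-space component for niche-overlap analyses)."""
    def wrap(fn):
        MODULES.setdefault(component, {})[name] = fn
        return fn
    return wrap

@register_module("env_space", "niche_overlap")
def niche_overlap(sp1_env, sp2_env):
    # placeholder analysis: overlap of two species' observed
    # environmental ranges along a single axis
    lo = max(min(sp1_env), min(sp2_env))
    hi = min(max(sp1_env), max(sp2_env))
    return max(0.0, hi - lo)

print(sorted(MODULES["env_space"]))   # → ['niche_overlap']
```

The core application only iterates over the registry, so adding a module never requires editing existing components.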
Multiple studies have reported on dermoscopic structures in basal cell carcinoma (BCC) and its subtypes, with varying results.
To systematically review the prevalence of dermoscopic structures in BCC and its subtypes.
Databases and reference lists were searched for relevant trials according to Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Studies were assessed for the relative proportion of BCC dermoscopic features. Random-effects models were used to estimate summary effect sizes.
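The random-effects pooling step can be illustrated with the standard DerSimonian-Laird estimator applied to logit-transformed proportions. This is a generic sketch of that textbook method, not the review's actual analysis code, and the three studies below are hypothetical.

```python
import math

def dl_pooled_proportion(events, totals):
    """DerSimonian-Laird random-effects pooling of proportions
    (e.g. feature prevalences) on the logit scale."""
    y = [math.log(e / (n - e)) for e, n in zip(events, totals)]   # logit effects
    v = [1 / e + 1 / (n - e) for e, n in zip(events, totals)]     # approx. variances
    w = [1 / vi for vi in v]                                      # fixed-effect weights
    sw = sum(w)
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))        # Cochran's Q
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (len(y) - 1)) / c)                       # between-study variance
    wr = [1 / (vi + tau2) for vi in v]                            # random-effects weights
    pooled_logit = sum(wi * yi for wi, yi in zip(wr, y)) / sum(wr)
    return 1 / (1 + math.exp(-pooled_logit))                      # back-transform

# three hypothetical studies reporting a dermoscopic feature
print(round(dl_pooled_proportion([55, 120, 40], [100, 200, 80]), 3))
```

The between-study variance tau² is what distinguishes this from a fixed-effect pool: heterogeneous studies receive more equal weights.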
Included were 31 studies consisting of 5950 BCCs. The most common dermoscopic features seen in BCC were arborizing vessels (59%), shiny white structures (49%), and large blue-grey ovoid nests (34%). Arborizing vessels, ulceration, and blue-grey ovoid nests and globules were most common in nodular BCC; short-fine telangiectasia, multiple small erosions, and leaf-like, spoke wheel and concentric structures in superficial BCC; porcelain white areas and arborizing vessels in morpheaform BCC; and arborizing vessels and ulceration in infiltrative BCC.
Studies had significant heterogeneity. Studies reporting BCC histopathologic subtypes did not provide clinical data on pigmentation of lesions.
In addition to arborizing vessels, shiny white structures are a common feature of BCC. A constellation of dermoscopic features may aid in differentiating between BCC histopathologic subtypes.
Model fit assessment is a central component of evaluating confirmatory factor analysis models and the validity of psychological assessments. Fit indices remain popular and researchers often judge fit with fixed cutoffs derived by Hu and Bentler (1999). Despite their overwhelming popularity, methodological studies have cautioned against fixed cutoffs, noting that the meaning of fit indices varies based on a complex interaction of model characteristics like factor reliability, number of items, and number of factors. Criticism of fixed cutoffs stems primarily from the fact that they were derived from one specific confirmatory factor analysis model and lack generalizability. To address this, we propose a simulation-based method called dynamic fit index cutoffs such that derivation of cutoffs is adaptively tailored to the specific model and data characteristics being evaluated. Unlike previously proposed simulation-based techniques, our method removes existing barriers to implementation by providing an open-source, Web-based Shiny software application that automates the entire process so that users neither need to manually write any software code nor be knowledgeable about foundations of Monte Carlo simulation. Additionally, we extend fit index cutoff derivations to include sets of cutoffs for multiple levels of misspecification. In doing so, fit indices can more closely resemble their originally intended purpose as effect sizes quantifying misfit rather than improperly functioning as ad hoc hypothesis tests. We also provide an approach specifically designed for the nuances of 1-factor models, which have received surprisingly little attention in the literature despite frequent substantive interests in unidimensionality.
Translational Abstract
Evaluating confirmatory factor model fit through the lens of "approximate fit" has enjoyed widespread - though not universal - adoption in empirical studies and is valued for its goal to assess whether the model may be practically useful, even if fit is not exact. An obstacle with approximate fit is that there is ambiguity regarding what is considered "practically useful". Hu and Bentler (1999) addressed this issue with an expansive study to provide guidelines for values indicating that a model demonstrates reasonable approximate fit. Though their suggestions remain widely used today and are ingrained in the literature, their suggested cutoffs have unfortunately been shown to vary widely depending on context. That is, values that indicate great fit in one context may indicate poor fit in another. The inherent problem is that the Hu and Bentler cutoffs are fixed values, so they arbitrarily benefit some models and arbitrarily punish others. Previous literature has suggested custom simulation methods that take the logic of Hu and Bentler's approach and apply it to the individual model being evaluated. In this way, researchers can obtain values indicative of good approximate fit in their specific circumstances to avoid fixed cutoffs. Though a clever solution, such an approach has seen little uptake presumably because many researchers are not well-versed in conducting simulations. This paper addresses the issue by providing a method and software application that builds and executes a Hu-and-Bentler-style simulation from model output. In this way, researchers can benefit from modern computational resources without a deep programming background.
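The core simulation logic is simple to state: simulate the fit index's sampling distribution under a correctly specified model matched to the user's model and data characteristics, then read a tailored cutoff from an upper quantile. The sketch below illustrates only that generic logic; the toy index is a hypothetical stand-in, not the paper's actual cutoff-derivation procedure.

```python
import random

def dynamic_cutoff(simulate_index, n_reps=500, quantile=0.95, seed=1):
    """Derive a model-tailored fit-index cutoff: repeatedly simulate the
    index under a correctly specified model and take an upper quantile.
    `simulate_index` is any user-supplied function returning one
    simulated index value."""
    random.seed(seed)
    draws = sorted(simulate_index() for _ in range(n_reps))
    return draws[int(quantile * n_reps) - 1]

def toy_index(n=300):
    """Hypothetical RMSEA-like index: non-negative, with sampling noise
    that shrinks as sample size n grows."""
    return max(0.0, random.gauss(0.0, 1.0) / n ** 0.5)

cutoff = dynamic_cutoff(toy_index)
```

Because the cutoff is derived from the user's own model characteristics (here, via whatever `simulate_index` encodes), it adapts where a fixed Hu-and-Bentler value cannot.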
Phase-based fringe projection methods have been commonly used for three-dimensional (3D) measurements. However, image saturation results in incorrect intensities in captured fringe pattern images, leading to phase and measurement errors. Existing solutions are complex. This paper proposes an adaptive projection intensity adjustment method to avoid image saturation and maintain good fringe modulation in measuring objects with a high range of surface reflectivities. The adapted fringe patterns are created using only one prior step of fringe-pattern projection and image capture. First, a set of phase-shifted fringe patterns with a maximum projection intensity value of 255 and a uniform gray level pattern are projected onto the surface of an object. The patterns are reflected from and deformed by the object surface and captured by a digital camera. The best projection intensities corresponding to each saturated-pixel cluster are determined by fitting a polynomial function to transform captured intensities to projected intensities. Subsequently, the adapted fringe patterns are constructed using the best projection intensities at the projector pixel coordinates. Finally, the adapted fringe patterns are projected for phase recovery and 3D shape calculation. The experimental results demonstrate that the proposed method achieves high measurement accuracy even for objects with a high range of surface reflectivities.
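The per-cluster adjustment can be sketched under a simplifying assumption of a locally linear camera response (the paper fits a polynomial captured-to-projected mapping instead): if projecting at the maximum level 255 yields a saturated capture, scale the projection so the captured intensity lands near a safe target.

```python
def adapted_intensity(captured_at_255, target=200, max_level=255):
    """Simplified per-cluster projection-intensity adaptation, assuming
    captured intensity scales linearly with projected intensity
    (a hedged stand-in for the paper's polynomial fit)."""
    if captured_at_255 <= 0:
        return max_level                      # dark region: keep full intensity
    best = round(max_level * target / captured_at_255)
    return max(0, min(max_level, best))       # clamp to 8-bit projector range

# a highly reflective cluster that saturates the 8-bit camera at 255
print(adapted_intensity(255))   # → 200
# a dim region: scaling up is clamped at the projector maximum
print(adapted_intensity(120))   # → 255
```

A polynomial mapping, as in the paper, handles the nonlinear gamma of real projector-camera pairs that this linear sketch ignores.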
Species distribution models (SDMs) constitute the most common class of models across ecology, evolution and conservation. The advent of ready‐to‐use software packages and increasing availability of digital geoinformation have considerably assisted the application of SDMs in the past decade, greatly enabling their broader use for informing conservation and management, and for quantifying impacts from global change. However, models must be fit for purpose, with all important aspects of their development and applications properly considered. Despite the widespread use of SDMs, standardisation and documentation of modelling protocols remain limited, which makes it hard to assess whether development steps are appropriate for end use. To address these issues, we propose a standard protocol for reporting SDMs, with an emphasis on describing how a study's objective is achieved through a series of modelling decisions. We call this the ODMAP (Overview, Data, Model, Assessment and Prediction) protocol, as its components reflect the main steps involved in building SDMs and other empirically‐based biodiversity models. The ODMAP protocol serves two main purposes. First, it provides a checklist for authors, detailing key steps for model building and analyses, and thus represents a quick guide and generic workflow for modern SDMs. Second, it introduces a structured format for documenting and communicating the models, ensuring transparency and reproducibility, facilitating peer review and expert evaluation of model quality, as well as meta‐analyses. We detail all elements of ODMAP, and explain how it can be used for different model objectives and applications, and how it complements efforts to store associated metadata and define modelling standards. We illustrate its utility by revisiting nine previously published case studies, and provide an interactive web‐based application to facilitate its use.
We plan to advance ODMAP by encouraging its further refinement and adoption by the scientific community.
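The five-component structure (Overview, Data, Model, Assessment, Prediction) lends itself to a machine-checkable report skeleton. The sketch below is illustrative only: the component names come from the protocol, but the individual field names are hypothetical, not ODMAP's actual element list.

```python
# Minimal sketch of an ODMAP-style reporting skeleton; field names
# within each component are illustrative, not the protocol's exact list.
odmap = {
    "Overview":   {"objective": "", "taxon": "", "location": "", "scale": ""},
    "Data":       {"occurrence_data": "", "environmental_predictors": ""},
    "Model":      {"algorithm": "", "settings": "", "variable_selection": ""},
    "Assessment": {"performance_metrics": "", "plausibility_checks": ""},
    "Prediction": {"output_type": "", "uncertainty": ""},
}

def missing_fields(report):
    """Completeness check: list every unreported element."""
    return [f"{sec}.{key}" for sec, fields in report.items()
            for key, val in fields.items() if not val]

odmap["Overview"]["objective"] = "Map invasion risk under climate change"
print(len(missing_fields(odmap)))   # → 12
```

A structured skeleton like this is what makes the protocol's second purpose, transparent and machine-comparable documentation, practical.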
The current paper highlights a new, interactive Shiny App that can be used to aid in understanding and teaching the important task of conducting a prior sensitivity analysis when implementing Bayesian estimation methods. In this paper, we discuss the importance of examining prior distributions through a sensitivity analysis. We argue that conducting a prior sensitivity analysis is equally important when so-called diffuse priors are implemented as it is with subjective priors. As a proof of concept, we conducted a small simulation study, which illustrates the impact of priors on final model estimates. The findings from the simulation study highlight the importance of conducting a sensitivity analysis of priors. This concept is further extended through an interactive Shiny App that we developed. The Shiny App allows users to explore the impact of various forms of priors using empirical data. We introduce this Shiny App and thoroughly detail an example using a simple multiple regression model that users at all levels can understand. In this paper, we highlight how to determine the different settings for a prior sensitivity analysis, how to visually and statistically compare results obtained in the sensitivity analysis, and how to display findings and write up disparate results obtained across the sensitivity analysis. The goal is that novice users can follow the process outlined here and work within the interactive Shiny App to gain a deeper understanding of the role of prior distributions and the importance of a sensitivity analysis when implementing Bayesian methods. The intended audience is broad (e.g., undergraduate or graduate students, faculty, and other researchers) and can include those with limited exposure to Bayesian methods or the specific model presented here.
When applying secondary analysis to published survival data, it is critical to obtain each patient's raw data, because the individual patient data (IPD) approach has been considered the gold standard of data analysis. However, researchers often lack access to IPD. We aim to propose a straightforward and robust approach to obtain IPD from published survival curves with a user-friendly software platform.
Improving upon existing methods, we propose an easy-to-use, two-stage approach to reconstruct IPD from published Kaplan-Meier (K-M) curves. Stage 1 extracts raw data coordinates and Stage 2 reconstructs IPD using the proposed method. To facilitate the use of the proposed method, we developed the R package IPDfromKM and an accompanying web-based Shiny application. Both the R package and Shiny application have an "all-in-one" feature such that users can use them to extract raw data coordinates from published K-M curves, reconstruct IPD from the extracted data coordinates, visualize the reconstructed IPD, assess the accuracy of the reconstruction, and perform secondary analysis on the basis of the reconstructed IPD. We illustrate the use of the R package and the Shiny application with K-M curves from published studies. Extensive simulations and real-world data applications demonstrate that the proposed method has high accuracy and great reliability in estimating the number of events, number of patients at risk, survival probabilities, median survival times, and hazard ratios.
IPDfromKM has great flexibility and accuracy to reconstruct IPD from published K-M curves with different shapes. We believe that the R package and the Shiny application will greatly facilitate the potential use of quality IPD and advance the use of secondary data to facilitate informed decision making in medical research.
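The reconstruction idea can be illustrated in miniature: each drop in a digitized step curve implies a number of events proportional to the survival decrement among those still at risk. The sketch below assumes no censoring before the last time point, a simplification the published two-stage method avoids by also using the reported numbers at risk; the coordinates are hypothetical.

```python
def reconstruct_ipd(coords, n_init):
    """Toy reconstruction of individual patient data from digitized
    Kaplan-Meier coordinates, assuming no interim censoring.
    coords: list of (time, survival) points from the digitized curve."""
    ipd, at_risk, s_prev = [], n_init, 1.0
    for t, s in coords:
        if s < s_prev:                          # a drop implies events
            d = round(at_risk * (1 - s / s_prev))
            ipd += [(t, 1)] * d                 # (event_time, event=1)
            at_risk -= d
            s_prev = s
    ipd += [(coords[-1][0], 0)] * at_risk       # survivors censored at last time
    return ipd

coords = [(0, 1.0), (6, 0.8), (12, 0.6), (24, 0.45)]
ipd = reconstruct_ipd(coords, 100)
print(len(ipd))                                 # → 100
print(sum(event for _, event in ipd))           # → 55 events
```

With the reconstructed rows in hand, standard survival routines (Kaplan-Meier re-estimation, Cox regression) can be run as if the IPD had been available, which is the secondary-analysis workflow the package supports.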
Investors, practitioners, and stock researchers need financial performance data to assess a company's financial health, which serves as a basis for deciding whether to invest in it. The Indonesia Stock Exchange (IDX) website provides reports on companies' financial performance. Unfortunately, the financial data on the IDX website are in PDF format, and researchers must download them one by one, which takes a long time. This study presents a web-based application, named Indonesia Company Performance (INDCOMP), built using the R programming language and various R packages and frameworks, to assist investors, practitioners, and stock researchers in studying the financial performance of companies. This application helps users quickly access the financial performance data of various companies, presents the data in data tables, and performs data visualizations as well as statistical analyses.