Network Meta-analysis
White, Ian R.
The Stata Journal, 12/2015, Volume 15, Issue 4
Journal Article
Peer reviewed
Open access
Network meta-analysis is a popular way to combine results from several studies (usually randomized trials) comparing several treatments or interventions. It has usually been performed in a Bayesian setting, but recently it has become possible in a frequentist setting using multivariate meta-analysis and meta-regression, implemented in Stata with mvmeta. I describe a suite of Stata programs for network meta-analysis that perform the necessary data manipulation, fit consistency and inconsistency models using mvmeta, and produce various graphics.
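For orientation, a minimal sketch of the workflow such a suite supports, assuming arm-level count data with hypothetical variable names (study identifier study, treatment code trt, event count d, arm size n); this is an illustration under those assumptions, not an excerpt from the article:

    // Hypothetical arm-level data: one row per trial arm (names invented).
    use trials, clear                                  // hypothetical dataset
    network setup d n, studyvar(study) trtvar(trt)     // reshape into network format
    network meta consistency                           // fit the consistency model via mvmeta
    network meta inconsistency                         // fit the inconsistency model
    network forest                                     // graphical summary of the evidence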
Simulation studies are computer experiments that involve creating data by pseudo-random sampling. A key strength of simulation studies is the ability to understand the behavior of statistical methods because some “truth” (usually some parameter/s of interest) is known from the process of generating the data. This allows us to consider properties of methods, such as bias. While widely used, simulation studies are often poorly designed, analyzed, and reported. This tutorial outlines the rationale for using simulation studies and offers guidance for design, execution, analysis, reporting, and presentation. In particular, this tutorial provides a structured approach for planning and reporting simulation studies, which involves defining aims, data-generating mechanisms, estimands, methods, and performance measures (“ADEMP”); coherent terminology for simulation studies; guidance on coding simulation studies; a critical discussion of key performance measures and their estimation; guidance on structuring tabular and graphical presentation of results; and new graphical presentations. With a view to describing recent practice, we review 100 articles taken from Volume 34 of Statistics in Medicine that included at least one simulation study, and we identify areas for improvement.
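As an illustration of the ADEMP structure, a minimal Stata sketch (all names hypothetical, not taken from the tutorial): the data-generating mechanism is N(mu, 1), the estimand is mu, the method is the sample mean, and the performance measure is bias, estimated over 1,000 repetitions.

    // One repetition: draw n observations from N(mu, 1), return the sample mean.
    capture program drop onerep
    program define onerep, rclass
        syntax [, n(integer 100) mu(real 0)]
        clear
        set obs `n'
        generate y = `mu' + rnormal()
        summarize y, meanonly
        return scalar muhat = r(mean)
    end

    // Run the simulation and estimate bias against the known truth mu = 1.
    simulate muhat = r(muhat), reps(1000) seed(2019): onerep, n(50) mu(1)
    summarize muhat
    display "estimated bias = " %8.4f r(mean) - 1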
Optofluidics - the synergistic integration of photonics and microfluidics - has recently emerged as a new analytical field that provides a number of unique characteristics for enhanced sensing performance and simplification of microsystems. In this review, we describe various optofluidic architectures developed in the past five years, emphasize the mechanisms by which optofluidics enhances bio/chemical analysis capabilities, including sensing and the precise control of biological micro/nanoparticles, and envision new research directions to which optofluidics leads.
Missing data are a common occurrence in real datasets. For epidemiological and prognostic factors studies in medicine, multiple imputation is becoming the standard route to estimating models with missing covariate data under a missing-at-random assumption. We describe ice, an implementation in Stata of the MICE approach to multiple imputation. Real data from an observational study in ovarian cancer are used to illustrate the most important of the many options available with ice. We remark briefly on the new database architecture and procedures for multiple imputation introduced in releases 11 and 12 of Stata.
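A minimal sketch of the ice workflow described above, with hypothetical dataset and variable names (the article's own examples use the ovarian-cancer study data, which are not reproduced here): 5 imputations, 10 cycles of chained equations, then analysis via Stata's mi suite.

    // Hypothetical variables: binary outcome died, covariates with missing values.
    use ovary, clear                                   // hypothetical dataset
    ice died age bmi grade, m(5) cycles(10) saving(imputed, replace) seed(2012)
    use imputed, clear
    mi import ice, automatic                           // convert to mi format, register imputed variables
    mi estimate: logit died age bmi grade              // pool results across imputations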
Multiple imputation is a commonly used method for handling incomplete covariates as it can provide valid inference when data are missing at random. This depends on being able to correctly specify the parametric model used to impute missing values, which may be difficult in many realistic settings. Imputation by predictive mean matching (PMM) borrows an observed value from a donor with a similar predictive mean; imputation by local residual draws (LRD) instead borrows the donor's residual. Both methods relax some assumptions of parametric imputation, promising greater robustness when the imputation model is misspecified.
We review the development of PMM and LRD, outline the various forms available, and aim to clarify some choices about how and when they should be used. We compare performance to fully parametric imputation in simulation studies, first when the imputation model is correctly specified and then when it is misspecified.
In using PMM or LRD, we strongly caution against using a single donor, the default value in some implementations, and instead advocate sampling from a pool of around 10 donors. We also clarify which matching metric is best. Among current MI software there are several poor implementations.
PMM and LRD may have a role for imputing covariates (i) which are not strongly associated with outcome, and (ii) when the imputation model is thought to be slightly but not grossly misspecified. Researchers should spend effort on specifying the imputation model correctly, rather than expecting predictive mean matching or local residual draws to do the work.
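For illustration, a minimal Stata sketch of PMM with the roughly 10-donor pool the authors advocate, using official mi impute pmm and hypothetical variable names (not the authors' code):

    // Hypothetical data: bmi partially observed; age, sex, sbp complete.
    use cohort, clear                                  // hypothetical dataset
    mi set wide
    mi register imputed bmi
    // knn(10): each imputed value is drawn from the 10 donors whose
    // predictive means are closest, per the paper's advice.
    mi impute pmm bmi = age i.sex sbp, add(10) knn(10) rseed(2015)
    mi estimate: regress sbp bmi age i.sex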
For decades surface enhanced Raman spectroscopy (SERS) has been intensely investigated as a possible solution for performing analytical chemistry at the point of sample origin. Unfortunately, due to cost and usability constraints, conventional rigid SERS sensors and microfluidic SERS sensors have yet to make a significant impact outside of the realm of academics. However, the recently introduced flexible and porous paper-based SERS sensors are proving to be widely adaptable to realistic usage cases in the field. In contrast to rigid and microfluidic SERS sensors, paper SERS sensors feature (i) the potential for roll-to-roll manufacturing methods that enable low sensor cost, (ii) simple sample collection directly onto the sensor via swabbing or dipping, and (iii) equipment-free separations for sample cleanup. In this review we argue that movement to paper-based SERS sensors will finally enable point-of-sample analytical chemistry applications. In addition, we present and compare the numerous fabrication techniques for paper SERS sensors and we describe various sample collection and sample clean-up capabilities of paper SERS sensors, with a focus on how these features enable practical applications in the field. Finally, we present our expectations for the future, including emerging ideas inspired by paper SERS.
• SERS sensors on paper offer one of the best opportunities for analytical chemistry in the field.
• Paper SERS sensors have a lower production cost than conventional and microfluidic SERS sensors.
• Paper SERS sensors innately offer simple sample collection and sample processing capabilities.
• Emerging methods inspired by paper SERS sensors may offer unprecedented ease of use.
An extension of mvmeta, my program for multivariate random-effects meta-analysis, is described. The extension handles meta-regression. Estimation methods available are restricted maximum likelihood, maximum likelihood, method of moments, and fixed effects. The program also allows a wider range of models (Riley's overall correlation model and structured between-studies covariance); better estimation (using Mata for speed and correctly allowing for missing data); and new postestimation facilities (I-squared, standard errors and confidence intervals for between-studies standard deviations and correlations, and identification of the best intervention). The program is illustrated using a multiple-treatments meta-analysis.
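A minimal sketch of the basic calls, assuming wide-form data with hypothetical names (point estimates y1 y2 and within-study variances and covariance S11 S12 S22); the meta-regression form, with a study-level covariate listed after the estimate and variance stubs, is my reading of the program's syntax rather than a quotation from the article:

    use widedata, clear                  // hypothetical wide-form dataset
    mvmeta y S, reml                     // bivariate random-effects meta-analysis (REML)
    mvmeta y S latitude, reml            // meta-regression on a study-level covariate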
Missing data are a common occurrence in clinical research. Missing data occur when the values of the variables of interest are not measured or recorded for all subjects in the sample. Common approaches to addressing the presence of missing data include complete-case analyses, where subjects with missing data are excluded, and mean-value imputation, where missing values are replaced with the mean value of that variable in those subjects for whom it is not missing. However, in many settings, these approaches can lead to biased estimates of statistics (eg, of regression coefficients) and/or confidence intervals that are artificially narrow. Multiple imputation (MI) is a popular approach for addressing the presence of missing data. With MI, multiple plausible values of a given variable are imputed or filled in for each subject who has missing data for that variable. This results in the creation of multiple completed data sets. Identical statistical analyses are conducted in each of these complete data sets and the results are pooled across complete data sets. We provide an introduction to MI and discuss issues in its implementation, including developing the imputation model, how many imputed data sets to create, and addressing derived variables. We illustrate the application of MI through an analysis of data on patients hospitalised with heart failure. We focus on developing a model to estimate the probability of 1-year mortality in the presence of missing data. Statistical software code for conducting MI in R, SAS, and Stata is provided.
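For concreteness, a minimal Stata sketch of the MI workflow described above, with hypothetical variable names (not the authors' code): impute, fit the same model in each completed dataset, and pool with Rubin's rules via mi estimate.

    // Hypothetical heart-failure data: sbp and sodium partially observed.
    use heartfailure, clear                            // hypothetical dataset
    mi set wide
    mi register imputed sbp sodium
    // Include the outcome in the imputation model; create 20 imputations.
    mi impute chained (regress) sbp sodium = age i.sex died, add(20) rseed(2021)
    mi estimate, or: logistic died sbp sodium age i.sex   // pooled odds ratios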
Background: Fetal growth restriction is a major determinant of adverse perinatal outcome. Screening procedures for fetal growth restriction need to identify small babies and then differentiate between those that are healthy and those that are pathologically small. We sought to determine the diagnostic effectiveness of universal ultrasonic fetal biometry in the third trimester as a screening test for small-for-gestational-age (SGA) infants, and whether the risk of morbidity associated with being small differed in the presence or absence of ultrasonic markers of fetal growth restriction.
Methods: The Pregnancy Outcome Prediction (POP) study was a prospective cohort study of nulliparous women with a viable singleton pregnancy at the time of the dating ultrasound scan. Participating women had clinically indicated ultrasonography in the third trimester as per routine clinical care, and these results were reported as usual (selective ultrasonography). Additionally, all participants had research ultrasonography, including fetal biometry at 28 and 36 weeks' gestational age. These results were not made available to participants or treating clinicians (universal ultrasonography). We regarded SGA as a birthweight of less than the 10th percentile for gestational age, and screen-positive for SGA as an ultrasonographic estimated fetal weight of less than the 10th percentile for gestational age. Markers of fetal growth restriction included biometric ratios, utero-placental Doppler, and fetal growth velocity. We assessed outcomes for consenting participants who attended research scans and had a livebirth at the Rosie Hospital (Cambridge, UK) after the 28 weeks' research scan.
Findings: Between Jan 14, 2008, and July 31, 2012, 4512 women provided written informed consent, of whom 3977 (88%) were eligible for analysis. Sensitivity for detection of SGA infants was 20% (95% CI 15–24; 69 of 352 fetuses) for selective ultrasonography and 57% (51–62; 199 of 352 fetuses) for universal ultrasonography (relative sensitivity 2·9, 95% CI 2·4–3·5, p<0·0001). Of the 3977 fetuses, 562 (14·1%) were identified by universal ultrasonography with an estimated fetal weight of less than the 10th percentile and were at an increased risk of neonatal morbidity (relative risk [RR] 1·60, 95% CI 1·22–2·09, p=0·0012). However, estimated fetal weight of less than the 10th percentile was only associated with the risk of neonatal morbidity (p for interaction=0·005) if the fetal abdominal circumference growth velocity was in the lowest decile (RR 3·9, 95% CI 1·9–8·1, p=0·0001). 172 (4%) of 3977 pregnancies had both an estimated fetal weight of less than the 10th percentile and abdominal circumference growth velocity in the lowest decile, and had a relative risk of delivering an SGA infant with neonatal morbidity of 17·6 (9·2–34·0, p<0·0001).
Interpretation: Screening of nulliparous women with universal third-trimester fetal biometry roughly tripled detection of SGA infants. Combined analysis of fetal biometry and fetal growth velocity identified a subset of SGA fetuses that were at increased risk of neonatal morbidity.
Funding: National Institute for Health Research, Medical Research Council, Sands, and GE Healthcare.