Objective
To estimate the effect of estrogen‐only and combined hormone replacement therapy (HRT) on the hazards of overall and age‐specific all‐cause mortality in healthy women aged 46–65 at first prescription.
Design
Matched cohort study.
Setting
Electronic primary care records from The Health Improvement Network (THIN) database, UK (1984−2017).
Population
105 199 HRT users (cases) and 224 643 non‐users (controls) matched on age and general practice.
Methods
Weibull‐Double‐Cox regression models adjusted for age at first treatment, birth cohort, type 2 diabetes, hypertension and hypertension treatment, coronary heart disease, oophorectomy, hysterectomy, body mass index, smoking and deprivation status.
Main outcome measures
All‐cause mortality.
Results
A total of 21 751 women died over an average of 13.5 years follow‐up per participant, of whom 6329 were users and 15 422 non‐users. The adjusted hazard ratio (HR) of overall all‐cause mortality in combined HRT users was 0.91 (95% CI 0.88−0.94), and in estrogen‐only users was 0.99 (0.93−1.07), compared with non‐users. Age‐specific adjusted HRs for participants aged 46–50, 51–55, 56–60 and 61–65 years at first treatment were 0.98 (0.92−1.04), 0.87 (0.82−0.92), 0.88 (0.82−0.93) and 0.92 (0.85−0.98) for combined HRT users compared with non‐users, and 1.01 (0.84−1.21), 1.03 (0.89−1.18), 0.98 (0.86−1.12) and 0.93 (0.81−1.07) for estrogen‐only users, respectively.
Conclusions
Combined HRT was associated with a 9% lower risk of all‐cause mortality, and the estrogen‐only formulation was not associated with any significant change in mortality.
Tweetable
Estrogen‐only HRT is not associated with all‐cause mortality, and combined HRT reduces the risk.
Systematic reviews and meta-analyses of binary outcomes are widespread in all areas of application. The odds ratio, in particular, is by far the most popular effect measure. However, the standard meta-analysis of odds ratios using a random-effects model has a number of potential problems. An attractive alternative approach for the meta-analysis of binary outcomes uses a class of generalized linear mixed models (GLMMs). GLMMs are believed to overcome the problems of the standard random-effects model because they use a correct binomial-normal likelihood. However, this belief is based on theoretical considerations, and no sufficiently extensive simulations have assessed the performance of GLMMs in meta-analysis. This gap may be due to the computational complexity of these models and the resulting considerable time requirements.
The present study is the first to provide extensive simulations on the performance of four GLMM methods (models with fixed and random study effects and two conditional methods) for meta-analysis of odds ratios in comparison to the standard random effects model.
In our simulations, the hypergeometric-normal model provided less biased estimation of the heterogeneity variance than the standard random-effects meta-analysis using the restricted maximum likelihood (REML) estimation when the data were sparse, but the REML method performed similarly for the point estimation of the odds ratio, and better for the interval estimation.
It is difficult to recommend the use of GLMMs in the practice of meta-analysis. The problem of finding uniformly good methods of the meta-analysis for binary outcomes is still open.
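The standard two-stage random-effects analysis that serves as the baseline in these comparisons can be sketched in a few lines (a minimal illustration, not the code used in the study; the data and function names are our own):

```python
import math

def log_or_and_var(a, b, c, d):
    """Log-odds-ratio and its large-sample variance from a 2x2 table
    (a, b = events/non-events in the treatment arm; c, d = in the
    control arm), with the usual 0.5 correction for zero cells."""
    if 0 in (a, b, c, d):
        a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
    y = math.log(a * d / (b * c))
    v = 1 / a + 1 / b + 1 / c + 1 / d
    return y, v

def dersimonian_laird(effects, variances):
    """Two-stage random-effects meta-analysis: DerSimonian-Laird
    moment estimate of tau^2, then the inverse-variance-weighted
    overall effect and its standard error."""
    w = [1 / v for v in variances]
    sw = sum(w)
    y_fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, effects))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # truncated at zero
    w_star = [1 / (v + tau2) for v in variances]
    mu = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return mu, se, tau2
```

The truncation of the moment estimate at zero is one source of the bias discussed in these abstracts, especially when the data are sparse.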
Methods for random‐effects meta‐analysis require an estimate of the between‐study variance, τ2. The performance of estimators of τ2 (measured by bias and coverage) affects their usefulness in assessing heterogeneity of study‐level effects and also the performance of related estimators of the overall effect. However, as we show, the performance of the methods varies widely among effect measures. For the effect measures mean difference (MD) and standardized MD (SMD), we use improved effect‐measure‐specific approximations to the expected value of Q to introduce two new methods of point estimation of τ2 for MD (Welch‐type and corrected DerSimonian‐Laird) and one Welch‐type (WT) interval method. We also introduce one point estimator and one interval estimator of τ2 for SMD. Extensive simulations compare our methods with four point estimators of τ2 (the popular methods of DerSimonian‐Laird, restricted maximum likelihood, and Mandel and Paule, and the less‐familiar method of Jackson) and four interval estimators for τ2 (profile likelihood, Q‐profile, Biggerstaff and Jackson, and Jackson). We also study related point and interval estimators of the overall effect, including an estimator whose weights use only study‐level sample sizes. We provide measure‐specific recommendations from our comprehensive simulation study and discuss an example.
Contemporary statistical publications rely on simulation to evaluate performance of new methods and compare them with established methods. In the context of random-effects meta-analysis of log-odds-ratios, we investigate how choices in generating data affect such conclusions. The choices we study include the overall log-odds-ratio, the distribution of probabilities in the control arm, and the distribution of study-level sample sizes. We retain the customary normal distribution of study-level effects. To examine the impact of the components of simulations, we assess the performance of the best available inverse-variance-weighted two-stage method, a two-stage method with constant sample-size-based weights, and two generalized linear mixed models. The results show no important differences between fixed and random sample sizes. In contrast, we found differences among data-generation models in estimation of heterogeneity variance and overall log-odds-ratio. This sensitivity to design poses challenges for use of simulation in choosing methods of meta-analysis.
Cochran's Q statistic is routinely used for testing heterogeneity in meta‐analysis. Its expected value is also used in several popular estimators of the between‐study variance, τ2. Those applications generally have not considered the implications of its use of estimated variances in the inverse‐variance weights. Importantly, those weights make approximating the distribution of Q (more explicitly, QIV) rather complicated. As an alternative, we investigate a new Q statistic, QF, whose constant weights use only the studies' effective sample sizes. For the standardized mean difference as the measure of effect, we study, by simulation, approximations to the distributions of QIV and QF, as the basis for tests of heterogeneity and for new point and interval estimators of τ2. These include new DerSimonian–Kacker‐type moment estimators based on the first moment of QF, and novel median‐unbiased estimators. The results show that: an approximation based on an algorithm of Farebrother follows both the null and the alternative distributions of QF reasonably well, whereas the usual chi‐squared approximation for the null distribution of QIV and the Biggerstaff–Jackson approximation to its alternative distribution are poor; in estimating τ2, our moment estimator based on QF is almost unbiased, the Mandel–Paule estimator has some negative bias in some situations, and the DerSimonian–Laird and restricted maximum likelihood estimators have considerable negative bias; and all 95% interval estimators have coverage that is too high when τ2 = 0, but otherwise the Q‐profile interval performs very well.
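The contrast between the two heterogeneity statistics can be illustrated directly (a sketch under stated assumptions, not the authors' code: the constant weights are taken as the two-group effective sample sizes n1·n2/(n1+n2), and the function name is ours):

```python
def q_statistics(effects, variances, n_treat, n_ctrl):
    """Cochran's Q computed two ways: Q_IV with estimated
    inverse-variance weights, and Q_F with constant weights equal to
    the studies' effective sample sizes n1*n2/(n1+n2)."""
    # Q with inverse-variance weights (depends on estimated variances)
    w_iv = [1.0 / v for v in variances]
    ybar_iv = sum(w * y for w, y in zip(w_iv, effects)) / sum(w_iv)
    q_iv = sum(w * (y - ybar_iv) ** 2 for w, y in zip(w_iv, effects))

    # Q with constant sample-size-based weights
    w_f = [n1 * n2 / (n1 + n2) for n1, n2 in zip(n_treat, n_ctrl)]
    ybar_f = sum(w * y for w, y in zip(w_f, effects)) / sum(w_f)
    q_f = sum(w * (y - ybar_f) ** 2 for w, y in zip(w_f, effects))
    return q_iv, q_f
```

Because the weights in QF do not involve estimated variances, its distribution is a fixed quadratic form in the effects, which is what makes the Farebrother-type approximation tractable.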
For outcomes that studies report as the means in the treatment and control groups, some medical applications and nearly half of meta-analyses in ecology express the effect as the ratio of means (RoM), also called the response ratio (RR), analyzed in the logarithmic scale as the log-response-ratio, LRR.
In random-effects meta-analysis of LRR, with normal and lognormal data, we studied the performance of estimators of the between-study variance, τ2 (measured by bias and coverage), in assessing heterogeneity of study-level effects, and also the performance of related estimators of the overall effect in the log scale, λ. We obtained additional empirical evidence from two examples.
The results of our extensive simulations showed several challenges in using LRR as an effect measure. Point estimators of τ2 had considerable bias or were unreliable, and interval estimators of τ2 seldom had the intended 95% coverage for small to moderate-sized samples (n < 40). Results for estimating λ differed between lognormal and normal data.
For lognormal data, we can recommend only SSW, a weighted average in which a study's weight is proportional to its effective sample size (when n ≥ 40), and its companion interval (when n ≥ 10). Normal data posed greater challenges. When the means were far enough from 0 (more than one standard deviation; 4 in our simulations), SSW was practically unbiased, and its companion interval was the only option.
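As an illustration of the quantities involved, the LRR for one study and the SSW weighted average can be computed as follows (a sketch assuming the usual delta-method variance for the log of a ratio of means and effective sample sizes n1·n2/(n1+n2); not the authors' code):

```python
import math

def log_response_ratio(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """LRR for one study and its delta-method variance."""
    lrr = math.log(mean_t / mean_c)
    var = sd_t ** 2 / (n_t * mean_t ** 2) + sd_c ** 2 / (n_c * mean_c ** 2)
    return lrr, var

def ssw(effects, n_treat, n_ctrl):
    """SSW overall estimate: a weighted average whose weights are
    proportional to the effective sample size n1*n2/(n1+n2) and do
    not depend on estimated variances."""
    w = [n1 * n2 / (n1 + n2) for n1, n2 in zip(n_treat, n_ctrl)]
    return sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
```

The point of SSW is visible in the code: because the weights use only sample sizes, the estimate is immune to the correlation between estimated effects and their estimated variances that biases inverse-variance weighting.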
In random‐effects meta‐analysis the between‐study variance (τ2) has a key role in assessing heterogeneity of study‐level estimates and combining them to estimate an overall effect. For odds ratios the most common methods suffer from bias in estimating τ2 and the overall effect and produce confidence intervals with below‐nominal coverage. An improved approximation to the moments of Cochran's Q statistic, suggested by Kulinskaya and Dollinger (KD), yields new point and interval estimators of τ2 and of the overall log‐odds‐ratio. Another, simpler approach (SSW) uses weights based only on study‐level sample sizes to estimate the overall effect. In extensive simulations we compare our proposed estimators with established point and interval estimators for τ2 and point and interval estimators for the overall log‐odds‐ratio (including the Hartung‐Knapp‐Sidik‐Jonkman interval). Additional simulations included three estimators based on generalized linear mixed models and the Mantel‐Haenszel fixed‐effect estimator. Results of our simulations show that no single point estimator of τ2 can be recommended exclusively, but Mandel‐Paule and KD provide better choices for small and large numbers of studies, respectively. The KD estimator provides reliable coverage of τ2. Inverse‐variance‐weighted estimators of the overall effect are substantially biased, as are the Mantel‐Haenszel odds ratio and the estimators from the generalized linear mixed models. The SSW estimator of the overall effect and a related confidence interval provide reliable point and interval estimation of the overall log‐odds‐ratio.
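The Hartung-Knapp-Sidik-Jonkman interval mentioned above can be sketched as follows (a simplified illustration, not the authors' implementation: the τ2 estimate and the t critical value with k−1 degrees of freedom are supplied by the caller, and the function name is ours):

```python
import math

def hksj_interval(effects, variances, tau2, t_crit):
    """Hartung-Knapp-Sidik-Jonkman confidence interval for the
    overall effect, given a tau^2 estimate and the t_{k-1} critical
    value (e.g. about 4.30 for k = 3 at the 95% level)."""
    w = [1.0 / (v + tau2) for v in variances]
    sw = sum(w)
    mu = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    k = len(effects)
    # HKSJ replaces the usual variance 1/sw of the overall estimate
    # by a weighted mean-squared-deviation estimator.
    q = sum(wi * (yi - mu) ** 2 for wi, yi in zip(w, effects)) / (k - 1)
    se = math.sqrt(q / sw)
    return mu, mu - t_crit * se, mu + t_crit * se
```

Combined with a t rather than a normal quantile, this variance estimator is what gives the HKSJ interval its coverage advantage over the standard Wald-type interval when the number of studies is small.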
For meta‐analysis of studies that report outcomes as binomial proportions, the most popular measure of effect is the odds ratio (OR), usually analyzed as log(OR). Many meta‐analyses use the risk ratio (RR) and its logarithm because of its simpler interpretation. Although log(OR) and log(RR) are both unbounded, use of log(RR) must ensure that estimates are compatible with study‐level event rates in the interval (0, 1). These complications pose a particular challenge for random‐effects models, both in applications and in generating data for simulations. As background, we review the conventional random‐effects model and then binomial generalized linear mixed models (GLMMs) with the logit link function, which do not have these complications. We then focus on log‐binomial models and explore implications of using them; theoretical calculations and simulation show evidence of biases. The main competitors to the binomial GLMMs use the beta‐binomial (BB) distribution, either in BB regression or by maximizing a BB likelihood; a simulation produces mixed results. Two examples and an examination of Cochrane meta‐analyses that used RR suggest bias in the results from the conventional inverse‐variance‐weighted approach. Finally, we comment on other measures of effect that have range restrictions, including risk difference, and outline further research.
The conventional Q statistic, using estimated inverse‐variance (IV) weights, underlies a variety of problems in random‐effects meta‐analysis. In previous work on the standardized mean difference and the log‐odds‐ratio, we found superior performance with an estimator of the overall effect whose weights use only group‐level sample sizes. The Q statistic with those weights has the form proposed by DerSimonian and Kacker. The distribution of this Q and the Q with IV weights must generally be approximated. We investigate approximations for those distributions, as a basis for testing and estimating the between‐study variance (τ2). A simulation study, with mean difference as the effect measure, provides a framework for assessing accuracy of the approximations, level and power of the tests, and bias in estimating τ2. Two examples illustrate estimation of τ2 and the overall mean difference. Use of Q with sample‐size‐based weights and its exact distribution (available for mean difference and evaluated by Farebrother's algorithm) provides precise levels even for very small and unbalanced sample sizes. The corresponding estimator of τ2 is almost unbiased for 10 or more small studies. This performance compares favorably with the extremely liberal behavior of the standard tests of heterogeneity and the largely biased estimators based on inverse‐variance weights.