IMPORTANCE: Diabetic kidney disease is the leading cause of chronic and end-stage kidney disease in the United States and worldwide. Changes in demographics and treatments may affect the prevalence and clinical manifestations of diabetic kidney disease. OBJECTIVE: To characterize the clinical manifestations of kidney disease among US adults with diabetes over time. DESIGN, SETTING, AND PARTICIPANTS: Serial cross-sectional studies of adults aged 20 years or older with diabetes mellitus participating in National Health and Nutrition Examination Surveys from 1988 through 2014. EXPOSURES: Diabetes was defined as hemoglobin A1c of 6.5% or greater or use of glucose-lowering medications. MAIN OUTCOMES AND MEASURES: Albuminuria (urine albumin-to-creatinine ratio ≥30 mg/g), macroalbuminuria (urine albumin-to-creatinine ratio ≥300 mg/g), reduced estimated glomerular filtration rate (eGFR <60 mL/min/1.73 m2), and severely reduced eGFR (<30 mL/min/1.73 m2), incorporating data on biological variability to estimate the prevalence of persistent abnormalities. RESULTS: There were 6251 adults with diabetes included (1431 from 1988-1994, 1443 from 1999-2004, 1280 from 2005-2008, and 2097 from 2009-2014). The prevalence of any diabetic kidney disease, defined as persistent albuminuria, persistent reduced eGFR, or both, did not significantly change over time from 28.4% (95% CI, 23.8%-32.9%) in 1988-1994 to 26.2% (95% CI, 22.6%-29.9%) in 2009-2014 (prevalence ratio, 0.95 [95% CI, 0.86-1.06], adjusting for age, sex, and race/ethnicity; P = .39 for trend). However, the prevalence of albuminuria decreased progressively over time from 20.8% (95% CI, 16.3%-25.3%) in 1988-1994 to 15.9% (95% CI, 12.7%-19.0%) in 2009-2014 (adjusted prevalence ratio, 0.76 [95% CI, 0.65-0.89]; P < .001 for trend).
In contrast, the prevalence of reduced eGFR increased from 9.2% (95% CI, 6.2%-12.2%) in 1988-1994 to 14.1% (95% CI, 11.3%-17.0%) in 2009-2014 (adjusted prevalence ratio, 1.61 [95% CI, 1.33-1.95] comparing 2009-2014 with 1988-1994; P < .001 for trend), with a similar pattern for severely reduced eGFR (adjusted prevalence ratio, 2.86 [95% CI, 1.38-5.91]; P = .004 for trend). Significant heterogeneity in the temporal trend for albuminuria was noted by age (P = .049 for interaction) and race/ethnicity (P = .007 for interaction), with a decreasing prevalence of albuminuria observed only among adults younger than 65 years and non-Hispanic whites, whereas the prevalence of reduced eGFR increased without significant differences by age or race/ethnicity. In 2009-2014, approximately 8.2 million adults with diabetes (95% CI, 6.5-9.9 million adults) had albuminuria, reduced eGFR, or both. CONCLUSIONS AND RELEVANCE: Among US adults with diabetes from 1988 to 2014, the overall prevalence of diabetic kidney disease did not change significantly, whereas the prevalence of albuminuria declined and the prevalence of reduced eGFR increased.
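The outcome definitions in this abstract can be sketched as a small classifier. This is a minimal illustration (function names are mine): it labels a single measurement, whereas the study incorporated biological variability to estimate persistent abnormalities.

```python
def classify_kidney_disease(uacr_mg_g, egfr):
    """Apply the abstract's cutoffs to one measurement.

    uacr_mg_g: urine albumin-to-creatinine ratio (mg/g)
    egfr:      estimated GFR (mL/min/1.73 m2)
    """
    return {
        "albuminuria": uacr_mg_g >= 30,            # >= 30 mg/g
        "macroalbuminuria": uacr_mg_g >= 300,      # >= 300 mg/g
        "reduced_egfr": egfr < 60,                 # < 60 mL/min/1.73 m2
        "severely_reduced_egfr": egfr < 30,        # < 30 mL/min/1.73 m2
    }

def any_dkd(uacr_mg_g, egfr):
    """'Any diabetic kidney disease' = albuminuria, reduced eGFR, or both."""
    m = classify_kidney_disease(uacr_mg_g, egfr)
    return m["albuminuria"] or m["reduced_egfr"]
```

For example, a person with a urine ACR of 45 mg/g and an eGFR of 90 meets the albuminuria criterion and therefore the "any DKD" definition, while an ACR of 10 mg/g with an eGFR of 75 meets neither.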
Many medical decisions involve the use of dynamic information collected on individual patients toward predicting likely transitions in their future health status. If accurate predictions are developed, then a prognostic model can identify patients at greatest risk for future adverse events and may be used clinically to define populations appropriate for targeted intervention. In practice, a prognostic model is often used to guide decisions at multiple time points over the course of disease, and classification performance (i.e., sensitivity and specificity) for distinguishing high-risk v. low-risk individuals may vary over time as an individual’s disease status and prognostic information change. In this tutorial, we detail contemporary statistical methods that can characterize the time-varying accuracy of prognostic survival models when used for dynamic decision making. Although statistical methods for evaluating prognostic models with simple binary outcomes are well established, methods appropriate for survival outcomes are less well known and require time-dependent extensions of sensitivity and specificity to fully characterize longitudinal biomarkers or models. The methods we review are particularly important in that they allow for appropriate handling of censored outcomes commonly encountered with event time data. We highlight the importance of determining whether clinical interest is in predicting cumulative (or prevalent) cases over a fixed future time interval v. predicting incident cases over a range of follow-up times and whether patient information is static or updated over time. We discuss implementation of time-dependent receiver operating characteristic approaches using relevant R statistical software packages. The statistical summaries are illustrated using a liver prognostic model to guide transplantation in primary biliary cirrhosis.
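The cumulative/dynamic notion of accuracy described here can be written down directly. The sketch below (function name is mine) computes the defining probabilities on fully observed event times; the censoring-aware estimators the tutorial reviews, such as those in the R packages it discusses, are substantially more involved.

```python
def cumulative_dynamic_accuracy(markers, times, cutoff, horizon):
    """Naive cumulative/dynamic sensitivity and specificity at `horizon`,
    assuming every event time is observed (no censoring).

    Cumulative sensitivity: P(M > cutoff | T <= horizon)
      -- cases are subjects who have the event by the horizon.
    Dynamic specificity:    P(M <= cutoff | T > horizon)
      -- controls are subjects still event-free at the horizon.
    """
    cases = [m for m, t in zip(markers, times) if t <= horizon]
    controls = [m for m, t in zip(markers, times) if t > horizon]
    sens = sum(m > cutoff for m in cases) / len(cases) if cases else float("nan")
    spec = sum(m <= cutoff for m in controls) / len(controls) if controls else float("nan")
    return sens, spec
```

Sweeping `cutoff` over the observed marker values traces out the time-dependent ROC curve at that horizon; with censored data, the case and control proportions must instead be estimated, e.g., from the survival function.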
Mobile apps for mental health have the potential to overcome access barriers to mental health care, but there is little information on whether patients use the interventions as intended and the impact they have on mental health outcomes.
The objective of our study was to document and compare use patterns and clinical outcomes across 3 different self-guided mobile apps for depression in the United States.
Participants were recruited through Web-based advertisements and social media and were randomly assigned to 1 of 3 mood apps. Treatment and assessment were conducted remotely on each participant's smartphone or tablet with minimal contact with study staff. We enrolled 626 English-speaking adults (≥18 years old) with mild to moderate depression as determined by a 9-item Patient Health Questionnaire (PHQ-9) score ≥5, or if their score on item 10 was ≥2. The apps were (1) Project: EVO, a cognitive training app theorized to mitigate depressive symptoms by improving cognitive control, (2) iPST, an app based on an evidence-based psychotherapy for depression, and (3) Health Tips, a treatment control. Outcomes were scores on the PHQ-9 and the Sheehan Disability Scale. Adherence to treatment was measured as number of times participants opened and used the apps as instructed.
We randomly assigned 211 participants to iPST, 209 to Project: EVO, and 206 to Health Tips. Among the participants, 77.0% (482/626) had a PHQ-9 score >10 (moderately depressed). Among the participants assigned to the 2 active apps, 57.9% (243/420) never downloaded their assigned intervention app; they did not differ demographically from those who did. Differential treatment effects were present in participants with baseline PHQ-9 score >10, with the cognitive training and problem-solving apps resulting in greater effects on mood than the information control app (χ²₂=6.46, P=.04).
Mobile apps for depression appear to have their greatest impact on people with more moderate levels of depression. In particular, an app that is designed to engage cognitive correlates of depression had the strongest effect on depressed mood in this sample. This study suggests that mobile apps reach many people and are useful for more moderate levels of depression.
Clinicaltrials.gov NCT00540865; https://www.clinicaltrials.gov/ct2/show/NCT00540865 (Archived by WebCite at http://www.webcitation.org/6mj8IPqQr).
Digital technologies such as smartphones are transforming the way scientists conduct biomedical research. Several remotely conducted studies have recruited thousands of participants over a span of a few months, allowing researchers to collect real-world data at scale and at a fraction of the cost of traditional research. Unfortunately, remote studies have been hampered by substantial participant attrition, calling into question the representativeness of the collected data, including generalizability of outcomes. We report the findings regarding recruitment and retention from eight remote digital health studies conducted between 2014 and 2019 that provided individual-level study-app usage data from more than 100,000 participants completing nearly 3.5 million remote health evaluations over cumulative participation of 850,000 days. Median participant retention across the eight studies varied widely, from 2 to 26 days (median across all studies = 5.5 days). Survival analysis revealed several factors significantly associated with increased participant retention time, including (i) referral by a clinician to the study (increase of 40 days in median retention time); (ii) compensation for participation (increase of 22 days, 1 study); (iii) having the clinical condition of interest in the study (increase of 7 days compared with controls); and (iv) older age (increase of 4 days). Additionally, four distinct patterns of daily app usage behavior were identified by unsupervised clustering, which were also associated with participant demographics. Most studies were not able to recruit a sample that was representative of the race/ethnicity or geographical diversity of the US. Together these findings can help inform recruitment and retention strategies to enable equitable participation of populations in future digital health research.
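The retention summaries reported here come from survival analysis of right-censored follow-up times. As a minimal sketch (function name is mine, and the study's regression analyses of retention factors would require a model such as Cox's), a median retention time can be read off a Kaplan-Meier curve:

```python
def km_median(durations, observed):
    """Kaplan-Meier median survival time from right-censored data.

    durations: follow-up time for each participant (e.g., days of app use)
    observed:  True if drop-out was observed (event), False if censored
    Returns the earliest time at which the KM survival estimate falls to
    <= 0.5, or None if it never does.
    """
    s = 1.0  # running survival estimate
    event_times = sorted({t for t, e in zip(durations, observed) if e})
    for t in event_times:
        at_risk = sum(1 for d in durations if d >= t)          # still in study just before t
        events = sum(1 for d, e in zip(durations, observed) if e and d == t)
        s *= 1 - events / at_risk                              # KM product-limit step
        if s <= 0.5:
            return t
    return None
```

Censored participants (e.g., those still using the app when a study ended) contribute to the risk sets without counting as drop-outs, which is why a naive median of the raw durations would be biased downward.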
This JAMA Guide to Statistics and Methods article explains effect score analyses, an approach for evaluating the heterogeneity of treatment effects, and examines its use in a study of oxygen-saturation targets in critically ill patients.
The stepped wedge cluster randomized design has received increasing attention in pragmatic clinical trials and implementation science research. The key feature of the design is the unidirectional crossover of clusters from the control to intervention conditions on a staggered schedule, which induces confounding of the intervention effect by time. The stepped wedge design first appeared in the Gambia hepatitis study in the 1980s. However, the statistical model used for the design and analysis was not formally introduced until 2007 in an article by Hussey and Hughes. Since then, a variety of mixed-effects model extensions have been proposed for the design and analysis of these trials. In this article, we explore these extensions under a unified perspective. We provide a general model representation and regard various model extensions as alternative ways to characterize the secular trend, intervention effect, as well as sources of heterogeneity. We review the key model ingredients and clarify their implications for the design and analysis. The article serves as an entry point to the evolving statistical literature on stepped wedge designs.
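The staggered, unidirectional crossover described here is fully captured by a cluster-by-period treatment indicator matrix, which is the intervention term in the Hussey-Hughes model. A small sketch of the standard schedule (function name is mine, and designs vary, e.g., in periods per step or clusters per sequence):

```python
def stepped_wedge_schedule(n_sequences):
    """Treatment indicator matrix X for a standard stepped wedge design.

    One sequence crosses over to intervention per period, giving
    n_sequences + 1 periods: an all-control first period, then one
    crossover per period, ending with all sequences on intervention.
    X[s][j] = 1 if sequence s receives the intervention in period j.
    """
    n_periods = n_sequences + 1
    return [
        [1 if period >= seq + 1 else 0 for period in range(n_periods)]
        for seq in range(n_sequences)
    ]
```

Because every row switches from 0 to 1 and never back, treatment is confounded with calendar time, which is why the analysis models reviewed in the article must include a secular-trend term.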
Although many individuals with chronic pain use analgesics, the methods used in many randomized controlled trials (RCTs) do not sufficiently account for confounding by differential post-randomization analgesic use. This may lead to underestimation of average treatment effects and diminished power. We introduce (1) a new measure-the Numeric Rating Scale of Underlying Pain without concurrent Analgesic use (NRS-UP(A))-which can shift the estimand of interest in an RCT to target effects of a treatment on pain intensity in the hypothetical situation where analgesic use was not occurring at the time of outcome assessment; and (2) a new pain construct-an individual's perceived effect of analgesic use on pain intensity (EA). The NRS-UP(A) may be used as a secondary outcome in RCTs of point treatments or nonpharmacologic treatments. Among 662 adults with back pain in primary care, participants' mean value of the NRS-UP(A) among those using analgesics was 1.2 NRS points higher than their value on the conventional pain intensity NRS, reflecting a mean EA value of -1.2 NRS points and a perceived beneficial effect of analgesics. More negative values of EA (ie, greater perceived benefit) were associated with a greater number of analgesics used but not with pain intensity, analgesic type, or opioid dose. The NRS-UP(A) and EA were significantly associated with future analgesic use 6 months later, but the conventional pain NRS was not. Future research is needed to determine whether the NRS-UP(A), used as a secondary outcome, may allow pain RCTs to target alternative estimands with clinical relevance.
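The arithmetic relating the two scales follows from the numbers in the abstract: EA is the conventional NRS minus the NRS-UP(A), so an underlying-pain rating 1.2 points above the conventional rating yields EA = -1.2 (a perceived benefit). A tiny sketch (function name is mine; the sign convention is inferred from the abstract's example):

```python
def perceived_analgesic_effect(nrs_conventional, nrs_up_a):
    """EA: perceived effect of analgesic use on pain intensity.

    nrs_conventional: usual 0-10 pain intensity rating (with analgesics)
    nrs_up_a:         rating of underlying pain without analgesic use
    Negative EA means the respondent believes analgesics lower their pain.
    """
    return nrs_conventional - nrs_up_a
```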
The predictive accuracy of a survival model can be summarized using extensions of the proportion of variation explained by the model, or R2, commonly used for continuous response models, or using extensions of sensitivity and specificity, which are commonly used for binary response models. In this article we propose new time-dependent accuracy summaries based on time-specific versions of sensitivity and specificity calculated over risk sets. We connect the accuracy summaries to a previously proposed global concordance measure, which is a variant of Kendall's tau. In addition, we show how standard Cox regression output can be used to obtain estimates of time-dependent sensitivity and specificity, and time-dependent receiver operating characteristic (ROC) curves. Semiparametric estimation methods appropriate for both proportional and nonproportional hazards data are introduced, evaluated in simulations, and illustrated using two familiar survival data sets.
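The risk-set (incident/dynamic) definitions in this abstract condition cases on having the event exactly at time t, in contrast to cumulative definitions that pool all events up to t. A naive sketch of the defining probabilities (function name is mine; it assumes discrete, fully observed event times, whereas the article's semiparametric estimators handle censoring and continuous time):

```python
def incident_dynamic_accuracy(markers, times, cutoff, t):
    """Naive incident/dynamic sensitivity and specificity at event time t.

    Incident sensitivity: P(M > cutoff | T == t)
      -- cases are subjects in the risk set who fail at t.
    Dynamic specificity:  P(M <= cutoff | T > t)
      -- controls are subjects still event-free beyond t.
    """
    cases = [m for m, u in zip(markers, times) if u == t]
    controls = [m for m, u in zip(markers, times) if u > t]
    sens = sum(m > cutoff for m in cases) / len(cases) if cases else float("nan")
    spec = sum(m <= cutoff for m in controls) / len(controls) if controls else float("nan")
    return sens, spec
```

Averaging the resulting time-specific ROC areas over the event times, weighted appropriately, recovers a concordance summary of the kind the article connects to Kendall's tau.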
Stepped wedge design is a popular research design that enables a rigorous evaluation of candidate interventions by using a staggered cluster randomization strategy. While analytical methods have been developed for designing stepped wedge trials, the prior focus has been solely on testing for the average treatment effect. With a growing interest in formal evaluation of the heterogeneity of treatment effects across patient subpopulations, trial planning efforts need appropriate methods to accurately identify sample sizes or design configurations that can generate evidence for both the average treatment effect and variations in subgroup treatment effects. To fill that important gap, this article derives novel variance formulas for confirmatory analyses of treatment effect heterogeneity that are applicable to both cross‐sectional and closed‐cohort stepped wedge designs. We additionally point out that the same framework can be used for more efficient average treatment effect analyses via covariate adjustment, and allows familiar power formulas for average treatment effect analyses to be used. Our results further shed light on optimal design allocations of clusters to maximize the weighted precision for assessing both the average and heterogeneous treatment effects. We apply the new methods to the Lumbar Imaging with Reporting of Epidemiology Trial, and carry out a simulation study to validate our new methods.
CONTEXT Diabetes is the leading cause of kidney disease in the developed world. Over time, the prevalence of diabetic kidney disease (DKD) may increase due to the expanding size of the diabetes population or decrease due to the implementation of diabetes therapies. OBJECTIVE To define temporal changes in DKD prevalence in the United States. DESIGN, SETTING, AND PARTICIPANTS
Cross-sectional analyses of the Third National Health and Nutrition Examination Survey (NHANES III) from 1988-1994 (N = 15 073), NHANES 1999-2004 (N = 13 045), and NHANES 2005-2008 (N = 9588). Participants with diabetes were defined by levels of hemoglobin A1c of 6.5% or greater, use of glucose-lowering medications, or both (n = 1431 in NHANES III; n = 1443 in NHANES 1999-2004; n = 1280 in NHANES 2005-2008).
MAIN OUTCOME MEASURES
Diabetic kidney disease was defined as diabetes with albuminuria (ratio of urine albumin to creatinine ≥30 mg/g), impaired glomerular filtration rate (<60 mL/min/1.73 m2 estimated using the Chronic Kidney Disease Epidemiology Collaboration formula), or both. Prevalence of albuminuria was adjusted to estimate persistent albuminuria.
RESULTS
The prevalence of DKD in the US population was 2.2% (95% confidence interval [CI], 1.8%-2.6%) in NHANES III, 2.8% (95% CI, 2.4%-3.1%) in NHANES 1999-2004, and 3.3% (95% CI, 2.8%-3.7%) in NHANES 2005-2008 (P <.001 for trend). The prevalence of DKD increased in direct proportion to the prevalence of diabetes, without a change in the prevalence of DKD among those with diabetes. Among persons with diabetes, use of glucose-lowering medications increased from 56.2% (95% CI, 52.1%-60.4%) in NHANES III to 74.2% (95% CI, 70.4%-78.0%) in NHANES 2005-2008 (P <.001); use of renin-angiotensin-aldosterone system inhibitors increased from 11.2% (95% CI, 9.0%-13.4%) to 40.6% (95% CI, 37.2%-43.9%), respectively (P <.001); the prevalence of impaired glomerular filtration rate increased from 14.9% (95% CI, 12.1%-17.8%) to 17.7% (95% CI, 15.2%-20.2%), respectively (P = .03); and the prevalence of albuminuria decreased from 27.3% (95% CI, 22.0%-32.7%) to 23.7% (95% CI, 19.3%-28.0%), respectively, but this was not statistically significant (P = .07).
CONCLUSIONS Prevalence of DKD in the United States increased from 1988 to 2008 in proportion to the prevalence of diabetes. Among persons with diabetes, prevalence of DKD was stable despite increased use of glucose-lowering medications and renin-angiotensin-aldosterone system inhibitors.