Mobile apps for mental health have the potential to overcome access barriers to mental health care, but there is little information on whether patients use the interventions as intended and the impact they have on mental health outcomes.
The objective of our study was to document and compare use patterns and clinical outcomes across the United States between 3 different self-guided mobile apps for depression.
Participants were recruited through Web-based advertisements and social media and were randomly assigned to 1 of 3 mood apps. Treatment and assessment were conducted remotely on each participant's smartphone or tablet with minimal contact with study staff. We enrolled 626 English-speaking adults (≥18 years old) with mild to moderate depression as determined by a 9-item Patient Health Questionnaire (PHQ-9) score ≥5, or if their score on item 10 was ≥2. The apps were (1) Project: EVO, a cognitive training app theorized to mitigate depressive symptoms by improving cognitive control, (2) iPST, an app based on an evidence-based psychotherapy for depression, and (3) Health Tips, a treatment control. Outcomes were scores on the PHQ-9 and the Sheehan Disability Scale. Adherence to treatment was measured as number of times participants opened and used the apps as instructed.
We randomly assigned 211 participants to iPST, 209 to Project: EVO, and 206 to Health Tips. Among the participants, 77.0% (482/626) had a PHQ-9 score >10 (moderately depressed). Of the participants assigned to the 2 active apps, 57.9% (243/420) never downloaded their assigned intervention app, although they did not differ demographically from those who did. Differential treatment effects were present in participants with a baseline PHQ-9 score >10, with the cognitive training and problem-solving apps resulting in greater effects on mood than the information control app (χ²₂=6.46, P=.04).
Mobile apps for depression appear to have their greatest impact on people with more moderate levels of depression. In particular, an app that is designed to engage cognitive correlates of depression had the strongest effect on depressed mood in this sample. This study suggests that mobile apps reach many people and are useful for more moderate levels of depression.
Clinicaltrials.gov NCT00540865; https://www.clinicaltrials.gov/ct2/show/NCT00540865 (Archived by WebCite at http://www.webcitation.org/6mj8IPqQr).
The predictive accuracy of a survival model can be summarized using extensions of the proportion of variation explained by the model, or R², commonly used for continuous response models, or using extensions of sensitivity and specificity, which are commonly used for binary response models. In this article we propose new time-dependent accuracy summaries based on time-specific versions of sensitivity and specificity calculated over risk sets. We connect the accuracy summaries to a previously proposed global concordance measure, which is a variant of Kendall's tau. In addition, we show how standard Cox regression output can be used to obtain estimates of time-dependent sensitivity and specificity, and time-dependent receiver operating characteristic (ROC) curves. Semiparametric estimation methods appropriate for both proportional and nonproportional hazards data are introduced, evaluated in simulations, and illustrated using two familiar survival data sets.
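To make the risk-set construction concrete, here is a minimal Python sketch of the incident/dynamic definitions of time-dependent sensitivity and specificity. The function name and the no-censoring simplification are ours, not from the article; the article's semiparametric estimators additionally accommodate censored observations.

```python
import numpy as np

def incident_dynamic_ss(marker, event_time, t, c):
    """Incident sensitivity / dynamic specificity at time t, cutoff c.

    Incident cases:   subjects whose event occurs exactly at t (T_i == t).
    Dynamic controls: subjects still event-free beyond t (T_i > t),
                      i.e. the survivors in the risk set at t.
    Simplification (ours): event times are assumed fully observed
    (no censoring), so empirical proportions suffice.
    """
    marker = np.asarray(marker, dtype=float)
    event_time = np.asarray(event_time, dtype=float)

    cases = event_time == t      # incident cases at t
    controls = event_time > t    # dynamic controls from the risk set

    sens = np.mean(marker[cases] > c) if cases.any() else np.nan
    spec = np.mean(marker[controls] <= c) if controls.any() else np.nan
    return sens, spec
```

Sweeping the cutoff `c` over the observed marker values at a fixed `t` traces out one time-specific ROC curve; repeating over `t` gives the time-dependent family the abstract describes.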
IMPORTANCE: Diabetic kidney disease is the leading cause of chronic and end-stage kidney disease in the United States and worldwide. Changes in demographics and treatments may affect the prevalence and clinical manifestations of diabetic kidney disease. OBJECTIVE: To characterize the clinical manifestations of kidney disease among US adults with diabetes over time. DESIGN, SETTING, AND PARTICIPANTS: Serial cross-sectional studies of adults aged 20 years or older with diabetes mellitus participating in National Health and Nutrition Examination Surveys from 1988 through 2014. EXPOSURES: Diabetes was defined as hemoglobin A1c greater than 6.5% or use of glucose-lowering medications. MAIN OUTCOMES AND MEASURES: Albuminuria (urine albumin-to-creatinine ratio ≥30 mg/g), macroalbuminuria (urine albumin-to-creatinine ratio ≥300 mg/g), reduced estimated glomerular filtration rate (eGFR <60 mL/min/1.73 m²), and severely reduced eGFR (<30 mL/min/1.73 m²), incorporating data on biological variability to estimate the prevalence of persistent abnormalities. RESULTS: There were 6251 adults with diabetes included (1431 from 1988-1994, 1443 from 1999-2004, 1280 from 2005-2008, and 2097 from 2009-2014). The prevalence of any diabetic kidney disease, defined as persistent albuminuria, persistent reduced eGFR, or both, did not significantly change over time, from 28.4% (95% CI, 23.8%-32.9%) in 1988-1994 to 26.2% (95% CI, 22.6%-29.9%) in 2009-2014 (prevalence ratio, 0.95 [95% CI, 0.86-1.06], adjusted for age, sex, and race/ethnicity; P = .39 for trend). However, the prevalence of albuminuria decreased progressively over time, from 20.8% (95% CI, 16.3%-25.3%) in 1988-1994 to 15.9% (95% CI, 12.7%-19.0%) in 2009-2014 (adjusted prevalence ratio, 0.76 [95% CI, 0.65-0.89]; P < .001 for trend).
In contrast, the prevalence of reduced eGFR increased from 9.2% (95% CI, 6.2%-12.2%) in 1988-1994 to 14.1% (95% CI, 11.3%-17.0%) in 2009-2014 (adjusted prevalence ratio, 1.61 [95% CI, 1.33-1.95] comparing 2009-2014 with 1988-1994; P < .001 for trend), with a similar pattern for severely reduced eGFR (adjusted prevalence ratio, 2.86 [95% CI, 1.38-5.91]; P = .004 for trend). Significant heterogeneity in the temporal trend for albuminuria was noted by age (P = .049 for interaction) and race/ethnicity (P = .007 for interaction), with a decreasing prevalence of albuminuria observed only among adults younger than 65 years and non-Hispanic whites, whereas the prevalence of reduced eGFR increased without significant differences by age or race/ethnicity. In 2009-2014, approximately 8.2 million adults with diabetes (95% CI, 6.5-9.9 million adults) had albuminuria, reduced eGFR, or both. CONCLUSIONS AND RELEVANCE: Among US adults with diabetes from 1988 to 2014, the overall prevalence of diabetic kidney disease did not change significantly, whereas the prevalence of albuminuria declined and the prevalence of reduced eGFR increased.
Many medical decisions involve the use of dynamic information collected on individual patients toward predicting likely transitions in their future health status. If accurate predictions are developed, then a prognostic model can identify patients at greatest risk for future adverse events and may be used clinically to define populations appropriate for targeted intervention. In practice, a prognostic model is often used to guide decisions at multiple time points over the course of disease, and classification performance (i.e., sensitivity and specificity) for distinguishing high-risk v. low-risk individuals may vary over time as an individual’s disease status and prognostic information change. In this tutorial, we detail contemporary statistical methods that can characterize the time-varying accuracy of prognostic survival models when used for dynamic decision making. Although statistical methods for evaluating prognostic models with simple binary outcomes are well established, methods appropriate for survival outcomes are less well known and require time-dependent extensions of sensitivity and specificity to fully characterize longitudinal biomarkers or models. The methods we review are particularly important in that they allow for appropriate handling of censored outcomes commonly encountered with event time data. We highlight the importance of determining whether clinical interest is in predicting cumulative (or prevalent) cases over a fixed future time interval v. predicting incident cases over a range of follow-up times and whether patient information is static or updated over time. We discuss implementation of time-dependent receiver operating characteristic approaches using relevant R statistical software packages. The statistical summaries are illustrated using a liver prognostic model to guide transplantation in primary biliary cirrhosis.
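The cumulative/dynamic distinction highlighted above differs from the incident/dynamic one: cumulative cases are all events occurring by a fixed horizon t, and dynamic controls are subjects still event-free at t. A minimal Python sketch of the resulting AUC follows (our naming; censoring is ignored here for clarity, whereas the tutorial's R packages handle it properly, e.g. via weighting):

```python
import numpy as np

def cumulative_dynamic_auc(marker, event_time, t):
    """Cumulative/dynamic AUC at horizon t, ignoring censoring (simplification).

    Cumulative cases:  event by time t (T_i <= t).
    Dynamic controls:  event-free at t (T_i > t).
    AUC = P(marker_case > marker_control), with ties counted as 1/2.
    """
    marker = np.asarray(marker, dtype=float)
    event_time = np.asarray(event_time, dtype=float)

    case_m = marker[event_time <= t]
    ctrl_m = marker[event_time > t]
    if case_m.size == 0 or ctrl_m.size == 0:
        return float("nan")

    # Concordance over all case/control pairs via broadcasting.
    diff = case_m[:, None] - ctrl_m[None, :]
    return float((np.sum(diff > 0) + 0.5 * np.sum(diff == 0)) / diff.size)
```

With no censoring this reduces to an ordinary binary ROC analysis at horizon t, which is exactly why the cumulative/dynamic framing is the natural choice for fixed-interval risk prediction.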
Digital technologies such as smartphones are transforming the way scientists conduct biomedical research. Several remotely conducted studies have recruited thousands of participants over a span of a few months, allowing researchers to collect real-world data at scale and at a fraction of the cost of traditional research. Unfortunately, remote studies have been hampered by substantial participant attrition, calling into question the representativeness of the collected data, including the generalizability of outcomes. We report findings on recruitment and retention from eight remote digital health studies conducted between 2014 and 2019 that provided individual-level study-app usage data from more than 100,000 participants who completed nearly 3.5 million remote health evaluations over 850,000 cumulative days of participation. Median participant retention across the eight studies varied widely, from 2 to 26 days (median across all studies = 5.5 days). Survival analysis revealed several factors significantly associated with increased participant retention time, including (i) referral to the study by a clinician (increase of 40 days in median retention time); (ii) compensation for participation (increase of 22 days, 1 study); (iii) having the clinical condition of interest in the study (increase of 7 days compared with controls); and (iv) older age (increase of 4 days). Additionally, four distinct patterns of daily app usage behavior were identified by unsupervised clustering and were also associated with participant demographics. Most studies were not able to recruit a sample representative of the race/ethnicity or geographical diversity of the US. Together, these findings can help inform recruitment and retention strategies to enable equitable participation of populations in future digital health research.
This JAMA Guide to Statistics and Methods article explains effect score analyses, an approach for evaluating the heterogeneity of treatment effects, and examines its use in a study of oxygen-saturation targets in critically ill patients.
In this randomized trial involving patients with osteoporotic vertebral compression fractures, patients who underwent vertebroplasty had improvements in pain and disability measures that were similar to those in patients who underwent a sham procedure.
Spontaneous vertebral fractures are associated with pain, disability, and death in patients with osteoporosis. Percutaneous vertebroplasty, the injection of medical cement, or polymethylmethacrylate (PMMA), into the fractured vertebral body has gained widespread acceptance as an effective method of pain relief and has become routine therapy for osteoporotic vertebral fractures. Guidelines recommend vertebroplasty for fractures that have not responded to medical treatment.
Typically, the duration of such fractures ranges from several weeks to several months or longer for fractures that have not healed.
Numerous case series and several small, unblinded, nonrandomized, controlled studies have suggested the effectiveness of vertebroplasty in relieving . . .
Diabetes is an important cause of CKD. However, among people with diabetes, it is unclear to what extent CKD is attributable to diabetes itself versus comorbid conditions, such as advanced age and hypertension. We examined associations of diabetes with clinical manifestations of CKD independent of age and BP and the extent to which diabetes contributes to the overall prevalence of CKD in the United States.
We performed a cross-sectional study of 15,675 participants in the National Health and Nutrition Examination Surveys from 2009 to 2014. Diabetes was defined by use of glucose-lowering medications or hemoglobin A1c ≥6.5%. eGFR was calculated using the CKD Epidemiology Collaboration formula, and albumin-to-creatinine ratio was measured in single-void urine samples. We calculated the prevalence of CKD manifestations by diabetes status as well as prevalence ratios, differences in prevalence, and prevalence attributable to diabetes using binomial and linear regression, incorporating data from repeat eGFR and urine albumin-to-creatinine ratio measurements to estimate persistent disease.
For participants with diabetes (n=2279) versus those without diabetes (n=13,396), the estimated prevalence of any CKD (eGFR <60 ml/min per 1.73 m², albumin-to-creatinine ratio ≥30 mg/g, or both) was 25% versus 5.3%; albumin-to-creatinine ratio ≥30 mg/g was 16% versus 3.0%; albumin-to-creatinine ratio ≥300 mg/g was 4.6% versus 0.3%; eGFR <60 ml/min per 1.73 m² was 12% versus 2.5%; and eGFR <30 ml/min per 1.73 m² was 2.4% versus 0.4% (each P<0.001). Adjusting for demographics and several aspects of BP, the prevalence differences were 14.6% (P<0.001), 10.8% (P<0.001), 4.5% (P<0.001), 6.5% (P<0.001), and 1.8% (P=0.004), respectively. Approximately 24% (95% confidence interval, 19% to 29%) of CKD among all United States adults was attributable to diabetes after adjusting for demographics.
Diabetes is strongly associated with both albuminuria and reduced GFR independent of demographics and hypertension, contributing substantially to the burden of CKD in the United States.
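The attributable-prevalence idea in this abstract can be illustrated with a minimal, unadjusted sketch. The function below is ours and uses the standard counterfactual definition; the paper's 24% estimate additionally adjusts for demographics via regression, so this crude calculation will not reproduce that figure.

```python
def attributable_fraction(p_exposed, p_unexposed, prev_exposure):
    """Population attributable fraction of disease due to an exposure.

    p_exposed     : disease prevalence among the exposed (e.g., with diabetes)
    p_unexposed   : disease prevalence among the unexposed
    prev_exposure : proportion of the population that is exposed

    Counterfactual: if the exposure were removed, everyone would carry
    the unexposed prevalence; the fraction of overall disease prevalence
    above that baseline is attributed to the exposure.
    """
    p_overall = prev_exposure * p_exposed + (1 - prev_exposure) * p_unexposed
    return (p_overall - p_unexposed) / p_overall
```

For example, if 50% of a population is exposed, with disease prevalences of 20% (exposed) and 10% (unexposed), one third of the disease burden is attributable to the exposure under this unadjusted definition.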
The stepped wedge cluster randomized design has received increasing attention in pragmatic clinical trials and implementation science research. The key feature of the design is the unidirectional crossover of clusters from the control to the intervention condition on a staggered schedule, which induces confounding of the intervention effect by time. The stepped wedge design first appeared in the Gambia hepatitis study in the 1980s, but the statistical model used for its design and analysis was not formally introduced until 2007, in an article by Hussey and Hughes. Since then, a variety of mixed-effects model extensions have been proposed for the design and analysis of these trials. In this article, we explore these extensions from a unified perspective. We provide a general model representation and regard the various model extensions as alternative ways to characterize the secular trend, the intervention effect, and the sources of heterogeneity. We review the key model ingredients and clarify their implications for design and analysis. The article serves as an entry point to the evolving statistical literature on stepped wedge designs.
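The Hussey and Hughes model referenced above is commonly written (in our notation) as a linear mixed model for a cross-sectional stepped wedge design:

```latex
% Outcome for individual k in cluster i during period j
Y_{ijk} = \mu + \beta_j + \theta X_{ij} + \alpha_i + e_{ijk},
\qquad \alpha_i \sim N(0, \tau^2), \quad e_{ijk} \sim N(0, \sigma^2)
```

Here $\beta_j$ are fixed period effects capturing the secular trend, $X_{ij} \in \{0,1\}$ indicates whether cluster $i$ has crossed over to the intervention by period $j$, $\theta$ is the intervention effect, and $\alpha_i$ is a random cluster effect, the single source of heterogeneity in the original model. The extensions surveyed in the article enrich exactly these ingredients, for example with random intervention effects or cluster-by-period effects.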
Although many individuals with chronic pain use analgesics, the methods used in many randomized controlled trials (RCTs) do not sufficiently account for confounding by differential post-randomization analgesic use. This may lead to underestimation of average treatment effects and diminished power. We introduce (1) a new measure, the Numeric Rating Scale of Underlying Pain without concurrent Analgesic use (NRS-UP(A)), which can shift the estimand of interest in an RCT to target the effects of a treatment on pain intensity in the hypothetical situation where analgesic use was not occurring at the time of outcome assessment; and (2) a new pain construct, an individual's perceived effect of analgesic use on pain intensity (EA). The NRS-UP(A) may be used as a secondary outcome in RCTs of point treatments or nonpharmacologic treatments. Among 662 adults with back pain in primary care, the mean value of the NRS-UP(A) among participants using analgesics was 1.2 NRS points higher than their value on the conventional pain intensity NRS, reflecting a mean EA value of -1.2 NRS points and a perceived beneficial effect of analgesics. More negative values of EA (ie, greater perceived benefit) were associated with a greater number of analgesics used, but not with pain intensity, analgesic type, or opioid dose. The NRS-UP(A) and EA were significantly associated with future analgesic use 6 months later, but the conventional pain NRS was not. Future research is needed to determine whether the NRS-UP(A), used as a secondary outcome, may allow pain RCTs to target alternative estimands with clinical relevance.
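The relationship between the two scales and EA can be made explicit with a one-line sketch. The sign convention below is inferred from the abstract (an NRS-UP(A) averaging 1.2 points above the conventional NRS corresponds to EA = -1.2), and the function name is ours:

```python
def perceived_analgesic_effect(nrs_conventional, nrs_up_a):
    """EA: perceived effect of analgesic use on pain intensity.

    EA = conventional NRS - NRS-UP(A).
    Negative EA indicates perceived benefit: the respondent believes
    pain would be worse without concurrent analgesic use.
    Sign convention inferred from the abstract, not stated as a formula there.
    """
    return nrs_conventional - nrs_up_a
```

So a patient reporting 5 on the conventional NRS but 6.2 on the NRS-UP(A) has EA = -1.2, a perceived analgesic benefit of 1.2 NRS points.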