Abstract
Objective: The study objective was to evaluate and update the safety data from randomized controlled trials of tumor necrosis factor inhibitors in patients treated for rheumatoid arthritis.
Methods: A systematic literature search was conducted from 1990 to May 2013. All studies included were randomized, double-blind, controlled trials of patients with rheumatoid arthritis that evaluated adalimumab, certolizumab pegol, etanercept, golimumab, or infliximab treatment. Serious adverse events and discontinuation rates were abstracted, and risk estimates were calculated as Peto odds ratios (ORs).
Results: Forty-four randomized controlled trials involving 11,700 subjects receiving tumor necrosis factor inhibitors and 5,901 subjects receiving placebo or traditional disease-modifying antirheumatic drugs were included. Tumor necrosis factor inhibitor treatment as a group was associated with a higher risk of serious infection (OR, 1.42; 95% confidence interval [CI], 1.13-1.78) and of treatment discontinuation due to adverse events (OR, 1.23; 95% CI, 1.06-1.43) compared with placebo and traditional disease-modifying antirheumatic drug treatments. Specifically, patients taking adalimumab, certolizumab pegol, and infliximab had an increased risk of serious infection (OR, 1.69, 1.98, and 1.63, respectively) and an increased risk of discontinuation due to adverse events (OR, 1.38, 1.67, and 2.04, respectively). In contrast, patients taking etanercept had a decreased risk of discontinuation due to adverse events (OR, 0.72; 95% CI, 0.55-0.93). Although ORs for malignancy varied across the different tumor necrosis factor inhibitors, none reached statistical significance.
Conclusions: This updated meta-analysis of the comparative safety of tumor necrosis factor inhibitors suggests a higher risk of serious infection associated with adalimumab, certolizumab pegol, and infliximab, which seems to contribute to higher rates of discontinuation. In contrast, etanercept use showed a lower rate of discontinuation. These data may help guide comparative clinical decision making in the management of rheumatoid arthritis.
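The Peto method used above pools, across trials, the observed-minus-expected event counts and their hypergeometric variances from each 2x2 table. A minimal sketch in Python; the trial counts are invented for illustration, not data from this review:

```python
import math

def peto_or(tables):
    """Pooled Peto odds ratio with a 95% CI.

    Each table is (events_trt, n_trt, events_ctl, n_ctl).
    """
    sum_oe, sum_v = 0.0, 0.0
    for a, n1, c, n2 in tables:
        n = n1 + n2
        m1 = a + c                                   # total events
        m2 = n - m1                                  # total non-events
        e = n1 * m1 / n                              # expected events, treatment arm
        v = m1 * m2 * n1 * n2 / (n ** 2 * (n - 1))   # hypergeometric variance
        sum_oe += a - e
        sum_v += v
    log_or = sum_oe / sum_v                          # Peto log odds ratio
    se = 1.0 / math.sqrt(sum_v)
    ci = (math.exp(log_or - 1.96 * se), math.exp(log_or + 1.96 * se))
    return math.exp(log_or), ci

# Hypothetical 2x2 counts: (events_trt, n_trt, events_ctl, n_ctl)
trials = [(12, 400, 6, 395), (9, 250, 5, 248), (15, 600, 10, 590)]
or_, (lo, hi) = peto_or(trials)
```

The Peto approach is well suited to the rare serious adverse events pooled here, since it behaves better than inverse-variance methods when event counts are low.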
Abstract Models—mathematical frameworks that facilitate estimation of the consequences of health care decisions—have become essential tools for health technology assessment. Evolution of the methods since the first ISPOR Modeling Task Force reported in 2003 has led to a new Task Force, jointly convened with the Society for Medical Decision Making, and this series of seven articles presents the updated recommendations for best practices in conceptualizing models; implementing state-transition approaches, discrete event simulations, or dynamic transmission models; dealing with uncertainty; and validating and reporting models transparently. This overview article introduces the work of the Task Force, provides all the recommendations, and discusses some quandaries that require further elucidation. The audience for these articles includes those who build models, stakeholders who utilize their results, and, indeed, anyone concerned with the use of models to support decision making.
Abstract State-transition modeling is an intuitive, flexible, and transparent approach to computer-based decision-analytic modeling, including both Markov model cohort simulation and individual-based (first-order Monte Carlo) microsimulation. Conceptualizing a decision problem in terms of a set of (health) states and transitions among these states, state-transition modeling is one of the most widespread modeling techniques in clinical decision analysis, health technology assessment, and health-economic evaluation. State-transition models have been used in many different populations and diseases, and their applications range from personalized health care strategies to public health programs. Most frequently, state-transition models are used in the evaluation of risk factor interventions, screening, diagnostic procedures, treatment strategies, and disease management programs. The goal of this article was to provide consensus-based guidelines for the application of state-transition models in the context of health care. We structured the best practice recommendations in the following sections: choice of model type (cohort vs. individual-level model), model structure, model parameters, analysis, reporting, and communication. In each of these sections, we give a brief description, address the issues that are of particular relevance to the application of state-transition models, give specific examples from the literature, and provide best practice recommendations for state-transition modeling. These recommendations are directed both to modelers and to users of modeling results such as clinicians, clinical guideline developers, manufacturers, or policymakers.
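The cohort-simulation variant described above amounts to repeatedly multiplying a cohort-distribution vector by a transition-probability matrix. A minimal three-state sketch; the transition probabilities are hypothetical, not drawn from any study discussed here:

```python
import numpy as np

# States: 0 = Well, 1 = Ill, 2 = Dead (absorbing). Rows sum to 1.
# Hypothetical annual transition probabilities for illustration only.
P = np.array([
    [0.90, 0.07, 0.03],
    [0.00, 0.80, 0.20],
    [0.00, 0.00, 1.00],
])

cohort = np.array([1.0, 0.0, 0.0])   # entire cohort starts in Well
life_years = 0.0
for _ in range(100):                 # cycle until the cohort is (nearly) absorbed
    life_years += cohort[:2].sum()   # person-time alive accrued this cycle
    cohort = cohort @ P              # advance the cohort one cycle

print(round(life_years, 2))          # undiscounted life expectancy in cycles
```

Costs and utilities are attached the same way: a per-state reward vector is dotted with the cohort vector each cycle and accumulated (with discounting if needed).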
Modeling Good Research Practices—Overview
Caro, J. Jaime; Briggs, Andrew H.; Siebert, Uwe, et al.
Medical Decision Making, 09/2012, Volume 32, Issue 5
Journal Article, Peer-reviewed
Objectives
To estimate the cost of dementia and the extra cost of caring for someone with dementia over the cost of caring for someone without dementia.
Design
We developed an evidence‐based mathematical model to simulate disease progression for newly diagnosed individuals with dementia. Data‐driven trajectories of cognition, function, and behavioral and psychological symptoms were used to model disease progression and predict costs. Using modeling, we evaluated lifetime and annual costs of individuals with dementia, compared costs of those with and without clinical features of dementia, and evaluated the effect of reducing functional decline or behavioral and psychological symptoms by 10% for 12 months (implemented when Mini‐Mental State Examination score ≤21).
Setting
Mathematical model.
Participants
Representative simulated U.S. incident dementia cases.
Measurements
Value of informal care, out‐of‐pocket expenditures, Medicaid expenditures, and Medicare expenditures.
Results
From time of diagnosis (mean age 83), the discounted total lifetime cost of care for a person with dementia was $321,780 (2015 dollars). Families incurred 70% of the total cost burden ($225,140), Medicaid accounted for 14% ($44,090), and Medicare accounted for 16% ($52,540). Lifetime costs for a person with dementia were $184,500 greater (86% incurred by families) than for someone without dementia. Total annual cost peaked at $89,000, and net cost peaked at $72,400. Reducing functional decline or behavioral and psychological symptoms by 10% resulted in lifetime costs that were $3,880 and $680 lower, respectively, than with natural disease progression.
Conclusion
Dementia substantially increases lifetime costs of care. Long‐lasting, effective interventions are needed to support families because they incur the most dementia cost.
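Lifetime costs in models like this are reported in present-value terms. A minimal sketch of the discounting arithmetic, using an illustrative annual cost stream and a 3% discount rate rather than the study's actual inputs:

```python
# Hypothetical annual care costs (dollars) over a short disease course;
# costs rise as dementia progresses, then fall with end-of-life attrition.
annual_costs = [40_000, 55_000, 72_400, 89_000, 60_000]
rate = 0.03  # annual discount rate (illustrative)

# Present value: each year's cost divided by (1 + rate)^t, year 0 undiscounted.
discounted = sum(c / (1 + rate) ** t for t, c in enumerate(annual_costs))
```

Discounting is why a cost incurred a decade after diagnosis contributes less to the lifetime total than the same dollar amount incurred in the first year.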
IMPORTANCE: The US Preventive Services Task Force (USPSTF) is updating its 2008 colorectal cancer (CRC) screening recommendations. OBJECTIVE: To inform the USPSTF by modeling the benefits, burden, and harms of CRC screening strategies; estimating the optimal ages to begin and end screening; and identifying a set of model-recommendable strategies that provide similar life-years gained (LYG) and a comparable balance between LYG and screening burden. DESIGN, SETTING, AND PARTICIPANTS: Comparative modeling with 3 microsimulation models of a hypothetical cohort of previously unscreened US 40-year-olds with no prior CRC diagnosis. EXPOSURES: Screening with sensitive guaiac-based fecal occult blood testing, fecal immunochemical testing (FIT), multitarget stool DNA testing, flexible sigmoidoscopy with or without stool testing, computed tomographic colonography (CTC), or colonoscopy starting at age 45, 50, or 55 years and ending at age 75, 80, or 85 years. Screening intervals varied by modality. Full adherence for all strategies was assumed. MAIN OUTCOMES AND MEASURES: Life-years gained compared with no screening (benefit), lifetime number of colonoscopies required (burden), lifetime number of colonoscopy complications (harms), and ratios of incremental burden and benefit (efficiency ratios) per 1000 40-year-olds. RESULTS: The screening strategies provided LYG in the range of 152 to 313 per 1000 40-year-olds. Lifetime colonoscopy burden per 1000 persons ranged from fewer than 900 (FIT every 3 years from ages 55-75 years) to more than 7500 (colonoscopy screening every 5 years from ages 45-85 years). Harm from screening was at most 23 complications per 1000 persons screened. Strategies with screening beginning at age 50 years generally provided more LYG as well as more additional LYG per additional colonoscopy than strategies with screening beginning at age 55 years. There were limited empirical data to support a start age of 45 years.
For persons adequately screened up to age 75 years, additional screening yielded small increases in LYG relative to the increase in colonoscopy burden. With screening from ages 50 to 75 years, 4 strategies yielded a comparable balance of screening burden and similar LYG (median LYG per 1000 across the models): colonoscopy every 10 years (270 LYG); sigmoidoscopy every 10 years with annual FIT (256 LYG); CTC every 5 years (248 LYG); and annual FIT (244 LYG). CONCLUSIONS AND RELEVANCE: In this microsimulation modeling study of a previously unscreened population undergoing CRC screening that assumed 100% adherence, the strategies of colonoscopy every 10 years, annual FIT, sigmoidoscopy every 10 years with annual FIT, and CTC every 5 years performed from ages 50 through 75 years provided similar LYG and a comparable balance of benefit and screening burden.
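The efficiency ratios used above compare successively more intensive strategies by the incremental colonoscopies required per incremental life-year gained. A sketch with invented burden and benefit numbers (not the models' outputs):

```python
# Each tuple: (strategy, lifetime colonoscopies per 1000, LYG per 1000).
# Values are illustrative only, ordered from least to most burdensome.
strategies = [
    ("FIT every 3 y, ages 55-75", 900, 200),
    ("FIT annual, ages 50-75", 1800, 244),
    ("Colonoscopy every 10 y, ages 50-75", 4000, 270),
]

efficiency = []
for (_, b0, g0), (name, b1, g1) in zip(strategies, strategies[1:]):
    # Incremental colonoscopies per incremental life-year gained versus
    # the next-least-burdensome strategy on the efficient frontier.
    efficiency.append((name, (b1 - b0) / (g1 - g0)))
```

A rising ratio along the frontier signals diminishing returns: each additional life-year costs progressively more colonoscopies, which is the trade-off the panel weighed when bounding screening intensity.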
Contralateral prophylactic mastectomy (CPM) rates have substantially increased in recent years and may reflect an exaggerated perceived benefit from the procedure. The objective of this study was to evaluate the magnitude of the survival benefit of CPM for women with unilateral breast cancer.
We developed a Markov model to simulate survival outcomes after CPM and no CPM among women with stage I or II breast cancer without a BRCA mutation. Probabilities of developing contralateral breast cancer (CBC), dying from CBC, and dying from primary breast cancer, together with age-specific mortality rates, were estimated from published studies. We estimated life expectancy (LE) gain, 20-year overall survival, and disease-free survival with each intervention strategy among cohorts of women defined by age, estrogen receptor (ER) status, and stage of cancer.
Predicted LE gain from CPM ranged from 0.13 to 0.59 years for women with stage I breast cancer and from 0.08 to 0.29 years for those with stage II breast cancer. Absolute 20-year survival differences ranged from 0.56% to 0.94% for women with stage I breast cancer and from 0.36% to 0.61% for women with stage II breast cancer. CPM was more beneficial among younger women and those with stage I or ER-negative breast cancer. Sensitivity analyses yielded a maximum 20-year survival difference with CPM of only 1.45%.
The absolute 20-year survival benefit from CPM was less than 1% among all age, ER status, and cancer stage groups. Estimates of LE gains and survival differences derived from decision models may provide more realistic expectations of CPM.
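The LE gain and 20-year survival difference in a model like this come from comparing the survival curves implied by each strategy. A sketch with hypothetical annual mortality rates, not the study's estimates:

```python
def survival_curve(annual_mortality, years=20):
    """Fraction of the cohort alive at the end of each year."""
    alive, curve = 1.0, []
    for _ in range(years):
        alive *= 1 - annual_mortality
        curve.append(alive)
    return curve

no_cpm = survival_curve(0.0200)    # hypothetical annual mortality without CPM
with_cpm = survival_curve(0.0195)  # slightly lower annual mortality with CPM

# Life-years gained = area between the survival curves (yearly approximation).
le_gain = sum(c - n for c, n in zip(with_cpm, no_cpm))
# Absolute 20-year survival difference, in percentage points.
surv_diff = (with_cpm[-1] - no_cpm[-1]) * 100
```

Even with these made-up inputs, a small mortality reduction yields an LE gain of a fraction of a year and a sub-1% absolute survival difference, which illustrates why the modeled benefit of CPM looks modest despite rising uptake.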
IMPORTANCE: Since publication of the report by the Panel on Cost-Effectiveness in Health and Medicine in 1996, researchers have advanced the methods of cost-effectiveness analysis, and policy makers have experimented with its application. The need to deliver health care efficiently and the importance of using analytic techniques to understand the clinical and economic consequences of strategies to improve health have increased in recent years. OBJECTIVE: To review the state of the field and provide recommendations to improve the quality of cost-effectiveness analyses. The intended audiences include researchers, government policy makers, public health officials, health care administrators, payers, businesses, clinicians, patients, and consumers. DESIGN: In 2012, the Second Panel on Cost-Effectiveness in Health and Medicine was formed and included 2 co-chairs, 13 members, and 3 additional members of a leadership group. These members were selected on the basis of their experience in the field to provide broad expertise in the design, conduct, and use of cost-effectiveness analyses. Over the next 3.5 years, the panel developed recommendations by consensus. These recommendations were then reviewed by invited external reviewers and through a public posting process. FINDINGS: The concept of a “reference case” and a set of standard methodological practices that all cost-effectiveness analyses should follow to improve quality and comparability are recommended. All cost-effectiveness analyses should report 2 reference case analyses: one based on a health care sector perspective and another based on a societal perspective. The use of an “impact inventory,” a structured table of consequences (both inside and outside the formal health care sector) that clarifies the scope and boundaries of the 2 reference case analyses, is also recommended.
This special communication reviews these recommendations and others concerning the estimation of the consequences of interventions, the valuation of health outcomes, and the reporting of cost-effectiveness analyses. CONCLUSIONS AND RELEVANCE: The Second Panel reviewed the current status of the field of cost-effectiveness analysis and developed a new set of recommendations. Major changes include the recommendation to perform analyses from 2 reference case perspectives and to provide an impact inventory to clarify included consequences.
In 2014, the Centers for Medicare and Medicaid Services (CMS) began covering a multitarget stool DNA (mtSDNA) test for colorectal cancer (CRC) screening of Medicare beneficiaries. In this study, we evaluated whether mtSDNA testing is a cost-effective alternative to other CRC screening strategies reimbursed by CMS, and if not, under what conditions it could be.
We used three independently developed microsimulation models to simulate a cohort of previously unscreened US 65-year-olds screened with triennial mtSDNA testing or with one of six other reimbursed screening strategies. Main outcome measures were discounted life-years gained (LYG) and lifetime costs (CMS perspective), threshold reimbursement rates, and threshold adherence rates. Outcomes are expressed as the median and range across models.
Compared with no screening, triennial mtSDNA screening resulted in 82 (range: 79-88) LYG per 1,000 simulated individuals. This was more than with five-yearly sigmoidoscopy (80 [range: 71-89] LYG) but fewer than with every other simulated strategy. At its 2017 reimbursement rate of $512, mtSDNA was the most costly strategy, and even if adherence were 30% higher than with other strategies, it would not be a cost-effective alternative. At a substantially reduced reimbursement rate ($6-$18), two models found that triennial mtSDNA testing was an efficient and potentially cost-effective screening option.
Compared with no screening, triennial mtSDNA screening reduces CRC incidence and mortality at acceptable costs. However, compared with nearly all other CRC screening strategies reimbursed by CMS, it is less effective and considerably more costly, making it an inefficient screening option.
Background
The U.S. Preventive Services Task Force requested a decision analysis to inform its update of recommendations for colorectal cancer screening.
Objective
To assess life-years gained and colonoscopy requirements for colorectal cancer screening strategies and identify a set of recommendable screening strategies.
Design
Decision analysis using 2 colorectal cancer microsimulation models from the Cancer Intervention and Surveillance Modeling Network.
Data Sources
Derived from the literature.
Target Population
U.S. average-risk 40-year-old population.
Perspective
Societal.
Time Horizon
Lifetime.
Intervention
Fecal occult blood tests (FOBTs), flexible sigmoidoscopy, or colonoscopy screening beginning at age 40, 50, or 60 years and stopping at age 75 or 85 years, with screening intervals of 1, 2, or 3 years for FOBT and 5, 10, or 20 years for sigmoidoscopy and colonoscopy.
Outcome Measures
Number of life-years gained compared with no screening and number of colonoscopies and noncolonoscopy tests required.
Results of Base-Case Analysis
Beginning screening at age 50 years was consistently better than at age 60. Decreasing the stop age from 85 to 75 years decreased life-years gained by 1% to 4%, whereas colonoscopy use decreased by 4% to 15%. Assuming equally high adherence, 4 strategies provided similar life-years gained: colonoscopy every 10 years; annual Hemoccult SENSA (Beckman Coulter, Fullerton, California) testing; annual fecal immunochemical testing; and sigmoidoscopy every 5 years with midinterval Hemoccult SENSA testing. Annual Hemoccult II and flexible sigmoidoscopy every 5 years alone were less effective.
Results of Sensitivity Analysis
The results were most sensitive to beginning screening at age 40 years.
Limitation
The stop age for screening was based only on chronologic age.
Conclusion
The findings support colorectal cancer screening with the following: colonoscopy every 10 years, annual screening with a sensitive FOBT, or flexible sigmoidoscopy every 5 years with a midinterval sensitive FOBT, from age 50 to 75 years.