IMPORTANCE: US health care spending has continued to increase and now accounts for 18% of the US economy, although little is known about how spending on each health condition varies by payer and how these amounts have changed over time. OBJECTIVE: To estimate US spending on health care according to 3 types of payers (public insurance, including Medicare, Medicaid, and other government programs; private insurance; or out-of-pocket payments) and by health condition, age group, sex, and type of care for 1996 through 2016. DESIGN AND SETTING: Government budgets, insurance claims, facility records, household surveys, and official US records from 1996 through 2016 were collected to estimate spending for 154 health conditions. Spending growth rates (standardized by population size and age group) were calculated for each type of payer and health condition. EXPOSURES: Ambulatory care, inpatient care, nursing care facility stay, emergency department care, dental care, and purchase of prescribed pharmaceuticals in a retail setting. MAIN OUTCOMES AND MEASURES: National spending estimates stratified by health condition, age group, sex, type of care, and type of payer and modeled for each year from 1996 through 2016. RESULTS: Total health care spending increased from an estimated $1.4 trillion in 1996 (13.3% of gross domestic product [GDP]; $5259 per person) to an estimated $3.1 trillion in 2016 (17.9% of GDP; $9655 per person); 85.2% of that spending was included in this study. In 2016, an estimated 48.0% (95% CI, 48.0%-48.0%) of health care spending was paid by private insurance, 42.6% (95% CI, 42.5%-42.6%) by public insurance, and 9.4% (95% CI, 9.4%-9.4%) by out-of-pocket payments.
In 2016, among the 154 conditions, low back and neck pain had the highest health care spending, with an estimated $134.5 billion (95% CI, $122.4-$146.9 billion), of which 57.2% (95% CI, 52.2%-61.2%) was paid by private insurance, 33.7% (95% CI, 30.0%-38.4%) by public insurance, and 9.2% (95% CI, 8.3%-10.4%) by out-of-pocket payments. Other musculoskeletal disorders accounted for the second highest amount of health care spending (estimated at $129.8 billion [95% CI, $116.3-$149.7 billion]), most of which was paid by private insurance (56.4% [95% CI, 52.6%-59.3%]). Diabetes accounted for the third highest amount of health care spending (estimated at $111.2 billion [95% CI, $105.7-$115.9 billion]), most of which was paid by public insurance (49.8% [95% CI, 44.4%-56.0%]). Other conditions estimated to have substantial health care spending in 2016 were ischemic heart disease ($89.3 billion [95% CI, $81.1-$95.5 billion]), falls ($87.4 billion [95% CI, $75.0-$100.1 billion]), urinary diseases ($86.0 billion [95% CI, $76.3-$95.9 billion]), skin and subcutaneous diseases ($85.0 billion [95% CI, $80.5-$90.2 billion]), osteoarthritis ($80.0 billion [95% CI, $72.2-$86.1 billion]), dementias ($79.2 billion [95% CI, $67.6-$90.8 billion]), and hypertension ($79.0 billion [95% CI, $72.6-$86.8 billion]). The conditions with the highest spending varied by type of payer, age, sex, type of care, and year. After adjusting for changes in inflation, population size, and age groups, public insurance spending was estimated to have increased at an annualized rate of 2.9% (95% CI, 2.9%-2.9%); private insurance, 2.6% (95% CI, 2.6%-2.6%); and out-of-pocket payments, 1.1% (95% CI, 1.0%-1.1%). CONCLUSIONS AND RELEVANCE: Estimates of US spending on health care showed substantial increases from 1996 through 2016, with the highest increases in population-adjusted spending by public insurance.
Although low back and neck pain, other musculoskeletal disorders, and diabetes accounted for the highest amounts of spending, the mix of payers and the annual spending growth rates varied considerably.
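The abstract's per-person figures imply a nominal annualized growth rate that can be checked directly. A minimal sketch (note that the study's reported 2.9%/2.6%/1.1% rates are additionally adjusted for inflation, population size, and age structure, so they are lower than this nominal figure):

```python
# Nominal annualized growth of per-person US health spending, 1996-2016,
# from the abstract's figures ($5,259 -> $9,655 over 20 years).

def annualized_rate(start, end, years):
    """Compound annual growth rate: (end/start)^(1/years) - 1."""
    return (end / start) ** (1.0 / years) - 1.0

rate = annualized_rate(5259, 9655, 2016 - 1996)
print(f"{rate:.1%}")  # about 3.1% per year, nominal
```

This nominal 3.1% exceeds the study's adjusted payer-specific rates, which is expected once inflation and demographic change are removed.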
Chaste (Cancer, Heart and Soft Tissue Environment) is an open source C++ library for the computational simulation of mathematical models developed for physiology and biology. Code development has been driven by two initial applications: cardiac electrophysiology and cancer development. A large number of cardiac electrophysiology studies have been enabled and performed, including high-performance computational investigations of defibrillation on realistic human cardiac geometries. New models for the initiation and growth of tumours have been developed. In particular, cell-based simulations have provided novel insight into the role of stem cells in the colorectal crypt. Chaste is constantly evolving and is now being applied to a far wider range of problems. The code provides modules for handling common scientific computing components, such as meshes and solvers for ordinary and partial differential equations (ODEs/PDEs). Re-use of these components avoids the need for researchers to 're-invent the wheel' with each new project, accelerating the rate of progress in new applications. Chaste is developed using industrially-derived techniques, in particular test-driven development, to ensure code quality, re-use and reliability. In this article we provide examples that illustrate the types of problems Chaste can be used to solve, which can be run on a desktop computer. We highlight some scientific studies that have used or are using Chaste, and the insights they have provided. The source code, both for specific releases and the development version, is available to download under an open source Berkeley Software Distribution (BSD) licence at http://www.cs.ox.ac.uk/chaste, together with details of a mailing list and links to documentation and tutorials.
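Chaste itself is C++ and its API is not reproduced here. As a rough, language-agnostic illustration of the kind of ODE cell model its ODE-solver module handles, the classic FitzHugh-Nagumo excitable-cell model can be integrated with SciPy (parameter values are the standard textbook ones, not Chaste defaults):

```python
# Illustrative only: FitzHugh-Nagumo, a reduced model of cardiac/neural
# excitability, integrated with SciPy rather than Chaste's own solvers.
from scipy.integrate import solve_ivp

def fitzhugh_nagumo(t, y, a=0.7, b=0.8, tau=12.5, I=0.5):
    v, w = y  # membrane potential and recovery variable
    dv = v - v**3 / 3 - w + I
    dw = (v + a - b * w) / tau
    return [dv, dw]

sol = solve_ivp(fitzhugh_nagumo, (0.0, 200.0), [-1.0, 1.0], max_step=0.5)
print(sol.y[0].min(), sol.y[0].max())  # sustained action-potential-like cycling
```

With this stimulus current the model sits in its oscillatory regime, producing the repeated depolarization/recovery cycles that whole-heart electrophysiology codes solve at every mesh node.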
Frailty has emerged as a powerful predictor of outcomes in patients with cirrhosis and has inevitably made its way into decision making within liver transplantation. In an effort to harmonize integration of the concept of frailty among transplant centers, the AST and ASTS supported the efforts of our working group to develop this statement from experts in the field. Frailty is a multidimensional construct that represents the end‐manifestation of derangements of multiple physiologic systems leading to decreased physiologic reserve and increased vulnerability to health stressors. In hepatology/liver transplantation, investigation of frailty has largely focused on physical frailty, which subsumes the concepts of functional performance, functional capacity, and disability. There was consensus that every liver transplant candidate should be assessed at baseline and longitudinally using a standardized frailty tool, which should guide the intensity and type of nutritional and physical therapy in individual liver transplant candidates. The working group agreed that frailty should not be used as the sole criterion for delisting a patient for liver transplantation, but rather should be considered one of many criteria when evaluating transplant candidacy and suitability. A road map to advance frailty in the clinical and research settings of liver transplantation is presented here.
This summary statement about frailty in liver transplantation addresses how to define and measure frailty, and how to incorporate frailty into the care of patients with end‐stage liver disease, including those awaiting liver transplantation.
Large and severe wildfires are an observable consequence of an increasingly arid American West. There is increasing consensus that human communities, land managers, and fire managers need to adapt and learn to live with wildfires. However, a myriad of human and ecological factors constrain adaptation, and existing science-based management strategies are not sufficient to address fire as both a problem and a solution. To that end, we present a novel risk-science approach that aligns wildfire response decisions, mitigation opportunities, and land management objectives by consciously integrating social, ecological, and fire management system needs. We use fire-prone landscapes of the US Pacific Northwest as our study area, and describe how three complementary risk-based analytic tools (quantitative wildfire risk assessment, mapping of suppression difficulty, and atlases of potential control locations) can form the foundation for adaptive governance in fire management. Together, these tools integrate wildfire risk with fire management difficulties and opportunities, providing a more complete picture of the wildfire risk management challenge. Leveraging recent and ongoing experience integrating local experiential knowledge with these tools, we provide examples and discuss how these geospatial datasets create a risk-based planning structure that spans multiple spatial scales and uses. These uses include pre-planning strategic wildfire response, implementing safe wildfire response that balances risk with likelihood of success, and aligning non-wildfire mitigation opportunities to support wildfire risk management more directly. We explicitly focus on multi-jurisdictional landscapes to demonstrate how these tools highlight the shared responsibility of wildfire risk mitigation.
By integrating quantitative risk science, expert judgement, and adaptive co-management, this process provides a much-needed pathway to transform fire-prone social-ecological systems to be more responsive and adaptable to change and to live with fire in an increasingly arid American West.
Many studies have examined how fuels, topography, climate, and fire weather influence fire severity. Less is known about how different forest management practices influence fire severity in multi-owner landscapes, despite costly and controversial suppression of wildfires that do not acknowledge ownership boundaries. In 2013, the Douglas Complex burned over 19,000 ha of Oregon & California Railroad (O&C) lands in Southwestern Oregon, USA. O&C lands are composed of a checkerboard of private industrial and federal forestland (Bureau of Land Management, BLM) with contrasting management objectives, providing a unique experimental landscape for understanding how different management practices influence wildfire severity. Leveraging Landsat-based estimates of fire severity (Relative differenced Normalized Burn Ratio, RdNBR) and geospatial data on fire progression, weather, topography, pre-fire forest conditions, and land ownership, we asked (1) what is the relative importance of different variables driving fire severity, and (2) is intensive plantation forestry associated with higher fire severity? Using Random Forest ensemble machine learning, we found daily fire weather was the most important predictor of fire severity, followed by stand age and ownership, followed by topographic features. Estimates of pre-fire forest biomass were not an important predictor of fire severity. Adjusting for all other predictor variables in a generalized least squares model incorporating spatial autocorrelation, mean predicted RdNBR was higher on private industrial forests (521.85 ± 18.67, mean ± SE) than on BLM forests (398.87 ± 18.23), which have a much greater proportion of older forests. Our findings suggest that intensive plantation forestry, characterized by young forests and spatially homogenized fuels, rather than pre-fire biomass, was a significant driver of wildfire severity.
This has implications for perceptions of wildfire risk, shared fire management responsibilities, and developing fire resilience for multiple objectives in multi-owner landscapes.
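The Random Forest step described above can be sketched in miniature. This is not the study's data or code: the predictor names follow the abstract, but the data are synthetic and the effect sizes invented, with fire weather deliberately made the dominant driver so the importance ranking mirrors the reported result.

```python
# Illustrative sketch: rank predictors of fire severity (RdNBR-like response)
# with a Random Forest, as in the Douglas Complex analysis. Synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000
weather = rng.normal(size=n)        # daily fire-weather index
stand_age = rng.uniform(5, 200, n)  # years
ownership = rng.integers(0, 2, n)   # 0 = BLM, 1 = private industrial
slope = rng.uniform(0, 40, n)       # degrees

# Invented severity model: weather dominates, young stands burn hotter.
rdnbr = (300 + 120 * weather - 0.8 * stand_age + 60 * ownership
         + 1.0 * slope + rng.normal(0, 40, n))

X = np.column_stack([weather, stand_age, ownership, slope])
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, rdnbr)
for name, imp in zip(["weather", "stand_age", "ownership", "slope"],
                     rf.feature_importances_):
    print(f"{name:10s} {imp:.2f}")
```

Permutation importance or partial-dependence plots would be the usual next step for interpreting such a fitted ensemble; impurity-based importances, as printed here, are the quickest first look.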
As contemporary wildfire activity intensifies across the western United States, there is increasing recognition that a variety of forest management activities are necessary to restore ecosystem function and reduce wildfire hazard in dry forests. However, the pace and scale of current, active forest management is insufficient to address restoration needs. Managed wildfire and landscape-scale prescribed burns hold potential to achieve broad-scale goals but may not achieve desired outcomes where fire severity is too high or too low. To explore the potential for fire alone to restore dry forests, we developed a novel method to predict the range of fire severities most likely to restore historical forest basal area, density, and species composition in forests across eastern Oregon. First, we developed probabilistic tree mortality models for 24 species based on tree characteristics and remotely sensed fire severity from burned field plots. We applied these estimates to unburned stands in four national forests to predict post-fire conditions using multi-scale modeling in a Monte Carlo framework. We compared these results to historical reconstructions to identify fire severities with the highest restoration potential. Generally, we found basal area and density targets could be achieved by a relatively narrow range of moderate-severity fire (roughly 365-560 RdNBR). However, single fire events did not restore species composition in forests that were historically maintained by frequent, low-severity fire. Restorative fire severity ranges for stand basal area and density were strikingly similar for ponderosa pine (Pinus ponderosa) and dry mixed-conifer forests across a broad geographic range, in part due to relatively high fire tolerance of large grand fir (Abies grandis) and white fir (Abies concolor).
Our results suggest that historical forest conditions created by recurrent fire are not readily restored by single fires, and that landscapes have likely passed thresholds that preclude the effectiveness of managed wildfire alone as a restoration tool.
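The Monte Carlo framework described above can be sketched as follows. This is a hedged toy version, not the paper's fitted models: the logistic mortality coefficients, diameter distribution, and severity values are all invented; only the overall scheme (severity-dependent per-tree mortality, Bernoulli survival draws, averaged post-fire basal area) follows the abstract.

```python
# Toy Monte Carlo of post-fire stand structure: apply a severity-dependent
# mortality probability to each tree and average basal area across draws.
import numpy as np

def mortality_prob(dbh_cm, rdnbr):
    """Invented logistic model: small trees and hotter fire die more often."""
    logit = -2.0 + 0.004 * rdnbr - 0.03 * dbh_cm
    return 1.0 / (1.0 + np.exp(-logit))

rng = np.random.default_rng(42)
dbh = rng.gamma(shape=3.0, scale=12.0, size=300)  # tree diameters, cm
basal_area = np.pi * (dbh / 200.0) ** 2           # m^2 per tree

def simulate_postfire_ba(rdnbr, draws=1000):
    p = mortality_prob(dbh, rdnbr)
    survived = rng.random((draws, dbh.size)) > p  # Bernoulli survival draws
    return (survived * basal_area).sum(axis=1).mean()

for sev in (200, 450, 700):  # low, moderate, and high RdNBR
    print(sev, round(simulate_postfire_ba(sev), 2))
```

Comparing the simulated post-fire basal area against a historical target across a grid of severities is what yields a "restorative severity range" like the 365-560 RdNBR window reported above.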
Purpose Venous thromboembolism (VTE) is common in patients with cancer. Long-term daily subcutaneous low molecular weight heparin has been standard treatment for such patients. The purpose of this study was to assess if an oral factor Xa inhibitor, rivaroxaban, would offer an alternative treatment for VTE in patients with cancer. Patients and Methods In this multicenter, randomized, open-label, pilot trial in the United Kingdom, patients with active cancer who had symptomatic pulmonary embolism (PE), incidental PE, or symptomatic lower-extremity proximal deep vein thrombosis (DVT) were recruited. Allocation was to dalteparin (200 IU/kg daily during month 1, then 150 IU/kg daily for months 2-6) or rivaroxaban (15 mg twice daily for 3 weeks, then 20 mg once daily for a total of 6 months). The primary outcome was VTE recurrence over 6 months. Safety was assessed by major bleeding and clinically relevant nonmajor bleeding (CRNMB). A sample size of 400 patients would provide estimates of VTE recurrence to within ± 4.5%, assuming a VTE recurrence rate at 6 months of 10%. Results A total of 203 patients were randomly assigned to each group, 58% of whom had metastases. Twenty-six patients experienced recurrent VTE (dalteparin, n = 18; rivaroxaban, n = 8). The 6-month cumulative VTE recurrence rate was 11% (95% CI, 7% to 16%) with dalteparin and 4% (95% CI, 2% to 9%) with rivaroxaban (hazard ratio [HR], 0.43; 95% CI, 0.19 to 0.99). The 6-month cumulative rate of major bleeding was 4% (95% CI, 2% to 8%) for dalteparin and 6% (95% CI, 3% to 11%) for rivaroxaban (HR, 1.83; 95% CI, 0.68 to 4.96). Corresponding rates of CRNMB were 4% (95% CI, 2% to 9%) and 13% (95% CI, 9% to 19%), respectively (HR, 3.76; 95% CI, 1.63 to 8.69). Conclusion Rivaroxaban was associated with relatively low VTE recurrence but higher CRNMB compared with dalteparin.
This publication describes uniform definitions for cardiovascular and stroke outcomes developed by the Standardized Data Collection for Cardiovascular Trials Initiative and the US Food and Drug Administration (FDA). The FDA established the Standardized Data Collection for Cardiovascular Trials Initiative in 2009 to simplify the design and conduct of clinical trials intended to support marketing applications. The writing committee recognizes that these definitions may be used in other types of clinical trials and clinical care processes where appropriate. Use of these definitions at the FDA has enhanced the ability to aggregate data within and across medical product development programs, conduct meta-analyses to evaluate cardiovascular safety, integrate data from multiple trials, and compare effectiveness of drugs and devices. Further study is needed to determine whether prospective data collection using these common definitions improves the design, conduct, and interpretability of the results of clinical trials.
Machine learning for metabolic engineering: A review
Lawson, Christopher E.; Martí, Jose Manuel; Radivojevic, Tijana; et al.
Metabolic Engineering, Volume 63, January 2021
Journal Article, Peer-reviewed, Open access
Machine learning provides researchers a unique opportunity to make metabolic engineering more predictable. In this review, we offer an introduction to this discipline in terms that are relatable to metabolic engineers, as well as providing in-depth illustrative examples leveraging omics data and improving production. We also include practical advice for the practitioner in terms of data management, algorithm libraries, computational resources, and important non-technical issues. A variety of applications ranging from pathway construction and optimization, to genetic editing optimization, cell factory testing, and production scale-up are discussed. Moreover, the promising relationship between machine learning and mechanistic models is thoroughly reviewed. Finally, the future perspectives and most promising directions for this combination of disciplines are examined.
• In this review, we offer an introduction to machine learning in terms that are relatable to metabolic engineers.
• We include practical advice for the practitioner in terms of data management, algorithm libraries, and computational resources.
• A variety of applications ranging from pathway construction and optimization, to scale-up are discussed.
• Finally, the future perspectives and most promising directions for this combination of disciplines are examined.
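The "leveraging omics data to improve production" task the review surveys has a common minimal form: regress a production phenotype on omics measurements. A hedged sketch with entirely synthetic data (the "proteomics" features and titer relationship are invented; no real strains or pathways are represented):

```python
# Illustrative only: learn production titer from synthetic proteomics data,
# the kind of supervised task reviewed for metabolic engineering.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_strains, n_proteins = 200, 30
proteomics = rng.lognormal(mean=0.0, sigma=1.0, size=(n_strains, n_proteins))

# Pretend titer depends on two pathway enzymes, with diminishing returns.
titer = (np.log1p(proteomics[:, 0]) + 0.5 * np.log1p(proteomics[:, 1])
         + rng.normal(0, 0.1, n_strains))

model = GradientBoostingRegressor(random_state=0)
r2 = cross_val_score(model, proteomics, titer, cv=5, scoring="r2").mean()
print(f"cross-validated R^2: {r2:.2f}")
```

Cross-validated skill on held-out strains, rather than training fit, is the quantity that determines whether such a model can usefully prioritize the next round of strain designs.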
Machine learning has emerged as a novel tool for the efficient prediction of material properties, and claims have been made that machine-learned models for the formation energy of compounds can approach the accuracy of Density Functional Theory (DFT). The models tested in this work include five recently published compositional models, a baseline model using stoichiometry alone, and a structural model. By testing seven machine learning models for formation energy on stability predictions using the Materials Project database of DFT calculations for 85,014 unique chemical compositions, we show that while formation energies can indeed be predicted well, all compositional models perform poorly on predicting the stability of compounds, making them considerably less useful than DFT for the discovery and design of new solids. Most critically, in sparse chemical spaces where few stoichiometries have stable compounds, only the structural model is capable of efficiently detecting which materials are stable. The nonincremental improvement of structural models compared with compositional models is noteworthy and encourages the use of structural models for materials discovery, with the constraint that for any new composition, the ground-state structure is not known a priori. This work demonstrates that accurate predictions of formation energy do not imply accurate predictions of stability, emphasizing the importance of assessing model performance on stability predictions, for which we provide a set of publicly available tests.
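The distinction the abstract draws (formation energy versus stability) comes down to the convex hull: a compound is stable only if it lies on the lower convex hull of formation energy versus composition, so a small energy error can flip a stability verdict. A toy binary-system sketch with invented energies:

```python
# Sketch of why formation energy != stability: stability is judged relative
# to the lower convex hull of competing phases. Toy A-B system; energies
# in eV/atom are invented for illustration.
import numpy as np

# (fraction of B, formation energy) for candidate compounds and end members
points = {"A": (0.0, 0.0), "A3B": (0.25, -0.20), "AB": (0.5, -0.45),
          "AB3": (0.75, -0.25), "B": (1.0, 0.0)}

def energy_above_hull(x, e, hull_pts):
    """Height of (x, e) above the piecewise-linear hull through hull_pts."""
    xs, es = zip(*sorted(hull_pts))
    return e - np.interp(x, xs, es)

# Suppose the true hull is A--AB--B; test whether A3B is stable.
hull = [points["A"], points["AB"], points["B"]]
x, e = points["A3B"]
print(round(energy_above_hull(x, e, hull), 3))  # → 0.025, i.e. unstable
```

A3B has a strongly negative formation energy (-0.20 eV/atom) yet sits 25 meV/atom above the hull, so decomposition into A and AB is favored. This is exactly the regime where a model accurate in formation energy can still misclassify stability.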