Immune checkpoint inhibitors (ICI) targeting CTLA-4 and the PD-1/PD-L1 axis have shown unprecedented clinical activity in several types of cancer and are rapidly transforming the practice of medical oncology. Whereas cytotoxic chemotherapy and small-molecule inhibitors ('targeted therapies') largely act on cancer cells directly, immune checkpoint inhibitors reinvigorate anti-tumour immune responses by disrupting co-inhibitory T-cell signalling. While resistance routinely develops in patients treated with conventional cancer therapies and targeted therapies, durable responses suggestive of long-lasting immunologic memory are commonly seen in large subsets of patients treated with ICI. However, initial response appears to be a binary event, with most non-responders to single-agent ICI therapy progressing at a rate consistent with the natural history of the disease. In addition, late relapses are now emerging with longer follow-up of clinical trial populations, suggesting the emergence of acquired resistance. As robust biomarkers to predict clinical response and/or resistance remain elusive, the mechanisms underlying innate (primary) and acquired (secondary) resistance are largely inferred from pre-clinical studies and correlative clinical data. Improved understanding of the molecular and immunologic mechanisms of ICI response (and resistance) may not only identify novel predictive and/or prognostic biomarkers, but ultimately also guide optimal combination and sequencing of ICI therapy in the clinic. Here we review the emerging clinical and pre-clinical data identifying novel mechanisms of innate and acquired resistance to immune checkpoint inhibition.
Maps that categorise the landscape into discrete units are a cornerstone of many scientific, management and conservation activities. The accuracy of these maps is often the primary piece of information used to make decisions about the mapping process or judge the quality of the final map. Variance is critical information when considering map accuracy, yet commonly reported accuracy metrics often do not provide it. Various resampling frameworks have been proposed and shown to address this issue, but have had limited uptake. In this paper, we compare the traditional approach of a single split of data into a training set (for classification) and test set (for accuracy assessment) with a resampling framework in which the classification and accuracy assessment are repeated many times. Using a relatively simple vegetation mapping example and two common classifiers (maximum likelihood and random forest), we compare variance in mapped area estimates and accuracy assessment metrics (overall accuracy, kappa, user's and producer's accuracy, entropy, purity, and quantity/allocation disagreement). Input field data points were repeatedly split into training and test sets via bootstrapping, Monte Carlo cross-validation (67:33 and 80:20 split ratios) and k-fold (5-fold) cross-validation. Additionally, within the cross-validation, we tested four designs: simple random, block hold-out, stratification by class, and stratification by both class and space. A classification was performed for every split of every methodological combination (hundreds of iterations each), creating sampling distributions for the mapped area of each class and for the accuracy metrics. We found that regardless of resampling design, a single split of data into training and test sets results in a large variance in estimates of accuracy and mapped area. In the worst case, overall accuracy varied between ~40% and ~80% in one resampling design, due only to random variation in partitioning into training and test sets.
On the other hand, we found that all resampling procedures provided accurate estimates of error, and that they can also provide confidence intervals that are informative about the performance and uncertainty of the classifier. Importantly, we show that these confidence intervals commonly encompassed the magnitudes of increase or decrease in accuracy that are often cited in literature as justification for methodological or sampling design choices. We also show how a resampling approach enables generation of spatially continuous maps of classification uncertainty. Based on our results, we make recommendations about which resampling design to use and how it could be implemented. We also provide a fully worked mapping example, which includes traditional inference of uncertainty from the error matrix and provides examples for presenting the final map and its accuracy.
•Resampling designs are compared for image classification and accuracy assessment.
•Resampling provides robust accuracy and area estimates with confidence intervals.
•A single split into training/test data often gives inaccurate or misleading results.
•Recommendations, examples and code are given for implementing resampling.
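The core of the resampling argument above can be sketched in a few lines of stdlib Python. The data, the nearest-centroid classifier, and all parameter values below are hypothetical stand-ins for the paper's imagery, field data and classifiers; the point is only to show how repeating the train/test split yields a sampling distribution (and a percentile confidence interval) for overall accuracy rather than a single, high-variance number.

```python
import random
import statistics

def nearest_centroid_predict(train, test):
    """Classify 1-D test points by the nearest class centroid from the training set."""
    groups = {}
    for x, label in train:
        groups.setdefault(label, []).append(x)
    centroids = {label: statistics.mean(xs) for label, xs in groups.items()}
    return [min(centroids, key=lambda c: abs(x - centroids[c])) for x, _ in test]

def monte_carlo_cv(data, n_splits=200, train_frac=0.67, seed=0):
    """Repeat the split/classify/assess cycle and summarise overall accuracy."""
    rng = random.Random(seed)
    accuracies = []
    for _ in range(n_splits):
        shuffled = data[:]
        rng.shuffle(shuffled)
        cut = int(len(shuffled) * train_frac)
        train, test = shuffled[:cut], shuffled[cut:]
        preds = nearest_centroid_predict(train, test)
        acc = sum(p == label for p, (_, label) in zip(preds, test)) / len(test)
        accuracies.append(acc)
    accuracies.sort()
    lo = accuracies[int(0.025 * n_splits)]          # 2.5th percentile
    hi = accuracies[int(0.975 * n_splits) - 1]      # 97.5th percentile
    return statistics.mean(accuracies), (lo, hi)

# Hypothetical field data: two vegetation classes overlapping along one gradient
gen = random.Random(42)
data = ([(gen.gauss(0.0, 1.0), "heath") for _ in range(60)]
        + [(gen.gauss(1.5, 1.0), "woodland") for _ in range(60)])
mean_acc, (ci_lo, ci_hi) = monte_carlo_cv(data)
```

With these data, `ci_lo` and `ci_hi` bracket the between-split variability in accuracy that any single split hides.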
Abstract Background Lowering the diagnostic threshold for troponin is controversial because it may disproportionately increase the diagnosis of myocardial infarction in patients without acute coronary syndrome. We assessed the impact of lowering the diagnostic threshold of troponin on the incidence, management, and outcome of patients with type 2 myocardial infarction or myocardial injury. Methods Consecutive patients with elevated plasma troponin I concentrations (≥50 ng/L; n = 2929) were classified with type 1 (50%) myocardial infarction, type 2 myocardial infarction or myocardial injury (48%), and type 3 to 5 myocardial infarction (2%) before and after lowering the diagnostic threshold from 200 to 50 ng/L with a sensitive assay. Event-free survival from death and recurrent myocardial infarction was recorded at 1 year. Results Lowering the threshold increased the diagnosis of type 2 myocardial infarction or myocardial injury more than type 1 myocardial infarction (672 vs 257 additional patients, P < .001). Patients with myocardial injury or type 2 myocardial infarction were at higher risk of death compared with those with type 1 myocardial infarction (37% vs 16%; relative risk [RR], 2.31; 95% confidence interval [CI], 1.98-2.69) but had fewer recurrent myocardial infarctions (4% vs 12%; RR, 0.35; 95% CI, 0.26-0.49). In patients with troponin concentrations of 50 to 199 ng/L, lowering the diagnostic threshold was associated with increased healthcare resource use (P < .05) that reduced recurrent myocardial infarction and death for patients with type 1 myocardial infarction (31% vs 20%; RR, 0.64; 95% CI, 0.41-0.99), but not type 2 myocardial infarction or myocardial injury (36% vs 33%; RR, 0.93; 95% CI, 0.75-1.15). Conclusions After implementation of a sensitive troponin assay, the incidence of type 2 myocardial infarction or myocardial injury disproportionately increased and is now as frequent as type 1 myocardial infarction.
Outcomes of patients with type 2 myocardial infarction or myocardial injury are poor and do not seem to be modifiable after reclassification despite substantial increases in healthcare resource use.
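The relative risks and confidence intervals quoted above are the standard two-group relative risk with a log-normal (Katz) interval, which can be computed as follows. The counts below are hypothetical, chosen only to mirror the reported 37% vs 16% mortality proportions; they are not the study's actual data.

```python
import math

def relative_risk(events_a, n_a, events_b, n_b, z=1.96):
    """Relative risk of group A vs group B with a log-normal (Katz) 95% CI."""
    rr = (events_a / n_a) / (events_b / n_b)
    # Standard error of log(RR) for two independent binomial proportions
    se = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical counts: 37/100 deaths vs 16/100 deaths
rr, lo, hi = relative_risk(37, 100, 16, 100)
```

With larger (real) group sizes the interval tightens, which is why the study's CI around RR 2.31 is much narrower than this toy example's.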
Pressure is one of the key variables that control magmatic phase equilibria. However, estimating magma storage pressures from erupted products can be challenging. Various barometers have been developed over the past two decades that exploit the pressure-sensitive incorporation of jadeite (Jd) into clinopyroxene. These Jd-in-clinopyroxene barometers have been applied to rift zone magmas from Iceland, where published estimates of magma storage depths span the full thickness of the crust and extend into the mantle. However, tests performed on commonly used clinopyroxene-liquid barometers with data from experiments on H2O-poor tholeiites in the 1 atm to 10 kbar range reveal substantial pressure-dependent inaccuracies, with some models overestimating pressures of experimental products equilibrated at 1 atm by up to 3 kbar. The pressures of closed-capsule experiments in the 1-5 kbar range are also overestimated, and such errors cannot be attributed to Na loss, as is the case in open furnace experiments. The following barometer was calibrated from experimental data in the 1 atm to 20 kbar range to improve the accuracy of Jd-in-clinopyroxene barometry at pressures relevant to magma storage in the crust: P(kbar) = −26.27 + 39.16·[T(K)/10^4]·ln[X_Jd^cpx / (X_NaO0.5^liq · X_AlO1.5^liq · (X_SiO2^liq)^2)] − 4.22·ln(X_DiHd^cpx) + 78.43·X_AlO1.5^liq + 393.81·(X_NaO0.5^liq · X_KO0.5^liq)^2. This new barometer accurately reproduces its calibration data with a standard error of estimate (SEE) of ±1.4 kbar, and is suitable for use on hydrous and anhydrous samples that are ultramafic to intermediate in composition, but should be used with caution below 1100 °C and at oxygen fugacities greater than one log unit above the QFM buffer. Tests performed with data from experiments on H2O-poor tholeiites reveal that 1 atm runs were overestimated by less than the model precision (1.2 kbar); the new calibration is significantly more accurate than previous formulations.
Many current estimates of magma storage pressures may therefore need to be reassessed. To this end, the new barometer was applied to numerous published clinopyroxene analyses from Icelandic rift zone tholeiites that were filtered to exclude compositions affected by poor analytical precision or collected from disequilibrium sector zones. Pressures and temperatures were then calculated using the new barometer in concert with Equation 33 from Putirka (2008). Putative equilibrium liquids were selected from a large database of Icelandic glass and whole-rock compositions using an iterative scheme because most clinopyroxene analyses were too primitive to be in equilibrium with their host glasses. High-Mg# clinopyroxenes from the highly primitive Borgarhraun eruption in north Iceland record a mean storage pressure in the lower crust (5.7 kbar). All other eruptions considered record mean pressures in the mid-crust, with primitive clinopyroxene populations recording slightly higher pressures (3.1-3.6 kbar) than evolved populations (2.6-2.8 kbar). Thus, while some magma processing takes place in the shallow crust immediately beneath Iceland's central volcanoes, magma evolution under the island's neovolcanic rift zones is dominated by mid-crustal processes.
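For illustration, the barometer equation can be transcribed directly into a short function. This is a sketch based solely on the expression as printed in the abstract: the component and mole-fraction conventions (e.g., a single-cation basis for liquid components) and all input values below are assumptions, so it should not be treated as a drop-in replacement for the authors' calibration or software.

```python
import math

def jd_in_cpx_pressure_kbar(T_K, X_Jd_cpx, X_DiHd_cpx,
                            X_NaO05_liq, X_KO05_liq,
                            X_AlO15_liq, X_SiO2_liq):
    """Pressure (kbar) from the Jd-in-clinopyroxene barometer as printed in the
    abstract. Inputs are temperature in kelvin and mole fractions of the named
    clinopyroxene (cpx) and liquid (liq) components (conventions assumed)."""
    jd_term = 39.16 * (T_K / 1.0e4) * math.log(
        X_Jd_cpx / (X_NaO05_liq * X_AlO15_liq * X_SiO2_liq ** 2))
    return (-26.27
            + jd_term
            - 4.22 * math.log(X_DiHd_cpx)
            + 78.43 * X_AlO15_liq
            + 393.81 * (X_NaO05_liq * X_KO05_liq) ** 2)

# Hypothetical tholeiitic inputs, chosen only to exercise the formula
p_kbar = jd_in_cpx_pressure_kbar(T_K=1500.0, X_Jd_cpx=0.02, X_DiHd_cpx=0.85,
                                 X_NaO05_liq=0.04, X_KO05_liq=0.005,
                                 X_AlO15_liq=0.15, X_SiO2_liq=0.5)
```

Note that the computed pressure is only meaningful within the stated calibration limits (ultramafic to intermediate compositions, above ~1100 °C, and with the quoted SEE of ±1.4 kbar).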
An extensive high-severity fire was a disaster for this swamp wallaby, but not for its population, as many others detected the approaching fire and evaded its lethal heat.
Summary
Presence‐only records may provide data on the distributions of rare species, but commonly suffer from large, unknown biases due to their typically haphazard collection schemes. Presence–absence or count data collected in systematic, planned surveys are more reliable but typically less abundant.
We propose a probabilistic model to allow for joint analysis of presence‐only and survey data to exploit their complementary strengths. Our method pools presence‐only and presence–absence data for many species and maximizes a joint likelihood, simultaneously estimating and adjusting for the sampling bias affecting the presence‐only data. By assuming that the sampling bias is the same for all species, we can borrow strength across species to efficiently estimate the bias and improve our inference from presence‐only data.
We evaluate our model's performance on data for 36 eucalypt species in south‐eastern Australia. We find that presence‐only records exhibit a strong sampling bias towards the coast and towards Sydney, the largest city. Our data‐pooling technique substantially improves the out‐of‐sample predictive performance of our model when the amount of available presence–absence data for a given species is scarce.
If we have only presence‐only data and no presence–absence data for a given species, but both types of data for several other species that suffer from the same spatial sampling bias, then our method can obtain an unbiased estimate of the first species' geographic range.
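The pooling idea can be illustrated with a minimal, noiseless sketch: if presence-only counts are (in expectation) intensity times a shared sampling bias, and planned surveys give unbiased intensity estimates for some species, then the bias can be estimated from the surveyed species and divided out for a species that has presence-only data alone. The grid, bias surface and species intensities below are entirely hypothetical, and the actual method maximises a joint likelihood rather than taking these deterministic ratios.

```python
import math

n_cells = 40
# Shared sampling bias: presence-only effort decays away from "the coast" (cell 0).
# In practice this is unknown and must be estimated.
bias = [math.exp(-i / 12) for i in range(n_cells)]

# Hypothetical true intensities for three species across the grid
lam = {
    "A": [1.0 + 0.05 * i for i in range(n_cells)],
    "B": [2.0 - 0.03 * i for i in range(n_cells)],
    "C": [math.exp(-((i - 20) ** 2) / 80) for i in range(n_cells)],
}

# Expected presence-only counts are intensity times the shared bias (noiseless here)
po = {s: [l * b for l, b in zip(lam[s], bias)] for s in lam}

# Planned surveys yield unbiased intensity estimates, but only for species A and B
surveyed = {"A": lam["A"], "B": lam["B"]}

# Pool across the surveyed species to estimate the shared bias (up to a constant)
bias_hat = [sum(po[s][i] / surveyed[s][i] for s in surveyed) / len(surveyed)
            for i in range(n_cells)]

# Correct species C's presence-only data using the pooled bias estimate
lam_c_hat = [po["C"][i] / bias_hat[i] for i in range(n_cells)]
```

In this noiseless setting the recovered intensity for species C matches the truth exactly; with real, noisy data the same borrowing of strength happens inside the joint likelihood instead.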
Significance Recruitment, proliferation, and differentiation of myofibroblasts are common in many disease states. Mechanisms that regulate proliferation and differentiation are poorly understood, although TGF-β is a key inducer of differentiation. Here, we report, for the first time to our knowledge, that runt-related transcription factor 1 (RUNX1) regulates mesenchymal stem cell (MSC) biology and progenitor cell commitment to myofibroblasts. In this work, we describe the first identification, to our knowledge, of tissue-resident MSCs from the adult normal human prostate gland and the role of these MSCs as myofibroblast precursors. We also pinpoint the role of RUNX1 in regulating proliferation and differentiation in both marrow-derived and tissue-resident MSCs. Perturbation of RUNX1 activity may provide insights for developing antifibrotic and anticancer therapies via targeting the reactive stroma microenvironment.
Myofibroblasts are a key cell type in wound repair, cardiovascular disease, and fibrosis and in the tumor-promoting microenvironment. The high accumulation of myofibroblasts in reactive stroma is predictive of the rate of cancer progression in many different tumors, yet the cell types of origin and the mechanisms that regulate proliferation and differentiation are unknown. We report here, for the first time to our knowledge, the characterization of normal human prostate-derived mesenchymal stem cells (MSCs) and the TGF-β1–regulated pathways that modulate MSC proliferation and myofibroblast differentiation. Human prostate MSCs combined with prostate cancer cells expressing TGF-β1 resulted in commitment to myofibroblasts. TGF-β1–regulated runt-related transcription factor 1 (RUNX1) was required for cell cycle progression and proliferation of progenitors. RUNX1 also inhibited, yet did not block, differentiation. Knockdown of RUNX1 in prostate or bone marrow-derived MSCs resulted in cell cycle arrest, attenuated proliferation, and constitutive differentiation to myofibroblasts. These data show that RUNX1 is a key transcription factor for MSC proliferation and cell fate commitment in myofibroblast differentiation. This work also shows that the normal human prostate gland contains tissue-derived MSCs that exhibit multilineage differentiation similar to bone marrow-derived MSCs. Targeting RUNX1 pathways may represent a therapeutic approach to affect myofibroblast proliferation and biology in multiple disease states.
Aldrich-McKelvey scaling is a powerful method that corrects for differential item functioning (DIF) in estimating the positions of political stimuli (e.g., parties and candidates) and survey respondents along a latent policy dimension from issue scale data. DIF arises when respondents interpret issue scales (e.g., the standard liberal-conservative scale) differently and distort their placements of the stimuli and themselves. We develop a Bayesian implementation of the classical maximum likelihood Aldrich-McKelvey scaling method that overcomes some important shortcomings in the classical procedure. We then apply this method to study citizens' ideological preferences and perceptions using data from the 2004–2012 American National Election Studies and the 2010 Cooperative Congressional Election Study. Our findings indicate that DIF biases self-placements on the liberal-conservative scale in a way that understates the extent of polarization in the contemporary American electorate and that citizens have remarkably accurate perceptions of the ideological positions of senators and Senate candidates.
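A minimal, noiseless sketch can show how DIF understates polarization and how an Aldrich-McKelvey-style correction recovers it (all positions and distortion parameters below are hypothetical, and the paper's actual implementation is Bayesian, not this two-step regression). Each respondent reports stimuli and self through an individual shift-and-stretch mapping; the mapping is estimated by regressing their reports on consensus stimulus positions, then inverted for the self-placement.

```python
import statistics

zeta = [-2.0, -1.0, 0.5, 2.0]                  # true positions of four stimuli
x_true = [-2.5, -1.5, -0.5, 0.5, 1.5, 2.5]     # true respondent ideal points
alpha = [0.1, -0.1, 0.05, -0.05, 0.1, -0.1]    # DIF shift per respondent
beta = [0.5, 0.7, 0.9, 0.9, 0.7, 0.5]          # DIF stretch: extremists compress most

# Each respondent reports stimuli and self through their own distorted mapping
y = [[a + b * z for z in zeta] for a, b in zip(alpha, beta)]
self_raw = [a + b * x for a, b, x in zip(alpha, beta, x_true)]

def ols(xs, ys):
    """Intercept and slope of the least-squares line ys ~ xs."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(xs, ys))
             / sum((xi - mx) ** 2 for xi in xs))
    return my - slope * mx, slope

# Estimate each respondent's mapping against consensus (column-mean) stimulus
# positions, then invert that mapping for the self-placement
zeta_hat = [statistics.mean(col) for col in zip(*y)]
self_corrected = []
for reports, v in zip(y, self_raw):
    a_i, b_i = ols(zeta_hat, reports)
    self_corrected.append((v - a_i) / b_i)

raw_sd = statistics.pstdev(self_raw)            # spread of distorted self-placements
corrected_sd = statistics.pstdev(self_corrected)
```

Because extremists compress their reports most in this example, the raw self-placements understate the true spread of ideal points, while the corrected placements restore it (up to an affine rescaling).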
In a randomized trial, 294 patients with advanced heart failure were assigned to receive either a new centrifugal-flow pump or an axial-flow pump. At 6 months, the centrifugal-flow pump was associated with better outcomes.
A scarcity of effective therapeutic options for advanced heart failure has led to the development of durable mechanical circulatory support devices. Left ventricular assist devices, more accurately known as left ventricular assist systems, increase the rate of survival and improve quality of life among patients with advanced heart failure. However, these clinical benefits are balanced by an increased risk of infection, bleeding, neurologic events, and pump malfunction that is due principally to pump thrombosis.1,2
As adoption of circulatory pumps has expanded, concerns about pump thrombosis have heightened. In 2013, two reports suggested that there has been an increase in . . .