Advances in cancer immunology have increased the use of immune checkpoint inhibitors in clinical practice; however, not all patients respond, and treatment can have severe side effects. Blood-based immunological biomarkers are an attractive means of predicting which patients will respond to therapy, but reliable biomarkers for immune checkpoint blockade are lacking. This study aimed to identify, before or early in treatment, the patients who would respond best to PD-1 inhibitors. We hypothesised that higher baseline PD-L1 and/or PD-1 expression on peripheral blood T cells could predict radiological response to PD-1 inhibitors. This pilot prospective cohort study assessed 26 patients with melanoma or non-small cell lung cancer treated with pembrolizumab, nivolumab, or combined nivolumab/ipilimumab. Response was assessed by RECIST 1.1. Peripheral blood lymphocytes collected at baseline, after one cycle, at 10 weeks, and at discontinuation of therapy were analysed by flow cytometry. Patients with a higher proportion of PD-L1+ T cells at baseline had an improved objective response to PD-1 inhibitor therapy, and patients with a lower proportion of regulatory T cells at baseline experienced more immune-related adverse events. These findings may prove useful in clinical decision making. Further studies with larger cohorts are required to validate these findings.
During solid organ transplantation, donor leukocytes, including myeloid cells, are transferred within the organ to the recipient. Both tolerogenic and alloreactive roles have been attributed to donor myeloid cells; however, their subset-specific retention posttransplantation has not been investigated in detail.
Major histocompatibility complex (MHC)-matched and mismatched liver transplants were performed in mice, and the fate of donor and recipient myeloid cells was assessed.
Following MHC-matched transplantation, a proportion of donor myeloid cells was retained in the graft, whereas others egressed and persisted in the blood, spleen, and bone marrow but not the lymph nodes. In contrast, after MHC-mismatched transplantation, all donor myeloid cells, except Kupffer cells, were depleted. This depletion was mediated by recipient T and B cells, as all donor myeloid subsets were retained in MHC-mismatched grafts when recipients lacked T and B cells. Recipient myeloid cells rapidly infiltrated MHC-matched and, to a greater extent, MHC-mismatched liver grafts. MHC-mismatched grafts underwent a transient rejection episode on day 7, coinciding with a transition of graft macrophages to a regulatory phenotype, after which rejection resolved.
Phenotypic and kinetic differences in the myeloid cell responses between MHC-matched and mismatched grafts were identified. A detailed understanding of the dynamics of immune responses to transplantation is critical to improving graft outcomes.
Systematic direct observation (SDO) is frequently used in schools to document student response to evidence-based interventions, determine eligibility for special education services, and provide objective data during high-stakes decisions. However, there are several limitations associated with this widely used data collection tool, including a shortage of service providers available to implement it and the significant travel time required of itinerant personnel. Using videoconferencing (VC) software to aid in the implementation of SDO is an intuitive application of technology that stands to increase the feasibility and efficiency with which SDO can be utilized in research and practice. The purpose of this study was to evaluate the reliability and equivalence of the results generated from two modes of SDO: traditional in-vivo SDO and SDO conducted through VC software. The results suggest that VC SDO produces estimates of student on-task behavior that are practically equivalent (i.e., within ±3%) to estimates generated through traditional SDO. Furthermore, two frequently used reliability indices indicate that VC SDO is adequately reliable relative to traditional in-vivo SDO. Implications for school-based practice are discussed.
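For illustration only, the following is a minimal sketch of the kind of practical-equivalence check described above, assuming paired percent-on-task estimates from the two observation modes and the ±3 percentage-point criterion; the variable names and data are hypothetical, not the study's.

```python
import numpy as np

# Hypothetical paired estimates of percent on-task behavior for the same
# observation sessions, scored in-vivo and via videoconference (VC).
in_vivo = np.array([82.0, 74.5, 91.0, 68.0, 77.5])
vc      = np.array([80.5, 76.0, 89.5, 70.0, 76.0])

# Practical-equivalence criterion from the study: estimates within +/-3
# percentage points of each other are treated as equivalent.
EQUIVALENCE_MARGIN = 3.0

differences = vc - in_vivo
within_margin = np.abs(differences) <= EQUIVALENCE_MARGIN

print(f"Mean difference (VC - in-vivo): {differences.mean():+.2f} points")
print(f"Sessions within the margin: {within_margin.sum()}/{len(differences)}")
```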
The present study investigated the effectiveness of a novel class‐wide intervention, the Classroom Password, for increasing the academic engaged behavior of middle school students. The effectiveness of an independent group contingency was evaluated using a concurrent multiple baseline design across three seventh‐ and eighth‐grade classrooms. Results indicated that the intervention was effective across all three classrooms in increasing students’ academic engagement, or on‐task behavior, as evidenced by visual analysis and moderate to large effect sizes. Decreases in disruptive behavior were also observed across all three classrooms. Off‐task behavior was not substantially affected in any of the three classrooms. The intervention received mixed ratings from the classroom teachers regarding its social validity. Results of the present study suggest that the Classroom Password may be an effective class‐wide intervention for increasing the academically engaged behavior and decreasing the disruptive behavior of middle school students during instructional time.
Direct behavior ratings (DBRs) have been proposed as an efficient method to assess student behavior in the classroom due to their relative ease of administration compared to alternative methods like systematic direct observation. DBRs are considered low‐inference assessments of behavior because they are designed to be completed immediately following a specified observation period of student behavior; however, in practice it is common for teachers and other respondents to delay completion of a DBR until they are reminded to do so. It is unclear what effect, if any, this latency between observation and DBR completion has on rater accuracy. Thus, the purpose of this study was to examine the effect of completion latency on accuracy in an analogue setting. Two hundred forty‐one undergraduate students (83.8% female) with a mean age of 21 years participated across eight groups and were asked to complete an electronic DBR either immediately after watching a video of student behavior or after a predetermined delay of 5 minutes, 15 minutes, 30 minutes, 1 hour, 2 hours, 4 hours, or 6 hours. A one‐way analysis of variance revealed no statistically significant relationship between completion latency and DBR accuracy, F(7, 233) = 0.959, p = .46, η² = .028.
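As a loose illustration, here is a minimal sketch of the kind of one-way ANOVA reported above, run across eight latency groups with scipy.stats.f_oneway; the accuracy scores are randomly generated placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical DBR accuracy scores for the eight completion-latency groups:
# immediate, 5 min, 15 min, 30 min, 1 h, 2 h, 4 h, and 6 h.
groups = [rng.normal(loc=0.8, scale=0.1, size=30) for _ in range(8)]

# One-way ANOVA testing whether mean accuracy differs across latency groups.
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")

# Eta-squared effect size: between-groups sum of squares over total.
all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_total = ((all_scores - grand_mean) ** 2).sum()
print(f"eta-squared = {ss_between / ss_total:.3f}")
```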
Research based on single-case designs (SCDs) is frequently utilized in educational settings to evaluate the effect of an intervention on student behavior. Visual analysis is the primary method of evaluating SCD data, despite research noting concerns regarding the reliability of the procedure. Recent research suggests that characteristics of the graphic display may contribute to poor reliability and overestimation of intervention effects. This study investigated the effect of increasing or decreasing the data points per x- to y-axis ratio (DPPXYR) on rater evaluations of functional relations and effect sizes in SCD data sets. Twenty-nine individuals (58.6% male) with experience in SCDs were asked to evaluate 40 multiple baseline data sets. Two data sets reporting each of null, small, moderate, and large intervention effects (8 in total) were modified by manipulating the x- to y-axis ratio (5 variations each), resulting in 40 total graphs. Results indicate that raters scored effects as larger as the DPPXYR decreased. Additionally, a two-way within-subjects analysis of variance (ANOVA) revealed a significant main effect of DPPXYR manipulation on effect size ratings, F(2.11, 58.98) = 58.05, p < .001, η² = .675, and an interaction between DPPXYR manipulation and magnitude of effect, F(6.71, 187.78) = 11.45, p < .001, η² = .29. Overall, results of the study indicate that researchers and practitioners should maintain a DPPXYR of .14 or larger in the interest of more conservative effect size judgments.
Impact and Implications
This study suggests that raters are more confident that a functional relation is present and that an intervention effect is larger to the degree that the x- to y-axis ratio of graphically presented single-case data approaches 1:1. This finding highlights the need for standardization of graphical display of single-case design data.
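A minimal sketch of how graphs could be screened against the .14 benchmark, assuming the operationalization DPPXYR = (x-axis length / y-axis length) / number of data points; this formula and all values below are assumptions for illustration, and the study's own definition should be consulted before use.

```python
# Hypothetical screen of single-case graphs against the DPPXYR benchmark.
# Assumed operationalization: DPPXYR = (x_len / y_len) / n_points, where
# x_len and y_len are the physical axis lengths.
MIN_DPPXYR = 0.14  # threshold recommended in the study

def dppxyr(x_len: float, y_len: float, n_points: int) -> float:
    """Data points per x- to y-axis ratio (assumed formula)."""
    return (x_len / y_len) / n_points

graphs = [
    {"name": "Classroom A", "x_len": 15.0, "y_len": 10.0, "n_points": 10},
    {"name": "Classroom B", "x_len": 20.0, "y_len": 5.0, "n_points": 40},
]

for g in graphs:
    value = dppxyr(g["x_len"], g["y_len"], g["n_points"])
    status = "ok" if value >= MIN_DPPXYR else "below benchmark"
    print(f'{g["name"]}: DPPXYR = {value:.3f} ({status})')
```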
Background. Adult mortality in the first 3 months on antiretroviral therapy (ART) is higher in low-income than in high-income countries, with more similar mortality after 6 months. However, the specific patterns of changing risk and causes of death have rarely been investigated in adults, nor compared with children, in low-income countries.
Methods. We used flexible parametric hazard models to investigate how mortality risks varied over the first year on ART in human immunodeficiency virus-infected adults (aged 18-73 years) and children (aged 4 months to 15 years) in 2 trials in Zimbabwe and Uganda.
Results. One hundred seventy-nine of 3316 (5.4%) adults and 39 of 1199 (3.3%) children died; half of adult/pediatric deaths occurred in the first 3 months. Mortality variation over year 1 was similar; at all CD4 counts/CD4%, mortality risk was greatest between days 30 and 50, declined rapidly to day 180, then declined more slowly. One-year mortality after initiating ART with 0-49, 50-99, or >100 CD4 cells/μL was 9.4%, 4.5%, and 2.9%, respectively, in adults, and 10.1%, 4.4%, and 1.3%, respectively, in children aged 4-15 years. Mortality in children aged 4 months to 3 years initiating ART in equivalent CD4% strata was also similar (0%-4%: 9.1%; 5%-9%: 4.5%; >10%: 2.8%). Only 10 of 179 (6%) adult deaths and 1 of 39 (3%) child deaths were probably medication-related. The most common cause of death was septicemia/meningitis in both adults (20%, median 76 days) and children (36%, median 79 days); pneumonia also commonly caused child deaths (28%, median 41 days).
Conclusions. Children >4 years and adults with low CD4 values have remarkably similar, and high, mortality risks in the first 3 months after ART initiation in low-income countries, similar to cohorts of untreated individuals. Bacterial infections are a major cause of death in both adults and children; targeted interventions could have important benefits.
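The paper's analysis used flexible parametric hazard models; as a loose illustration of the underlying idea, here is a minimal sketch that instead uses lifelines' Kaplan-Meier estimator to compare one-year survival across baseline CD4 strata. The follow-up data, event rates, and stratum labels are invented placeholders, not trial data.

```python
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(1)
kmf = KaplanMeierFitter()

# Hypothetical follow-up data: days on ART (capped at 365) and a death
# indicator, for three invented baseline CD4 strata. A real analysis would
# use patient-level trial data and flexible parametric hazard models.
strata = {"CD4 0-49": 0.10, "CD4 50-99": 0.05, "CD4 >=100": 0.03}

for label, one_year_risk in strata.items():
    n = 500
    died = rng.random(n) < one_year_risk
    days = np.where(died, rng.integers(1, 365, size=n), 365)
    kmf.fit(durations=days, event_observed=died, label=label)
    surv_365 = kmf.survival_function_at_times(365).iloc[0]
    print(f"{label}: estimated 1-year survival = {surv_365:.3f}")
```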