Purpose To evaluate the efficacy of deep convolutional neural networks (DCNNs) for detecting tuberculosis (TB) on chest radiographs. Materials and Methods Four deidentified HIPAA-compliant datasets, exempted from review by the institutional review board, were used in this study; together they comprised 1007 posteroanterior chest radiographs. The datasets were split into training (68.0%), validation (17.1%), and test (14.9%) sets. Two different DCNNs, AlexNet and GoogLeNet, were used to classify the images as having manifestations of pulmonary TB or as healthy. Both untrained networks and networks pretrained on ImageNet were used, and the training data were augmented with multiple preprocessing techniques. The best-performing algorithms were combined into ensembles. For cases where the classifiers disagreed, an independent board-certified cardiothoracic radiologist blindly interpreted the images to evaluate a potential radiologist-augmented workflow. Receiver operating characteristic curves and areas under the curve (AUCs) were used to assess model performance, with the DeLong method used for statistical comparison of receiver operating characteristic curves. Results The best-performing classifier, an ensemble of the AlexNet and GoogLeNet DCNNs, had an AUC of 0.99. The AUCs of the pretrained models were greater than those of the untrained models (P < .001). Augmenting the dataset further increased accuracy (P = .03 for AlexNet and P = .02 for GoogLeNet). The DCNNs disagreed on 13 of the 150 test cases; these were blindly reviewed by the cardiothoracic radiologist, who correctly interpreted all 13 (100%). This radiologist-augmented approach yielded a sensitivity of 97.3% and a specificity of 100%. Conclusion Deep learning with DCNNs can accurately classify TB at chest radiography with an AUC of 0.99. A radiologist-augmented approach for cases of classifier disagreement further improved accuracy.
RSNA, 2017.
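The ensembling and AUC evaluation described above can be sketched in a few lines. This is an illustrative toy example, not the study's code: the probability values and labels below are invented, the ensemble is assumed to be a simple unweighted average of the two networks' outputs, and the AUC is computed directly as the Mann-Whitney probability that a random positive case outranks a random negative one.

```python
def auc(labels, scores):
    """AUC = P(score of a positive > score of a negative); ties count 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def ensemble(p_a, p_b):
    """Unweighted average of two models' predicted TB probabilities (an assumption)."""
    return [(a + b) / 2 for a, b in zip(p_a, p_b)]

labels = [1, 1, 1, 0, 0, 0]                 # 1 = TB, 0 = healthy (toy labels)
p_alex = [0.9, 0.6, 0.3, 0.7, 0.2, 0.1]     # hypothetical AlexNet outputs
p_goog = [0.8, 0.7, 0.6, 0.5, 0.4, 0.1]     # hypothetical GoogLeNet outputs

print(round(auc(labels, ensemble(p_alex, p_goog)), 3))
```

Averaging calibrated probabilities is one common way to ensemble two classifiers; the abstract does not specify which combination rule the authors used.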
Idecabtagene vicleucel (ide-cel) is an autologous B-cell maturation antigen-directed chimeric antigen receptor T-cell therapy approved for relapsed/refractory multiple myeloma (RRMM) on the basis of the phase II pivotal KarMMa trial, which demonstrated best overall and ≥ complete response rates of 73% and 33%, respectively. We report clinical outcomes with standard-of-care (SOC) ide-cel under the commercial Food and Drug Administration label.
Data were retrospectively collected from patients with RRMM who underwent leukapheresis as of February 28, 2022, at 11 US institutions with intent to receive SOC ide-cel. Toxicities were graded per American Society for Transplantation and Cellular Therapy guidelines and managed according to each institution's policies. Responses were graded on the basis of the International Myeloma Working Group response criteria.
One hundred fifty-nine of 196 leukapheresed patients had received ide-cel by the data cutoff. One hundred twenty (75%) infused patients would have been ineligible for participation in the KarMMa clinical trial because of comorbidities at the time of leukapheresis. Cytokine release syndrome occurred in 82% of patients (grade ≥ 3 in 3%) and neurotoxicity in 18% (grade ≥ 3 in 6%). Best overall and ≥ complete response rates were 84% and 42%, respectively. At a median follow-up of 6.1 months from chimeric antigen receptor T-cell infusion, the median progression-free survival was 8.5 months (95% CI, 6.5 to not reached) and the median overall survival was 12.5 months (95% CI, 11.3 to not reached). Patients with previous exposure to B-cell maturation antigen-targeted therapy, high-risk cytogenetics, Eastern Cooperative Oncology Group performance status ≥ 2 at lymphodepletion, and younger age had inferior progression-free survival on multivariable analysis.
The safety and efficacy of ide-cel in patients with RRMM in the SOC setting were comparable with those in the phase II pivotal KarMMa trial despite most patients (75%) not meeting trial eligibility criteria.
Immune-related adverse events (irAEs) typically occur within 4 months of starting anti-programmed cell death protein 1 (PD-1)-based therapy (anti-PD-1 ± anti-cytotoxic T-lymphocyte-associated protein 4 [CTLA4]), but delayed irAEs (onset >12 months after commencement) can also occur. This study describes the incidence, nature and management of delayed irAEs in patients receiving anti-PD-1-based immunotherapy.
Patients with delayed irAEs from 20 centres were studied. The incidence of delayed irAEs was estimated as a proportion of melanoma patients treated with anti-PD-1-based therapy and surviving >1 year. Onset, clinical features, management and outcomes of irAEs were examined.
One hundred and eighteen patients developed a total of 140 delayed irAEs (20 after initial combination with anti-CTLA4), with an estimated incidence of 5.3% (95% confidence interval 4.0-6.9, 53/999 patients at sites with available data). The median time to onset of delayed irAEs was 16 months (range 12-53 months). Eighty-seven patients (74%) were on anti-PD-1 at irAE onset, 15 patients (12%) were <3 months from the last dose and 16 patients (14%) were >3 months from the last dose of anti-PD-1. The most common delayed irAEs were colitis, rash and pneumonitis; 55 of the 140 irAEs (39%) were grade ≥ 3. Steroids were required in 80 patients (68%), as well as an additional immunosuppressive agent in 27 patients (23%). There were two irAE-related deaths: encephalitis with onset during anti-PD-1 and a multiple-organ irAE with onset 11 months after ceasing anti-PD-1. Early irAEs (<12 months) had also occurred in 69 patients (58%), affecting a different organ from the delayed irAE in 59 patients (86%).
Delayed irAEs occur in a small but relevant subset of patients. Delayed irAEs are often different from previous irAEs, may be high grade and can lead to death. They mostly occur in patients still receiving anti-PD-1. The risk of delayed irAE should be considered when deciding the duration of treatment in responding patients. However, patients who stop treatment may also rarely develop delayed irAE.
•The incidence of delayed irAEs >12 months after commencing anti-PD-1 was 5.3%.
•Delayed irAEs occurred in 118 patients; these were often high grade (39% grade ≥ 3), including two delayed irAE-related deaths.
•Delayed irAEs were often difficult to manage; 68% required steroids and 23% required an additional immunosuppressive agent.
•Most occurred during anti-PD-1 therapy (74%), but delayed irAEs were also reported up to 26 months after stopping anti-PD-1.
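The incidence figure above comes from a simple binomial proportion: 53 of 999 patients at sites with available data, or 5.3%. The abstract does not state how the 95% confidence interval (4.0-6.9) was computed; the sketch below uses the Wilson score interval as an assumption, which reproduces essentially the same bounds.

```python
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% CI for a binomial proportion k/n (method assumed,
    since the abstract does not specify how its CI was derived)."""
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

lo, hi = wilson_ci(53, 999)
print(f"{53/999:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```

An exact (Clopper-Pearson) interval would give very similar limits at this sample size.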
Recurrently mutated genes and chromosomal abnormalities have been identified in myelodysplastic syndromes (MDS). We aimed to integrate these genomic features into disease classification and prognostication.
We retrospectively enrolled 2,043 patients. Using Bayesian networks and Dirichlet processes, we combined mutations in 47 genes with cytogenetic abnormalities to identify genetic associations and subgroups. Random-effects Cox proportional hazards multistate modeling was used for developing prognostic models. An independent validation on 318 cases was performed.
We identify eight MDS groups (clusters) according to specific genomic features. In five groups, dominant genomic features include splicing gene mutations ( , , and ) that occur early in disease history, determine specific phenotypes, and drive disease evolution. These groups display different prognoses (groups with  mutations being associated with better survival). Specific co-mutation patterns account for clinical heterogeneity within - and -related MDS. MDS with complex karyotype and/or  gene abnormalities and MDS with acute leukemia-like mutations show the poorest prognosis. MDS with 5q deletion are clustered into two distinct groups according to the number of mutated genes and/or the presence of  mutations. By integrating 63 clinical and genomic variables, we define a novel prognostic model that generates personally tailored predictions of survival. The predicted and observed outcomes correlate well in internal cross-validation and in an independent external cohort. This model substantially improves the predictive accuracy of currently available prognostic tools. We have created a Web portal that allows outcome predictions to be generated for user-defined constellations of genomic and clinical features.
The genomic landscape of MDS reveals distinct subgroups associated with specific clinical features and discrete patterns of evolution, providing a proof of concept for next-generation disease classification and prognostication.
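Validating a survival model, as described above, hinges on whether predicted risk agrees with observed outcomes in the presence of censoring. One standard summary is Harrell's concordance index; the sketch below is illustrative only (not the authors' pipeline, which used random-effects Cox multistate models), and the patient data are invented.

```python
def c_index(times, events, risks):
    """Harrell's concordance index: the fraction of comparable patient
    pairs whose predicted risk ordering matches their observed outcomes.

    A pair (i, j) is comparable when the patient with the shorter
    follow-up time had an observed event; ties in risk count as 0.5.
    """
    num = den = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i]:
                den += 1
                if risks[i] > risks[j]:
                    num += 1
                elif risks[i] == risks[j]:
                    num += 0.5
    return num / den

times  = [5, 8, 12, 20, 30]          # months to event or censoring (toy)
events = [1, 1, 0, 1, 0]             # 1 = death observed, 0 = censored
risks  = [0.9, 0.35, 0.4, 0.3, 0.2]  # hypothetical model risk scores

print(round(c_index(times, events, risks), 3))
```

A value of 0.5 corresponds to random predictions and 1.0 to perfect risk ordering; "predicted and observed outcomes correlate well" in the abstract corresponds to a high concordance on held-out data.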
The human repeated insult patch test (HRIPT) has a history of use in the fragrance industry as a component of safety evaluation, exclusively to confirm the absence of skin sensitization at a defined dose.
The aim of the study was to document the accumulated experience from more than 30 years of conducting HRIPTs.
A retrospective collation of HRIPT studies carried out to a consistent protocol was undertaken, with each study comprising a minimum of 100 volunteers.
The HRIPT outcomes from 154 studies on 134 substances using 16,512 volunteers were obtained. Most studies confirmed that at the selected induction/challenge dose, sensitization was not induced. Allergy was induced in 0.12% of subjects (n = 20). However, in the last 11 years, only 3 (0.03%) of 9854 subjects became sensitized, perhaps because of improved definition of a safe HRIPT dose from the local lymph node assay and other skin sensitization methodologies, as well as more rigorous application of the standard protocol after its publication in 2008. This experience demonstrates that de novo sensitization induction is rare and becoming rarer, but the HRIPT plays an important role as an indicator that toxicological predictions from nonhuman test methods (in vivo and in vitro) can be imperfect.
Currently, little is known about the association between assessment intensity, burden, data quantity, and data quality in experience sampling method (ESM) studies. Researchers therefore have insufficient information to make informed decisions about the design of their ESM study. Our aim was to investigate the effects of different sampling frequencies and questionnaire lengths on burden, compliance, and careless responding. Students (n = 163) received either a 30- or 60-item questionnaire three, six, or nine times per day for 14 days. Preregistered multilevel regression analyses and analyses of variance were used to analyze the effect of design condition on momentary outcomes, changes in those outcomes over time, and retrospective outcomes. Our findings support increased burden and compromised data quantity and quality with longer questionnaires, but not with increased sampling frequency. We therefore advise against the use of long ESM questionnaires, whereas high sampling frequencies do not seem to be associated with negative consequences.