Eribulin mesylate (E7389) is a synthetic analog of the marine macrolide halichondrin B, which acts as a novel microtubule
modulator with a distinct mechanism of action. Two eribulin mesylate phase 1 studies exploring weekly and 3-weekly schedules
are reported in this issue. These trials show linear pharmacokinetics, a toxicity profile consisting of neutropenia and fatigue,
and early hints of antitumor activity. In this commentary we give a brief historical perspective of the halichondrins and
put into context eribulin mesylate as a novel tubulin modulator.
•Word embeddings (WE) improve bag-of-words disambiguation with Support Vector Machines.
•Large word embedding vectors built on large contexts improve performance.
•Recurrent networks with WE improve on the disambiguation accuracy of SVMs with non-WE features.
•Bag-of-words and embeddings with Support Vector Machines have the best disambiguation accuracy.
Word sense disambiguation helps identify the proper sense of ambiguous words in text. With large terminologies such as the UMLS Metathesaurus, many ambiguities appear, and highly effective disambiguation methods are required. Supervised learning is one approach to disambiguation: features extracted from the context of an ambiguous word are used to identify its proper sense. The type of features has an impact on machine learning methods and thus affects disambiguation performance. In this work, we have evaluated several types of features derived from the context of the ambiguous word, and we have also explored more global features derived from MEDLINE using word embeddings. Results show that word embeddings improve the performance of more traditional features and also enable recurrent neural network classifiers based on Long Short-Term Memory (LSTM) nodes. The combination of unigrams and word embeddings with an SVM sets a new state-of-the-art performance, with a macro accuracy of 95.97 on the MSH WSD data set.
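The best-performing configuration described above, unigram bag-of-words features concatenated with averaged context word embeddings and fed to a linear SVM, can be sketched as follows. This is an illustrative toy, not the authors' code: the contexts, senses, and 4-dimensional embedding table are invented placeholders, whereas the paper uses MEDLINE-trained embeddings and the MSH WSD corpus.

```python
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

# Toy contexts around the ambiguous word "cold" (disease vs. temperature sense).
contexts = [
    "patient presented with cold symptoms and fever",
    "common cold virus infection treated with rest",
    "samples stored at cold temperature in the freezer",
    "cold climate storage conditions for reagents",
]
senses = ["disease", "disease", "temperature", "temperature"]

# Toy 4-dimensional embeddings; the paper derives these from MEDLINE.
emb = {
    "fever": np.array([1.0, 0.1, 0.0, 0.0]),
    "virus": np.array([0.9, 0.2, 0.1, 0.0]),
    "infection": np.array([0.8, 0.1, 0.0, 0.1]),
    "freezer": np.array([0.0, 0.1, 1.0, 0.9]),
    "temperature": np.array([0.1, 0.0, 0.9, 1.0]),
    "climate": np.array([0.0, 0.2, 0.8, 0.9]),
}

def avg_embedding(text, dim=4):
    """Average the embeddings of context words that have a vector."""
    vecs = [emb[w] for w in text.split() if w in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

vectorizer = CountVectorizer()
bow = vectorizer.fit_transform(contexts)                  # unigram features
dense = np.vstack([avg_embedding(c) for c in contexts])   # embedding features
X = hstack([bow, csr_matrix(dense)])                      # combined feature space

clf = LinearSVC().fit(X, senses)

test = "cold storage in the freezer"
x = hstack([vectorizer.transform([test]),
            csr_matrix(avg_embedding(test).reshape(1, -1))])
print(clf.predict(x)[0])
```

The key design point mirrored here is that the sparse unigram space and the dense embedding space are simply concatenated before training, so the SVM can weight both kinds of evidence jointly.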
Evaluation of Word Sense Disambiguation (WSD) methods in the biomedical domain is difficult because the available resources are either too small or too focused on specific types of entities (e.g. diseases or genes). We present a method that can be used to automatically develop a WSD test collection using the Unified Medical Language System (UMLS) Metathesaurus and the manual MeSH indexing of MEDLINE. We demonstrate the use of this method by developing such a data set, called MSH WSD.
In our method, the Metathesaurus is first screened to identify ambiguous terms whose possible senses consist of two or more MeSH headings. We then use each ambiguous term and its corresponding MeSH headings to extract MEDLINE citations where the term and only one of the MeSH headings co-occur. The term found in the MEDLINE citation is automatically assigned the UMLS Concept Unique Identifier (CUI) linked to that MeSH heading, so every instance carries a CUI label. We compare the characteristics of the MSH WSD data set to those of the previously existing NLM WSD data set.
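The labeling rule above can be sketched in a few lines. The term-to-MeSH mapping and the citations below are invented placeholders, not real UMLS or MEDLINE data; only the filtering logic (keep a citation if the term occurs and exactly one candidate MeSH heading is indexed) follows the method described.

```python
# An ambiguous term mapped to two candidate MeSH headings, each linked to a CUI.
term = "cold"
candidates = {"Common Cold": "C0009443", "Cold Temperature": "C0009264"}

# Toy MEDLINE citations: (text, set of MeSH headings assigned by indexers).
citations = [
    ("cold symptoms lasted a week", {"Common Cold", "Humans"}),
    ("stored at cold temperature", {"Cold Temperature"}),
    ("cold chain and common cold study", {"Common Cold", "Cold Temperature"}),
    ("unrelated article", {"Neoplasms"}),
]

def label_instances(term, candidates, citations):
    """Keep citations that contain the term and are indexed with exactly one
    of the candidate MeSH headings; label each with that heading's CUI."""
    instances = []
    for text, headings in citations:
        if term not in text:
            continue
        hits = headings & set(candidates)
        if len(hits) == 1:          # unambiguous: exactly one candidate heading
            heading = hits.pop()
            instances.append((text, candidates[heading]))
    return instances

for text, cui in label_instances(term, candidates, citations):
    print(cui, "<-", text)
```

Note how the third toy citation is discarded: both candidate headings co-occur, so the sense of the term cannot be assigned automatically.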
The resulting MSH WSD data set consists of 106 ambiguous abbreviations, 88 ambiguous terms, and 9 that are a combination of both, for a total of 203 ambiguous entities. For each ambiguous term/abbreviation, the data set contains a maximum of 100 instances per sense obtained from MEDLINE. We evaluated the reliability of the MSH WSD data set using existing knowledge-based methods and compared their performance to the results previously obtained by the same algorithms on the pre-existing NLM WSD data set. We show that the knowledge-based methods achieve different results but keep their relative performance, except for the Journal Descriptor Indexing (JDI) method, whose performance falls below that of the other methods.
The MSH WSD data set allows the evaluation of WSD algorithms in the biomedical domain. Compared to previously existing data sets, MSH WSD contains a larger number of biomedical terms/abbreviations and covers the largest set of UMLS Semantic Types. Furthermore, the MSH WSD data set has been generated automatically reusing already existing annotations and, therefore, can be regenerated from subsequent UMLS versions.
Tumor-associated macrophages (TAM) in the tumor microenvironment (TME) cooperate with cancer stem cells (CSC) to maintain stemness. We recently identified cluster of differentiation 44 (CD44) as a surface marker defining head and neck squamous cell carcinoma (HNSCC) CSC. PI3K-4EBP1-SOX2 activation and signaling regulate CSC properties, yet the upstream molecular control of this pathway and the mechanisms underlying cross-talk between TAM and CSC in HNSCC remain largely unknown. Because CD44 is a molecular mediator in the TME, we propose here that TAM-influenced CD44 signaling could mediate stemness via the PI3K-4EBP1-SOX2 pathway, possibly by modulating availability of hyaluronic acid (HA), the main CD44 ligand. HNSCC IHC was used to identify TAM/CSC relationships, and in vitro coculture spheroid models and in vivo mouse models were used to identify the influence of TAMs on CSC function via CD44. Patient HNSCC-derived TAMs were positively and negatively associated with CSC marker expression at noninvasive and invasive edge regions, respectively. TAMs increased availability of HA and increased cancer cell invasion. HA binding to CD44 increased PI3K-4EBP1-SOX2 signaling and the CSC fraction, whereas CD44-VCAM-1 binding promoted invasive signaling by ezrin/PI3K. In vivo, targeting CD44 decreased PI3K-4EBP1-SOX2 signaling, tumor growth, and CSC numbers. TAM depletion in syngeneic and humanized mouse models also diminished growth and CSC numbers. Finally, a CD44 isoform switch regulated epithelial-to-mesenchymal plasticity, as the standard form of CD44 and CD44v8-10 determined invasive and tumorigenic phenotypes, respectively. We have established a mechanistic link between TAMs and CSCs in HNSCC that is mediated by CD44 intracellular signaling in response to extracellular signals. SIGNIFICANCE: These findings establish a mechanistic link between tumor cell CD44, TAM, and CSC properties at the tumor-stroma interface that can serve as a vital area of focus for target and drug discovery.
Introduction
Adverse drug reactions (ADRs) are unintended reactions caused by a drug or combination of drugs taken by a patient. The current safety surveillance system relies on spontaneous reporting systems (SRSs) and more recently on observational health data; however, ADR detection may be delayed and lack geographic diversity. The broad scope of social media conversations, such as those on Twitter, can include health-related topics. Consequently, these data could be used to detect potentially novel ADRs with less latency. Although research regarding ADR detection using social media has made progress, findings are based on single information sources, and no study has yet integrated drug safety evidence from both an SRS and Twitter.
Objective
The aim of this study was to combine signals from an SRS and Twitter to facilitate the detection of safety signals and compare the performance of the combined system with signals generated by individual data sources.
Methods
We extracted potential drug–ADR posts from Twitter, used Monte Carlo expectation maximization to generate drug safety signals from both the US FDA Adverse Event Reporting System (FAERS) and the posts from Twitter, and then integrated these signals using a Bayesian hierarchical model. The results from the integrated system and the two individual sources were evaluated using a reference standard derived from drug labels. Area under the receiver operating characteristic curve (AUC) was computed to measure performance.
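The evaluation step can be illustrated with a heavily simplified stand-in. The scores below are synthetic placeholders rather than FAERS or Twitter output, and the plain average used to combine them stands in for the paper's Bayesian hierarchical model; only the AUC-against-a-reference-standard comparison mirrors the protocol described.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200                                   # toy drug-ADR pairs
labels = rng.integers(0, 2, size=n)       # reference standard from drug labels

# Toy signal scores: an informative SRS-like source, a noisier Twitter-like one.
srs = labels + rng.normal(0.0, 1.0, n)
twitter = labels + rng.normal(0.0, 2.0, n)
combined = (srs + twitter) / 2            # naive combination, for illustration only

for name, score in [("SRS", srs), ("Twitter", twitter), ("combined", combined)]:
    print(f"{name}: AUC = {roc_auc_score(labels, score):.3f}")
```

Even this toy reproduces the qualitative pattern reported in the results: averaging a strong source with a much noisier one can pull the combined AUC below the strong source alone.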
Results
We observed a significant improvement in the AUC of the combined system when comparing it with Twitter alone, and no improvement when comparing it with the SRS alone. The AUCs ranged from 0.587 to 0.637 for the combined SRS and Twitter system, from 0.525 to 0.534 for Twitter alone, and from 0.612 to 0.642 for the SRS alone. The results varied because different preprocessing procedures were applied to the Twitter data.
Conclusion
The accuracy of signal detection using social media can be improved by combining signals with those from SRSs. However, the combined system cannot achieve better AUC performance than data from FAERS alone, which may indicate that Twitter data are not ready to be integrated into a purely data-driven combination system.
In cancer treatment, apoptosis is a well-recognized cell death mechanism through which cytotoxic agents kill tumor cells. Here we report that dying tumor cells use the apoptotic process to generate potent growth-stimulating signals that stimulate the repopulation of tumors undergoing radiotherapy. Furthermore, activated caspase 3, a key executioner in apoptosis, is involved in this growth stimulation. One downstream effector that caspase 3 regulates is prostaglandin E(2) (PGE(2)), which can potently stimulate growth of surviving tumor cells. Deficiency of caspase 3, either in tumor cells or in tumor stroma, caused substantial tumor sensitivity to radiotherapy in xenograft or mouse tumors. In human subjects with cancer, higher amounts of activated caspase 3 in tumor tissues correlated with markedly increased rates of recurrence and death. We propose the existence of a cell death-induced tumor repopulation pathway in which caspase 3 has a major role.
Assessment of the quality of medical evidence available on the web is a critical step in the preparation of systematic reviews. Existing tools that automate parts of this task validate the quality of individual studies, but not of entire bodies of evidence, and focus on a restricted set of quality criteria.
We proposed a quality assessment task that provides an overall quality rating for each body of evidence (BoE), as well as finer-grained justification for different quality criteria according to the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) framework. For this purpose, we constructed a new data set and developed a machine learning baseline system (EvidenceGRADEr).
We algorithmically extracted quality-related data from all summaries of findings found in the Cochrane Database of Systematic Reviews. Each BoE was defined by a set of population, intervention, comparison, and outcome criteria and assigned a quality grade (high, moderate, low, or very low) together with quality criteria (justification) that influenced that decision. Different statistical data, metadata about the review, and parts of the review text were extracted as support for grading each BoE. After pruning the resulting data set with various quality checks, we used it to train several neural-model variants. The predictions were compared against the labels originally assigned by the authors of the systematic reviews.
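The evaluation protocol described above, a binary classifier per quality criterion scored with 10-fold cross-validated F1, can be sketched as follows. This is not EvidenceGRADEr itself: the features and "risk of bias" labels are synthetic, and a logistic regression stands in for the neural model variants.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                       # stand-in BoE features
w = rng.normal(size=20)
y = (X @ w + rng.normal(0, 1, 500) > 0).astype(int)  # synthetic criterion flag

clf = LogisticRegression(max_iter=1000)
f1_scores = cross_val_score(clf, X, y, cv=10, scoring="f1")
print(f"10-fold F1: {f1_scores.mean():.2f} +/- {f1_scores.std():.2f}")
```

With balanced synthetic labels the score is high; the much lower F1 reported for inconsistency, indirectness, and publication bias reflects how rare those criteria are in the real data, a class-imbalance problem this toy does not reproduce.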
Our quality assessment data set, Cochrane Database of Systematic Reviews Quality of Evidence, contains 13,440 instances, or BoEs labeled for quality, originating from 2252 systematic reviews published on the internet from 2002 to 2020. On the basis of a 10-fold cross-validation, the best neural binary classifiers for quality criteria detected risk of bias at 0.78 F1 (P=.68; R=0.92) and imprecision at 0.75 F1 (P=.66; R=0.86), while the performance on the inconsistency, indirectness, and publication bias criteria was lower (F1 in the range of 0.3-0.4). The prediction of the overall quality grade into 1 of the 4 levels resulted in 0.5 F1. When casting the task as a binary problem by merging the Grading of Recommendations Assessment, Development, and Evaluation classes (high+moderate vs low+very low-quality evidence), we attained 0.74 F1. We also found that the results varied depending on the supporting information provided as input to the models.
Different factors affect the quality of evidence in the context of systematic reviews of medical evidence. Some of these (risk of bias and imprecision) can be automated with reasonable accuracy. Other quality dimensions such as indirectness, inconsistency, and publication bias prove more challenging for machine learning, largely because they are much rarer. This technology could substantially reduce reviewer workload in the future and expedite quality assessment as part of evidence synthesis.
The ability to produce monoclonal antibodies with defined and distinct specificities has resulted in a vast spectrum of therapeutic monoclonal antibodies, including bispecific antibodies (BsAbs). Several types of BsAbs have been produced, but the most well-known of these are trifunctional antibodies (TrAbs or Triomabs) and bispecific T cell engager (BiTE) antibodies. TrAbs have two variable segments for antigen binding and an Fc component to recruit immune cells. Catumaxomab is a TrAb that has orphan drug status from the Food and Drug Administration (FDA) for EpCAM-positive gastric and ovarian tumors and was previously approved by the European Medicines Agency (EMA) for the same indication. One arm of catumaxomab binds to EpCAM, the other binds to CD3 on T cells, and the Fc portion recruits immune cells. Catumaxomab is no longer being produced by the manufacturer due to logistic considerations and is hence not available on the European market.
Blinatumomab is a BiTE that comprises only two variable segments, with one arm binding to CD19 and the other binding to CD3. Blinatumomab has been approved by the FDA for relapsed or refractory B-cell precursor ALL in adults and children.
There are over 50 bispecific antibodies currently in clinical trials for various malignancies, and the hope is that, with a better understanding of the principles and techniques of production, many of these will provide treatment options for many different types of cancer.
The evolving trends of mobility, cloud computing, and collaboration have blurred the perimeter separating corporate networks from the wider world. These new tools and business models enhance productivity and present new opportunities for competitive advantage, although they also introduce new risks. Currently, security is one of the most limiting issues for technological development in fields such as the Internet of Things or cyber-physical systems. This work contributes to the cyber security research field with a design that can incorporate advanced scheduling algorithms and predictive models in a parallel and distributed way, in order to improve intrusion detection in the current scenario, where increased demand for global and wireless interconnection has weakened approaches based on protection tasks running only on specific perimeter security devices. The aim of this paper is to provide a framework to properly distribute intrusion detection system (IDS) tasks, considering security requirements and the variable availability of computing resources. To accomplish this, we propose a novel approach that promotes the integration of personal and enterprise computing resources with externally supplied cloud services in order to satisfy the security requirements. For example, in a business environment there is a set of information resources that need to be specially protected, including data handled and transmitted by small mobile devices. These devices can execute part of the IDS tasks necessary for self-protection, but other tasks could be offloaded to more powerful systems. This integration must be achieved in a dynamic way: cloud resources are used only when necessary, minimizing utility computing costs and the security problems posed by the cloud, but preserving local resources when those are required for business processes or user experience. In addition to satisfying the main objective, the strengths and benefits of the proposed framework can be explored in future research.
This framework provides the integration of different security approaches, including well-known and recent advances in intrusion detection, as well as supporting techniques that increase the resilience of the system. The proposed framework consists of: (1) a controller component, which, among other functions, decides the source and target hosts for each data flow; and (2) a switching mechanism, which redirects data flows as established by the controller's scheduler. The proposed approach has been validated through a number of experiments. First, an experimental distributed IDS (DIDS) is designed by selecting and combining a number of existing IDS solutions. Then, a prototype implementation of the proposed framework, working as a proof of concept, is built. Finally, targeted tests are performed, showing the feasibility of our approach and providing a good insight into future work.
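The controller's placement decision described above can be sketched as a greedy scheduler. This is a hypothetical illustration, not the paper's algorithm: the task fields, CPU costs, and the rule "sensitive tasks stay local, others spill to the cloud only when the device is saturated" are invented to make the local-versus-cloud trade-off concrete.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    cpu_cost: float      # estimated CPU share needed (0..1)
    sensitive: bool      # data must not leave the local perimeter

def schedule(tasks, local_cpu_free):
    """Greedy placement: sensitive tasks stay local; others run locally while
    capacity remains and are offloaded to the cloud only when the device
    is saturated, mirroring the on-demand use of utility computing."""
    placement = {}
    for task in sorted(tasks, key=lambda t: not t.sensitive):  # sensitive first
        if task.sensitive or task.cpu_cost <= local_cpu_free:
            placement[task.name] = "local"
            local_cpu_free = max(0.0, local_cpu_free - task.cpu_cost)
        else:
            placement[task.name] = "cloud"   # cloud used only when necessary
    return placement

tasks = [
    Task("packet-capture", 0.2, sensitive=True),
    Task("signature-match", 0.3, sensitive=False),
    Task("anomaly-model", 0.6, sensitive=False),
]
print(schedule(tasks, local_cpu_free=0.6))
```

A real controller would also weigh network cost, battery, and per-flow security requirements, but the sketch captures the dynamic local/cloud split the framework aims for.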
•Novel framework for scheduling intrusion detection tasks in IoT.
•Flexible integration of cloud computing and mobile computing resources.
•Architecture for deployment of state-of-the-art methods and techniques.
•System resilience achieved by allowing multiple task instances in different devices.
•Experimental results show resource utilization and performance benefits.