Although a vaccine could be available as early as 2016, vector control remains the primary approach used to prevent dengue, the most common and widespread arbovirus of humans worldwide. We reviewed the evidence for effectiveness of vector control methods in reducing its transmission.
Studies of any design published since 1980 were included if they evaluated method(s) targeting Aedes aegypti or Ae. albopictus for at least 3 months. The primary outcome was dengue incidence. Following Cochrane and PRISMA Group guidelines, database searches yielded 960 reports, of which 41 were eligible for inclusion and 19 provided data for meta-analysis. Study duration ranged from 5 months to 10 years. Studies evaluating multiple tools/approaches (23 records) were more common than those evaluating single methods, and environmental management was the most common method (19 studies). Only 9/41 reports were randomized controlled trials (RCTs). Two of the 19 studies evaluating dengue incidence were RCTs, and neither reported a statistically significant impact. No RCTs evaluated the effectiveness of insecticide space-spraying (fogging) against dengue. Based on meta-analyses, house screening significantly reduced dengue risk, OR 0.22 (95% CI 0.05-0.93, p = 0.04), as did combining community-based environmental management and water container covers, OR 0.22 (95% CI 0.15-0.32, p<0.0001). Indoor residual spraying (IRS) did not significantly affect infection risk (OR 0.67; 95% CI 0.22-2.11; p = 0.50). Skin repellents, insecticide-treated bed nets, and traps had no effect (p>0.5), but insecticide aerosols (OR 2.03; 95% CI 1.44-2.86) and mosquito coils (OR 1.44; 95% CI 1.09-1.91) were associated with higher dengue risk (p = 0.01). Although 23/41 studies examined the impact of insecticide-based tools, only 9 evaluated the insecticide susceptibility status of the target vector population during the study.
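For context, pooled odds ratios like those reported here are typically obtained by inverse-variance weighting on the log scale, with each study's standard error recovered from its 95% confidence interval. The sketch below is a generic fixed-effect illustration, not the review's actual meta-analysis code; function names are ours.

```python
import math

def log_or_and_se(or_, lo, hi):
    """Recover the log odds ratio and its standard error from a
    reported OR with a 95% confidence interval."""
    return math.log(or_), (math.log(hi) - math.log(lo)) / (2 * 1.96)

def pooled_or(studies):
    """Fixed-effect (inverse-variance) pooling of (OR, lo, hi) tuples.
    Returns the pooled OR and its 95% CI."""
    num = den = 0.0
    for or_, lo, hi in studies:
        log_or, se = log_or_and_se(or_, lo, hi)
        w = 1.0 / se ** 2          # inverse-variance weight
        num += w * log_or
        den += w
    pooled = num / den
    se_pooled = math.sqrt(1.0 / den)
    ci = (math.exp(pooled - 1.96 * se_pooled),
          math.exp(pooled + 1.96 * se_pooled))
    return math.exp(pooled), ci
```

With a single study, the pooled estimate simply reproduces that study's OR; the weighting only matters once several studies are combined.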
This review and meta-analysis demonstrate the remarkable paucity of reliable evidence for the effectiveness of any dengue vector control method. Standardised studies of higher quality to evaluate and compare methods must be prioritised to optimise cost-effective dengue prevention.
Neural Network Acceptability Judgments. Warstadt, Alex; Singh, Amanpreet; Bowman, Samuel R. Transactions of the Association for Computational Linguistics, 11/2019, Volume 7. Journal article; peer-reviewed; open access.
This paper investigates the ability of artificial neural networks to judge the grammatical acceptability of a sentence, with the goal of testing their linguistic competence. We introduce the Corpus of Linguistic Acceptability (CoLA), a set of 10,657 English sentences labeled as grammatical or ungrammatical from published linguistics literature. As baselines, we train several recurrent neural network models on acceptability classification, and find that our models outperform unsupervised models by Lau et al. (2016) on CoLA. Error analysis on specific grammatical phenomena reveals that both Lau et al.'s models and ours learn systematic generalizations like subject-verb-object order. However, all models we test perform far below human level on a wide range of grammatical constructions.
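CoLA performance is conventionally reported as the Matthews correlation coefficient, which is robust to the corpus's imbalance between acceptable and unacceptable labels; a minimal implementation for binary labels (the function name is ours):

```python
import math

def mcc(y_true, y_pred):
    """Matthews correlation coefficient for binary labels
    (1 = acceptable, 0 = unacceptable)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # Convention: return 0 when any marginal is empty (undefined case).
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom
```

MCC ranges from -1 (perfectly inverted predictions) through 0 (chance-level) to +1 (perfect agreement), which makes it more informative than raw accuracy when one class dominates.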
Memory function involves both the ability to remember details of individual experiences and the ability to link information across events to create new knowledge. Prior research has identified the ventromedial prefrontal cortex (VMPFC) and the hippocampus as important for integrating across events in the service of generalization in episodic memory. The degree to which these memory integration mechanisms contribute to other forms of generalization, such as concept learning, is unclear. The present study used a concept-learning task in humans (both sexes) coupled with model-based fMRI to test whether VMPFC and hippocampus contribute to concept generalization, and whether they do so by maintaining specific category exemplars or abstract category representations. Two formal categorization models were fit to individual subject data: a prototype model that posits abstract category representations and an exemplar model that posits category representations based on individual category members. Latent variables from each of these models were entered into neuroimaging analyses to determine whether VMPFC and the hippocampus track prototype or exemplar information during concept generalization. Behavioral model fits indicated that almost three-quarters of the subjects relied on prototype information when making judgments about new category members. Paralleling prototype dominance in behavior, correlates of the prototype model were identified in VMPFC and the anterior hippocampus with no significant exemplar correlates. These results indicate that the VMPFC and portions of the hippocampus play a broad role in memory generalization and that they do so by representing abstract information integrated from multiple events.
Whether people represent concepts as a set of individual category members or by deriving generalized concept representations abstracted across exemplars has been debated. In episodic memory, generalized memory representations have been shown to arise through integration across events supported by the ventromedial prefrontal cortex (VMPFC) and hippocampus. The current study combined formal categorization models with fMRI data analysis to show that the VMPFC and anterior hippocampus represent abstract prototype information during concept generalization, contributing novel evidence of generalized concept representations in the brain. Results indicate that VMPFC-hippocampal memory integration mechanisms contribute to knowledge generalization across multiple cognitive domains, with the degree of abstraction of memory representations varying along the long axis of the hippocampus.
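The core contrast between the two formal model classes can be sketched in a few lines: a prototype model scores a new item by its similarity to the category's central tendency, while an exemplar model sums its similarity to every stored member. The city-block distance, exponential similarity kernel, and function names below are illustrative assumptions, not the paper's actual parameterization or fitting procedure.

```python
import math

def prototype_score(item, category_members, c=1.0):
    """Similarity of `item` to the category prototype (the mean of
    its members), using an exponential similarity-distance kernel."""
    dim = len(item)
    proto = [sum(m[i] for m in category_members) / len(category_members)
             for i in range(dim)]
    dist = sum(abs(item[i] - proto[i]) for i in range(dim))
    return math.exp(-c * dist)

def exemplar_score(item, category_members, c=1.0):
    """Summed similarity of `item` to every stored category member
    (the core idea of exemplar models such as the generalized
    context model)."""
    total = 0.0
    for m in category_members:
        dist = sum(abs(item[i] - m[i]) for i in range(len(item)))
        total += math.exp(-c * dist)
    return total
```

The two models diverge most for items near the category center: a prototype model treats the (possibly never-seen) average as maximally typical, whereas an exemplar model favors items close to specific studied members.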
Despite doubts about methods used and the association between vector density and dengue transmission, routine sampling of mosquito vector populations is common in dengue-endemic countries worldwide. This study examined the evidence from published studies for the existence of any quantitative relationship between vector indices and dengue cases.
From a total of 1205 papers identified in database searches following Cochrane and PRISMA Group guidelines, 18 were included for review. Eligibility criteria included 3-month study duration and dengue case confirmation by WHO case definition and/or serology. A range of designs were seen, particularly in spatial sampling and analyses, and all but 3 were classed as weak study designs. Eleven of eighteen studies generated Stegomyia indices from combined larval and pupal data. Adult vector data were reported in only three studies. Of thirteen studies that investigated associations between vector indices and dengue cases, 4 reported positive correlations, 4 found no correlation and 5 reported ambiguous or inconclusive associations. Six out of 7 studies that measured Breteau Indices reported dengue transmission at levels below the currently accepted threshold of 5.
There was little evidence of quantifiable associations between vector indices and dengue transmission that could reliably be used for outbreak prediction. This review highlighted the need for standardised sampling protocols that adequately consider dengue spatial heterogeneity. Recommendations for more appropriately designed studies include: standardised designs that elucidate the relationship between vector abundance and dengue transmission; routine adult mosquito sampling; abandoning single values of the Breteau or other indices as universal dengue transmission thresholds; and better knowledge of vector ecology.
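The Stegomyia indices discussed above are simple ratios of standard larval-survey counts: the House Index (percent of houses with at least one positive container), Container Index (percent of inspected containers that are positive), and Breteau Index (positive containers per 100 houses inspected). A minimal sketch of these standard definitions (function and key names are ours):

```python
def stegomyia_indices(houses_inspected, houses_positive,
                      containers_inspected, containers_positive):
    """Classic Stegomyia larval indices.
    HI: % of houses with >= 1 larva/pupa-positive container.
    CI: % of inspected water-holding containers that are positive.
    BI: number of positive containers per 100 houses inspected."""
    hi = 100.0 * houses_positive / houses_inspected
    ci = 100.0 * containers_positive / containers_inspected
    bi = 100.0 * containers_positive / houses_inspected
    return {"HI": hi, "CI": ci, "BI": bi}
```

Note that the Breteau Index is normalized by houses rather than containers, which is why it can exceed the House Index when infested houses hold several positive containers; this is also why single threshold values (such as BI = 5) transfer poorly across settings.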
Cryptic allosteric sites—transient pockets in a folded protein that are invisible to conventional experiments but can alter enzymatic activity via allosteric communication with the active site—are a promising opportunity for facilitating drug design by greatly expanding the repertoire of available drug targets. Unfortunately, identifying these sites is difficult, typically requiring resource-intensive screening of large libraries of small molecules. Here, we demonstrate that Markov state models built from extensive computer simulations (totaling hundreds of microseconds of dynamics) can identify prospective cryptic sites from the equilibrium fluctuations of three medically relevant proteins—β-lactamase, interleukin-2, and RNase H—even in the absence of any ligand. As in previous studies, our methods reveal a surprising variety of conformations—including bound-like configurations—that implies a role for conformational selection in ligand binding. Moreover, our analyses lead to a number of unique insights. First, direct comparison of simulations with and without the ligand reveals that there is still an important role for an induced fit during ligand binding to cryptic sites and suggests new conformations for docking. Second, correlations between amino acid sidechains can convey allosteric signals even in the absence of substantial backbone motions. Most importantly, our extensive sampling reveals a multitude of potential cryptic sites—consisting of transient pockets coupled to the active site—even in a single protein. Based on these observations, we propose that cryptic allosteric sites may be even more ubiquitous than previously thought and that our methods should be a valuable means of guiding the search for such sites.
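Independent of the specific proteins studied, a Markov state model is at bottom a transition probability matrix estimated from a trajectory that has been discretized into states. The toy sketch below shows only that estimation step; the state assignment, clustering, and lag-time selection used in the actual work are omitted, and the function name is ours.

```python
def transition_matrix(traj, n_states, lag=1):
    """Row-normalized transition probability matrix estimated by
    counting lagged transitions in a discrete state trajectory.
    `traj` is a list of integer state labels in [0, n_states)."""
    counts = [[0.0] * n_states for _ in range(n_states)]
    for a, b in zip(traj[:-lag], traj[lag:]):
        counts[a][b] += 1.0
    T = []
    for row in counts:
        total = sum(row)
        # States never visited as a source get a zero row here;
        # real pipelines restrict to the connected set instead.
        T.append([x / total if total else 0.0 for x in row])
    return T
```

Each row of the resulting matrix is the conditional distribution over next states, so visited rows sum to one; spectral analysis of this matrix is what yields the slow conformational processes used to spot transient pockets.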
Markov state models (MSMs), or discrete-time master equation models, are a powerful way of modeling the structure and function of molecular systems like proteins. Unfortunately, MSMs with sufficiently many states to make a quantitative connection with experiments (often tens of thousands of states even for small systems) are generally too complicated to understand. Here, I present a Bayesian agglomerative clustering engine (BACE) for coarse-graining such Markov models, thereby reducing their complexity and making them more comprehensible. An important feature of this algorithm is its ability to explicitly account for statistical uncertainty in model parameters that arises from finite sampling. This advance builds on a number of recent works highlighting the importance of accounting for uncertainty in the analysis of MSMs and provides significant advantages over existing methods for coarse-graining Markov state models. The closed-form expression I derive here for determining which states to merge is equivalent to the generalized Jensen-Shannon divergence, an important measure from information theory that is related to the relative entropy. Therefore, the method has an appealing information theoretic interpretation in terms of minimizing information loss. The bottom-up nature of the algorithm likely makes it particularly well suited for constructing mesoscale models. I also present an extremely efficient expression for Bayesian model comparison that can be used to identify the most meaningful levels of the hierarchy of models from BACE.
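The abstract states that BACE's merge criterion is equivalent to the generalized Jensen-Shannon divergence. That divergence itself is easy to sketch for two discrete distributions, such as the outgoing transition probabilities of two candidate states; the weighting and function names below are a generic illustration, not BACE's derivation.

```python
import math

def entropy(p):
    """Shannon entropy (natural log) of a discrete distribution."""
    return -sum(x * math.log(x) for x in p if x > 0)

def js_divergence(p, q, w=0.5):
    """Generalized (weighted) Jensen-Shannon divergence between two
    discrete distributions p and q: the entropy of their weighted
    mixture minus the weighted average of their entropies."""
    m = [w * a + (1 - w) * b for a, b in zip(p, q)]
    return entropy(m) - w * entropy(p) - (1 - w) * entropy(q)
```

The divergence is zero only when the two distributions are identical, and for disjoint distributions with equal weights it reaches its maximum of ln 2, so two states whose outgoing transitions are nearly indistinguishable are cheap to merge (little information is lost).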
Molecular recognition is determined by the structure and dynamics of both a protein and its ligand, but it is difficult to directly assess the role of each of these players. In this study, we use Markov State Models (MSMs) built from atomistic simulations to elucidate the mechanism by which the Lysine-, Arginine-, Ornithine-binding (LAO) protein binds to its ligand. We show that our model can predict the bound state, binding free energy, and association rate with reasonable accuracy and then use the model to dissect the binding mechanism. In the past, this binding event has often been assumed to occur via an induced fit mechanism because the protein's binding site is completely closed in the bound state, making it impossible for the ligand to enter the binding site after the protein has adopted the closed conformation. More complex mechanisms have also been hypothesized, but these have remained controversial. Here, we are able to directly observe roles for both the conformational selection and induced fit mechanisms in LAO binding. First, the LAO protein tends to form a partially closed encounter complex via conformational selection (that is, the apo protein can sample this state), though the induced fit mechanism can also play a role here. Then, interactions with the ligand can induce a transition to the bound state. Based on these results, we propose that MSMs built from atomistic simulations may be a powerful way of dissecting ligand-binding mechanisms and may eventually facilitate a deeper understanding of allostery as well as the prediction of new protein-ligand interactions, an important step in drug discovery.
We introduce The Benchmark of Linguistic Minimal Pairs (BLiMP), a challenge set for evaluating the linguistic knowledge of language models (LMs) on major grammatical phenomena in English. BLiMP consists of 67 individual datasets, each containing 1,000 minimal pairs, that is, pairs of minimally different sentences that contrast in grammatical acceptability and isolate specific phenomena in syntax, morphology, or semantics. We generate the data according to linguist-crafted grammar templates, and aggregate human agreement with the labels is 96.4%. We evaluate n-gram, LSTM, and Transformer (GPT-2 and Transformer-XL) LMs by observing whether they assign a higher probability to the acceptable sentence in each minimal pair. We find that state-of-the-art models identify morphological contrasts related to agreement reliably, but they struggle with some subtle semantic and syntactic phenomena, such as negative polarity items and extraction islands.
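The evaluation protocol described above reduces to a forced choice on each pair: the model is credited whenever it assigns the acceptable sentence a higher probability than its minimally different counterpart. A minimal sketch (function name is illustrative):

```python
def minimal_pair_accuracy(scored_pairs):
    """Fraction of minimal pairs in which the model assigns a higher
    log-probability to the acceptable sentence. `scored_pairs` is a
    list of (logprob_acceptable, logprob_unacceptable) tuples."""
    correct = sum(1 for good, bad in scored_pairs if good > bad)
    return correct / len(scored_pairs)
```

Because only the comparison within each pair matters, no probability normalization across sentence lengths is needed so long as the two sentences in a pair are minimally different.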
The SARS-CoV-2 nucleocapsid (N) protein is an abundant RNA-binding protein critical for viral genome packaging, yet the molecular details that underlie this process are poorly understood. Here we combine single-molecule spectroscopy with all-atom simulations to uncover the molecular details that contribute to N protein function. N protein contains three dynamic disordered regions that house putative transiently-helical binding motifs. The two folded domains interact minimally such that full-length N protein is a flexible and multivalent RNA-binding protein. N protein also undergoes liquid-liquid phase separation when mixed with RNA, and polymer theory predicts that the same multivalent interactions that drive phase separation also engender RNA compaction. We offer a simple symmetry-breaking model that provides a plausible route through which single-genome condensation preferentially occurs over phase separation, suggesting that phase separation offers a convenient macroscopic readout of a key nanoscopic interaction.
This Account highlights recent advances and discusses major challenges in investigations of cryptic (hidden) binding sites by molecular simulations. Cryptic binding sites are not visible in protein targets crystallized without a ligand and only become visible crystallographically upon binding events. These sites have been shown to be druggable and might provide a rare opportunity to target difficult proteins. However, due to their hidden nature, they are difficult to find through experimental screening. Computational methods based on atomistic molecular simulations remain one of the best approaches to identify and characterize cryptic binding sites. However, not all methods are equally efficient. Some are more apt at quickly probing protein dynamics but do not provide thermodynamic or druggability information, while others that are able to provide such data are demanding in terms of time and resources. Here, we review the recent contributions of mixed-solvent simulations, metadynamics, Markov state models, and other enhanced sampling methods to the field of cryptic site identification and characterization. We discuss how these methods were able to provide precious information on the nature of the site opening mechanisms, to predict previously unknown sites which were used to design new ligands, and to compute the free energy landscapes and kinetics associated with the opening of the sites and the binding of the ligands. We highlight the potential and the importance of such predictions in drug discovery, especially for difficult ("undruggable") targets. We also discuss the major challenges in the field and their possible solutions.