Glyphosate is the most used herbicide on the planet because of its excellent efficacy on almost all weed species and the large-scale adoption of transgenic, glyphosate-resistant (GR) crops. Agnes Rimando became an expert in glyphosate analysis almost 20 years ago to support research on GR crop safety and on mechanisms of evolved glyphosate resistance in weeds. Her work was the first to show that the amount of glyphosate and its primary metabolite aminomethylphosphonic acid (AMPA) that accumulates in GR soybean seed from plants treated with approved glyphosate doses can approach their legal limits. However, she later found that only trace amounts of these compounds accumulate in the seed of GR maize treated with recommended glyphosate doses. She showed that GR canola, the only transgenic crop with a transgene encoding an enzyme for degradation of glyphosate, metabolizes glyphosate to AMPA very rapidly. Her work was instrumental in supporting the view that "yellow flash" symptoms sometimes observed in field-grown GR soybeans are due to accumulation of enough AMPA to cause mild phytotoxicity. She performed the chemical analyses in the only paper to survey the capacity of an array of plant species to metabolize glyphosate to AMPA, finding a wide range in this capacity, with grasses showing little or no metabolism of glyphosate to AMPA and legumes readily metabolizing glyphosate. Lastly, she found no evidence that enhanced degradation of glyphosate is a mechanism of evolved glyphosate resistance in two weed species, but that it might be involved in the natural tolerance of some weeds to glyphosate.
Background and aims: The cancer-protective properties of vegetable consumption are most likely mediated through 'bioactive compounds' that induce a variety of physiologic functions, including acting as direct or indirect antioxidants, regulating enzymes and controlling apoptosis and the cell cycle. The 'functional food' industry has produced and marketed foods enriched with bioactive compounds, but there are no universally accepted criteria for judging the efficacy of the compounds or the enriched foods. Scope: Carotenoids, glucosinolates, polyphenols and selenocompounds are families of bioactive compounds common to vegetables. Although numerous studies have investigated the agricultural and human health implications of enriching foods with one or more of these compounds, inadequate chemical identification of compounds, lack of relevant endpoints and inconsistencies in mechanistic hypotheses and experimental methodologies leave many critical gaps in our understanding of the benefits of such compounds. This review proposes a decision-making process for determining whether there is reasonable evidence of efficacy for both the compound and the enriched food. These criteria have been used to judge the evidence of efficacy for cancer prevention by carotenoids, polyphenols, glucosinolates and selenocompounds. Conclusions: The evidence of efficacy is weak for carotenoids and polyphenols; the evidence is stronger for glucosinolates and lycopene, but production of enriched foods is still premature. Additionally, there is unacceptable variability in the amount and chemical form of these compounds in plants. The evidence of efficacy for selenocompounds is strong, but the clinical study that is potentially the most convincing is still in progress; the variability in the amount and chemical form of Se in plants is also a problem.
These gaps in understanding bioactive compounds and their health benefits should not serve to reduce research interest but should, instead, encourage plant and nutritional scientists to work together to develop strategies for improvement of health through food.
•The total phenolic and quercetin content in the ethanolic extract and its different fractions were studied.
•16 phenolic compounds were identified using an HPLC/HRMS method.
•The dichloromethane and ethyl acetate fractions showed the strongest inhibition of AGE formation.
•Both mono- and di-methylglyoxal quercetin adducts were detected using HPLC–ESI-MSn.
The objective of this study was to investigate the inhibitory effects of Camellia nitidissima Chi (CNC) on advanced glycation end-product (AGE) formation. CNC was extracted with ethanol and further separated into dichloromethane, ethyl acetate, n-butanol, and water-soluble fractions. The ethyl acetate fraction had the highest total phenolic and quercetin content of the fractions. Sixteen phenolic compounds were identified using HPLC Triple TOF MS/MS. The bovine serum albumin (BSA)–glucose assay showed that the dichloromethane and ethyl acetate fractions inhibited AGE formation by 88.1% and 87.5%, respectively, at 2.5 mg/mL. The BSA–methylglyoxal assay showed that the ethyl acetate fraction inhibited AGE formation by 54.1% while the dichloromethane fraction inhibited it by 28.1%. Over 96.0% of methylglyoxal was scavenged by the different fractions within 12 h. Both mono- and di-methylglyoxal quercetin adducts were identified by HPLC–ESI-MSn after incubating quercetin with methylglyoxal. These results suggest that CNC extracts inhibit AGE formation in part through the scavenging of methylglyoxal by phenolic compounds.
The ability to replicate research findings is a cornerstone of science. Recent large-scale attempts to reproduce published psychology studies have resulted in replication of fewer than half of the statistically significant results (Open Science Collaboration, 2015). In this article, we present evidence of a failure to conceptually replicate a study published in Psychological Science (Berman, Jonides, & Kaplan, 2008, Experiment 2). In their publication, Berman et al. (2008) reported that exposure to simulated natural settings restored executive attention as measured by the Attention Network Test in 12 young adults. In our article, we present a conceptual replication attempt with 31 young adults as well as meta-analytic evidence to show that simulated nature does not seem to reliably restore executive attention when combining evidence from 14 studies (N = 612) that also used the executive portion of the Attention Network Test. While simulated nature exposure may provide benefits for some cognitive tasks, our findings and those of other recent meta-analyses question the reliability of simulated nature settings to restore executive attention. We discuss suggestions for future research.
Abstract
Objective
Neuropsychologists have historically used demographically adjusted normative data to account for race-associated differences in neurocognitive performance. Using race-based norms has become increasingly controversial, particularly following the National Football League 2014 concussion settlement, as such practices may perpetuate systemic inequities and biases. Adjusting for the effects of social determinants of brain health may be an alternative approach to race-based norms. A measurable social determinant of brain health that disproportionately affects Black, Indigenous, and People of Color (BIPOC) is health literacy, which has been strongly associated with verbal fluency performance. The current study evaluated the incremental value of health literacy and race in predicting neurocognitive performance on a verbal fluency measure.
Method
The sample comprised 71 BIPOC and 45 White participants (ages 18–87) who underwent an outpatient neuropsychological evaluation. Verbal fluency was measured via the Controlled Oral Word Association Test (F-A-S) and animal fluency. Health literacy was measured via the Short Assessment of Health Literacy-English. Demographics included age, sex, race, and years of formal education. Race was categorized as White or BIPOC since binary classification is common in race-based norms.
Results
Hierarchical regression analysis revealed that the combination of race and health literacy has a significantly larger effect on verbal fluency performance than race alone (p < 0.001). Including this interaction in the model explained 13.07% more variance in neurocognitive performance than race as an independent predictor (R-squared increased from 4.83% to 17.90%).
Conclusions
These findings suggest health literacy may be more helpful for accounting for group differences in neurocognitive performance than race when developing normative data.
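The incremental-variance comparison reported in the Results above can be illustrated with a short sketch. This is a minimal hierarchical-regression example on synthetic data, assuming a binary group indicator and a continuous health-literacy score; the variable names, effect sizes, and data are illustrative stand-ins, not the study's sample.

```python
# Minimal sketch of a hierarchical regression comparison: R^2 for a
# race-only model versus race + health literacy, on SYNTHETIC data.
import numpy as np

rng = np.random.default_rng(0)
n = 116
race = rng.integers(0, 2, n).astype(float)       # 0/1 group indicator (hypothetical)
health_lit = rng.normal(0.0, 1.0, n)             # health literacy score (hypothetical)
fluency = 0.2 * race + 0.8 * health_lit + rng.normal(0.0, 1.0, n)

def r_squared(predictors, y):
    """Ordinary least squares R^2 with an intercept term."""
    X = np.column_stack([np.ones(len(y)), predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

r2_race = r_squared(race[:, None], fluency)
r2_full = r_squared(np.column_stack([race, health_lit]), fluency)
print(f"race-only R^2: {r2_race:.3f}")
print(f"race + health literacy R^2: {r2_full:.3f}")
print(f"delta R^2: {r2_full - r2_race:.3f}")
```

The delta R^2 between the two nested models is the quantity the abstract reports as the 13.07% gain in explained variance.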
In 1987, Parente used the Delphi method to predict changes in the field of cognitive rehabilitation therapy (CRT). Fifty licensed professionals provided predictions about the likely occurrence and probable time courses of 31 scenarios that could possibly have occurred over the 30-year interval between 1987 and 2000+. It has now been 30 years since the initial polling; thus, the purpose of this study was to evaluate the accuracy of these Delphic predictions via two validation methods. First, we reviewed statistical information from nationwide databases (i.e., the Centers for Disease Control and Prevention, and the Brain Injury Association of America) to see if the scenarios occurred. Second, we polled 12 additional professionals, most of whom had practiced in the field of CRT during the polling period and still maintained an active practice, to assess when the various remaining scenarios had occurred. In this study, probability-of-occurrence accuracy was approximately 80%, although there was a significant bias towards false positives. Time course predictions were accurate within 1–5 years, although there was a general bias towards underestimating when the events would occur.
•The Delphi method provides accurate predictions across a 30-year time span.
•The Delphi method predicts whether an event will occur with 80% accuracy.
•The Delphi method predicts when an event will occur within a timeframe of one to five years.
•The Delphi method can help identify developing trends within the field of Cognitive Rehabilitation Therapy.
•Polling a group of experts and using public domain statistics permits an effective and accurate cross-validation of results.
This study investigated the Wechsler Adult Intelligence Scale-Fourth Edition Letter-Number Sequencing (LNS) subtest as an embedded performance validity indicator among adults undergoing an attention-deficit/hyperactivity disorder (ADHD) evaluation, and its potential incremental value over Reliable Digit Span (RDS).
This cross-sectional study comprised 543 adults who underwent neuropsychological evaluation for ADHD. Patients were divided into valid (n = 480) and invalid (n = 63) groups based on multiple criterion performance validity tests.
LNS total raw scores, age-corrected scaled scores, and age- and education-corrected T-scores demonstrated excellent classification accuracy (areas under the curve of .84, .83, and .82, respectively). The optimal cutoffs for the LNS raw score (≤16), age-corrected scaled score (≤7), and age- and education-corrected T-score (≤36) yielded .51 sensitivity and .94 specificity. Slightly lower sensitivity (.40) and higher specificity (.98) were associated with a more conservative T-score cutoff of ≤33. Multivariate models incorporating both LNS and RDS improved classification accuracy (area under the curve of .86), and LNS scores explained a significant but modest proportion of variance in validity status above and beyond RDS. Chaining the LNS T-score cutoff of ≤33 with the RDS cutoff of ≤7 increased sensitivity to .69 while maintaining ≥.90 specificity.
Findings provide preliminary evidence for the criterion and construct validity of LNS as an embedded validity indicator in ADHD evaluations. Practitioners are encouraged to use an LNS T-score cutoff of ≤33 or ≤36 to assess the validity of obtained test data. Employing either of these LNS cutoffs alongside RDS may enhance the detection of invalid performance.
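The cutoff-chaining logic described above (flag a record when either indicator falls at or below its cutoff) can be sketched in a few lines. The score distributions below are hypothetical stand-ins chosen only to illustrate the sensitivity/specificity computation, not the study's data; the group sizes mirror the abstract.

```python
# Sketch of chained validity cutoffs: flag if LNS T-score <= 33 OR RDS <= 7.
# Score distributions are SYNTHETIC illustrations, not the study's data.
import numpy as np

rng = np.random.default_rng(1)
n_valid, n_invalid = 480, 63
lns_valid = rng.normal(45, 8, n_valid)     # hypothetical LNS T-scores, valid group
lns_invalid = rng.normal(32, 8, n_invalid) # hypothetical LNS T-scores, invalid group
rds_valid = rng.normal(10, 2, n_valid)     # hypothetical RDS scores
rds_invalid = rng.normal(7, 2, n_invalid)

def flag(lns, rds, lns_cut=33, rds_cut=7):
    """Chained rule: flag when either embedded indicator is at/below its cutoff."""
    return (lns <= lns_cut) | (rds <= rds_cut)

sensitivity = flag(lns_invalid, rds_invalid).mean()  # flagged among invalid
specificity = 1.0 - flag(lns_valid, rds_valid).mean()  # unflagged among valid
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```

Because the OR-rule flags a record if either test trips, chaining trades some specificity for a gain in sensitivity relative to either cutoff alone, which is the pattern the abstract reports (.69 sensitivity at ≥.90 specificity).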
Background: Impaired working memory, attention, and processing speed are common in individuals with traumatic brain injury (TBI) and specific learning disorder (SLD). Yet, there is a paucity of research that has examined cognitive differences between these groups.
Objective: The current study examined potential group differences between individuals with TBI and SLD on performance-based tests of working memory, attention, and processing speed. Subsequently, the study examined whether processing speed tests alone could discriminate persons with TBI from those with SLD.
Method: The authors analyzed archival data to assess differences between 39 adult participants with moderate-severe TBI and 57 adult participants with SLD on the Trail Making Test Part A, Trail Making Test Part B, Digit Span test, and Symbol Search test.
Results: 95% confidence intervals revealed that participants with TBI performed significantly worse on the Trail Making Test Part A and Symbol Search test. Logistic regression analysis procedures revealed that Trail Making Test Part A was the most sensitive discriminator.
Conclusion: Moderate-severe TBI can be distinguished from SLD by poorer performance on measures of visual scanning and processing speed. These findings may inform diagnostic interpretation and treatment planning by clinicians.
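The logistic-regression discrimination step described in the Results can be sketched with a minimal, numpy-only fit on a single processing-speed predictor. The completion times below are hypothetical stand-ins (not the archival data); only the group sizes follow the abstract.

```python
# Minimal logistic regression (gradient descent, numpy only) discriminating
# two groups from one processing-speed score. Data are SYNTHETIC stand-ins.
import numpy as np

rng = np.random.default_rng(2)
tmt_tbi = rng.normal(55, 12, 39)  # hypothetical TMT-A times (s), TBI group
tmt_sld = rng.normal(35, 10, 57)  # hypothetical TMT-A times (s), SLD group
x = np.concatenate([tmt_tbi, tmt_sld])
y = np.concatenate([np.ones(39), np.zeros(57)])  # 1 = TBI, 0 = SLD

# Standardize the predictor, then fit weight and bias by gradient descent
# on the mean logistic (cross-entropy) loss.
x = (x - x.mean()) / x.std()
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w -= 0.1 * ((p - y) * x).mean()
    b -= 0.1 * (p - y).mean()

pred = (1.0 / (1.0 + np.exp(-(w * x + b)))) > 0.5
accuracy = (pred == y).mean()
print(f"classification accuracy: {accuracy:.2f}")
```

With well-separated group means, even this single-predictor model classifies well above chance, which mirrors the abstract's finding that Trail Making Test Part A was the most sensitive discriminator.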