Introduction:
The optimal timing for repeat evaluation of a cytologically benign thyroid nodule greater than 1 cm is uncertain. Arguably, the most important determinant is the disease-specific mortality resulting from an undetected thyroid cancer. At present, no data evaluate this important end point.
Methods:
We studied the long-term status of all patients evaluated in our thyroid nodule clinic between 1995 and 2003 with initially benign fine-needle aspiration (FNA) cytology. The follow-up interval was defined from the time of the initial benign FNA to any one of the following events: thyroidectomy, death, or the most recent clinic visit documented anywhere in our health care system. We sought to determine the optimal timing for repeat assessment based on the identification of malignancies with falsely benign cytology and, most important, disease-related mortality due to a missed diagnosis.
Results:
One thousand three hundred sixty-nine patients with 2010 cytologically benign nodules were followed up for an average of 8.5 years (range 0.25–18 y). Thirty deaths were documented, but none was attributed to thyroid cancer. Eighteen false-negative thyroid malignancies were identified and removed at a mean of 4.5 years (range 0.3–10 y) after the initial benign aspiration. None had distant metastasis, and all remain alive at an average of 11 years after the initial falsely benign FNA. A separate analysis demonstrated that patients with initially benign nodules who subsequently sought thyroidectomy for compressive symptoms did so an average of 4.5 years later.
Conclusions:
An initially benign FNA confers a negligible mortality risk during long-term follow-up, despite the small risk that such a nodule will later prove to be thyroid cancer. Because such malignancies appear adequately treated despite detection at a mean of 4.5 years after falsely benign cytology, these data support a recommendation for repeat thyroid nodule evaluation 2–4 years after the initial benign FNA.
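As a quick arithmetic check on the figures reported above, the per-nodule false-negative rate implied by the results can be computed directly (a sketch; the variable names are ours):

```python
nodules = 2010             # cytologically benign nodules followed
false_negatives = 18       # later identified as malignant
thyroid_cancer_deaths = 0  # none attributed to thyroid cancer

fn_rate = false_negatives / nodules
print(f"false-negative rate per nodule: {fn_rate:.1%}")  # 0.9%
```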
Working memory (WM), the representation of information held accessible for manipulation over time, is an essential component of all higher cognitive abilities. It allows for complex behaviors that go beyond simple stimulus-response associations and inflexible behavioral patterns. WM capacity determines how many different pieces of information (items) can be used for these cognitive processes, and in humans it correlates with fluid intelligence. As such, WM might be a useful tool for comparing cognition across species. WM can be tested using comparatively simple behavioral protocols, based on operant conditioning, in a multitude of different species. Species-specific contextual variables that influence an animal's performance on a non-cognitive level are controlled for by adapting the WM paradigm. The neuronal mechanisms by which WM emerges in the brain, as sustained neuronal activity, are comparable between the species studied so far (mammals and birds), as are the brain areas in which WM activity can be measured. Thus, WM is comparable between vastly different species within their respective niches, accounting for specific contextual variables and unique adaptations. By approaching the question of "general cognitive abilities" or "intelligence" within the animal kingdom from the perspective of WM, the complexity of the core question at hand is reduced to a fundamental memory system required for complex cognitive abilities. This article argues that measuring WM can be a suitable addition to the toolkit of comparative cognition. By measuring WM on a behavioral level and going beyond behavior to the underlying physiological processes, qualitative and quantitative differences in cognition between animal species can be identified, free of contextual restraints.
This paper describes a simulative approach to calibrating an already highly turbocharged industrial diesel engine for higher low-speed torque. The engine, which is already operating at its cylinder-pressure maximum, is to achieve close to 30 bar effective mean pressure through suitable coordination of compression ratio, piston-bowl shape, and injection strategy. The basic idea of the study is to lower the compression ratio to permit even higher injection masses and boost pressures, with the resulting disadvantages in emissions and fuel consumption being partially compensated for by optimizations of the piston shape and injection strategy. The simulations primarily use the 3D CFD software Converge CFD for in-cylinder calibration and a fully predictive 1D full-engine model in GT Suite. The simulations are based on a two-stage turbocharged 1950 cc four-cylinder industrial diesel engine, which is used for validation of the initial simulation. With the maximum increase in fuel mass and boost pressure, the effective mean pressure could be increased up to 28 bar, while specific consumption increased only slightly. Depending on the geometry, NOx or CO and UHC emissions could be reduced.
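For orientation, "effective mean pressure" (BMEP) links torque to displacement for a four-stroke engine via BMEP = 4πT / V_d. A minimal sketch of what 28 bar means for a 1950 cc engine follows; the computed torque is our illustration, not a figure from the study:

```python
import math

V_d = 1.95e-3   # displacement in m^3 (1950 cc)
bmep = 28e5     # 28 bar effective mean pressure, in Pa

# Four-stroke relation: BMEP = 4*pi*T / V_d  =>  T = BMEP * V_d / (4*pi)
torque = bmep * V_d / (4 * math.pi)
print(f"{torque:.0f} N*m")  # ~434 N*m per engine cycle
```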
Complex cognition relies on flexible working memory, which is severely limited in its capacity. The neuronal computations underlying these capacity limits have been extensively studied in humans and in monkeys, resulting in competing theoretical models. We probed the working memory capacity of carrion crows in a change detection task originally developed for monkeys, while we performed extracellular recordings in the prefrontal-like area nidopallium caudolaterale. We found that neuronal encoding and maintenance of information were affected by item load in a way that is virtually identical to results obtained from monkey prefrontal cortex. Contemporary neurophysiological models of working memory employ divisive normalization as an important mechanism that may produce the capacity limitation. As these models are usually conceptualized and tested in an exclusively mammalian context, it remains unclear whether they capture a general concept of working memory or are restricted to the mammalian neocortex. Here, we report that carrion crows and macaque monkeys share divisive normalization as a neuronal computation, in line with mammalian models. This indicates that computational models of working memory developed for the mammalian cortex can also apply to non-cortical associative brain regions of birds.
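Divisive normalization, the computation referred to above, scales each item's neuronal drive by the pooled drive of all items held in memory. A minimal sketch (the parameter values are illustrative, not fitted to any recordings):

```python
def divisive_normalization(drives, sigma=1.0, gamma=1.0):
    """Response to each item: its drive divided by the pooled drive."""
    pooled = sum(drives)
    return [gamma * d / (sigma + pooled) for d in drives]

# As item load grows, the response to any single item shrinks --
# one candidate mechanism for working-memory capacity limits.
for load in (1, 2, 4):
    r = divisive_normalization([1.0] * load)
    print(f"load {load}: response per item = {r[0]:.3f}")
```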
In this article, we detail how the rise of executive-centered partisanship has transformed president-Senate relations since 1993. We argue that the growing centrality of the president as a figurehead for their party has produced incentives for both co-partisans and out-partisans. We use a measure of presidential "success" to model variation over time and between individual senators. We show that rising presidential partisanship has increased the likelihood that out-partisans oppose the president's legislative position, even after controlling for other markers of partisan polarization. This relationship is strongest among electorally vulnerable out-partisans. In addition, our data suggest that Republican out-partisans asymmetrically oppose Democratic presidents. We conclude that the growing centrality of the presidency in party affairs has had effects beyond administrative preemption of the legislative process; it has increasingly set a hard limit on bipartisan cooperation on legislation and nominee confirmations in the Senate.
Ore flow has been identified as one of the critical points in developing the novel mining method Raise Caving (RC). The focus of this contribution is on the key issues of ore flow and on how to address them to enable a safe, efficient, and economic mining operation.
In the de-stressing phase of RC, narrow, tabular de-stressing slots are developed via raises. Only the blast swell is drawn, so the slots stay filled with blasted material at all times to support the hanging wall and prevent early caving and dilution ingress. To simultaneously maintain a free surface at the top for blasting, the amount extracted from the bottom needs to be monitored consistently. Another concern in the narrow de-stressing slots is preventing hang-ups, which is addressed by an adequate slot thickness. Hang-ups in higher zones are especially critical, as they are difficult to resolve and may lead to unnoticed hanging-wall caving.
In the subsequent production phase, production stopes are developed in the de-stressed area behind the slots. During this phase, the stopes are kept filled with blasted material as support for the hanging wall; they are only drawn empty at a later point, after the stopes are fully established. The most important issue for production stopes is to prevent early dilution in order to extract the deposit economically. Additionally, the free surface needs to be prepared for upcoming blasts. To optimize extraction, a uniform flow towards the draw-points needs to be created. These points can be addressed by a proper mine design and draw strategy. The mine design elements include the draw-point spacing, the draw-point position, and the draw-bell layout. The draw strategy, in turn, must enable a uniform draw, guide the ore down the inclined stope, and prevent early dilution. The critical points mentioned in this contribution are addressed in ongoing R&D with empirical, numerical, and analytical means.
The liver is the primary site for the metabolism and detoxification of many compounds, including pharmaceuticals. Consequently, it is also the primary location for many adverse reactions. As the liver is not readily accessible for sampling in humans, rodent or cell line models are often used to evaluate potential toxic effects of a novel compound or candidate drug. However, relating the results of animal and in vitro studies to relevant clinical outcomes for the human in vivo situation still proves challenging. In this study, we incorporate principles of transfer learning within a deep artificial neural network, allowing us to leverage the relative abundance of rat in vitro and in vivo exposure data from the Open TG-GATEs data set to train a model that predicts the expected pattern of human in vivo gene expression following an exposure, given measured human in vitro gene expression. We show that domain adaptation is successfully achieved, with the rat and human in vitro data no longer separable in the common latent space generated by the network. The network produces physiologically plausible predictions of the human in vivo gene expression pattern following exposure to a previously unseen compound. Moreover, we show that integrating the human in vitro data into the training of the domain adaptation network significantly improves the temporal accuracy of the predicted rat in vivo gene expression pattern following exposure to a previously unseen compound. In this way, we demonstrate the improvements in prediction accuracy that can be achieved by combining data from distinct domains.
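The core idea, a single encoder shared across domains so that rat and human in vitro profiles land in one common latent space, can be sketched as follows. Layer sizes, weights, and function names are our illustrative stand-ins for the trained network described above:

```python
import numpy as np

rng = np.random.default_rng(0)
N_GENES, N_LATENT = 100, 16   # illustrative sizes, not the study's

# One encoder shared by both species; a decoder predicts human
# in vivo expression from the common latent representation.
W_enc = rng.normal(scale=0.1, size=(N_GENES, N_LATENT))
W_dec = rng.normal(scale=0.1, size=(N_LATENT, N_GENES))

def encode(x):
    return np.tanh(x @ W_enc)            # shared latent space

def predict_human_in_vivo(x_in_vitro):
    return encode(x_in_vitro) @ W_dec    # decoded prediction

# Rat and human in vitro profiles pass through the same weights,
# so training can drive the two domains to overlap in latent space.
rat_in_vitro = rng.normal(size=(5, N_GENES))
human_in_vitro = rng.normal(size=(3, N_GENES))
print(encode(rat_in_vitro).shape, encode(human_in_vitro).shape)
```

Training (omitted here) would fit the shared weights so that a domain classifier cannot tell rat from human embeddings, which is the non-separability property reported above.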
In preclinical development, animal and cell line models are often used to evaluate the potential toxic effects of a novel compound or candidate drug before progressing to human trials. However, relating the results of animal and in vitro model exposures to relevant clinical outcomes in the human in vivo system still proves challenging, relying on often putative orthologs. In recent years, multiple studies have demonstrated that the repeated-dose rodent bioassay, the current gold standard in the field, lacks sufficient sensitivity and specificity in predicting toxic effects of pharmaceuticals in humans. In this study, we evaluate the potential of deep learning techniques to translate the pattern of gene expression measured following an exposure from rodents to humans, circumventing the current reliance on orthologs, and also from in vitro to in vivo experimental designs. Of the deep learning architectures applied in this study, the convolutional neural network (CNN) and a deep artificial neural network with a bottleneck architecture significantly outperform classical machine learning techniques in predicting the time series of gene expression in primary human hepatocytes, given a measured time series of gene expression from primary rat hepatocytes following in vitro exposure to a previously unseen compound, across multiple toxicologically relevant gene sets. Across 76 genes shown to be predictive for identifying carcinogenicity, the average mean absolute error falls from 0.0172 for a random regression forest to 0.0166 for the CNN model (p < 0.05). These deep learning architectures also perform well when applied to predict time series of in vivo gene expression given measured time series of in vitro gene expression for rats.
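A 1D convolution over the time axis is the basic building block of a CNN on expression time series like the one described above. A toy sketch of how a single filter maps one gene's rat time series to a prediction; the series, the kernel values, and the function name are invented for illustration:

```python
import numpy as np

def conv1d_same(series, kernel):
    """1D convolution with 'same' zero-padding over the time axis."""
    pad = len(kernel) // 2
    padded = np.pad(series, pad)
    return np.array([padded[t:t + len(kernel)] @ kernel
                     for t in range(len(series))])

# Toy rat in vitro expression time series for one gene (8 time points)
rat_series = np.array([0.0, 0.1, 0.4, 0.9, 0.7, 0.3, 0.1, 0.0])
kernel = np.array([0.25, 0.5, 0.25])   # stand-in for a learned filter

human_pred = conv1d_same(rat_series, kernel)
mae = np.mean(np.abs(human_pred - rat_series))  # error vs. a reference
print(human_pred.shape, round(float(mae), 4))
```

In the real model many such filters are stacked and their weights learned; the mean absolute error shown is the same metric used to compare the CNN against the random regression forest above.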
This article explores the interface between lifelong learning policies and the definition of social vulnerability of young adults in two regions of the European Union. Girona comprises a constellation of small towns with important industry, service and hospitality sectors. Vienna is a global city where many key international operators are based, employing a large number of highly qualified professionals. The article explores to what extent the meta-governance and the 'causal narratives' of lifelong learning policies contribute to shaping the prevailing images of youth vulnerability in these regions. In Girona, lifelong learning policies follow a bureaucratic pattern of governance and rely strongly on the potential of career guidance to encourage young people to undertake further education. Correspondingly, policy designs and professional discourses emphasise that the beneficiaries previously failed at school. In Vienna, authorities govern lifelong learning through both bureaucracy and complex networks of employers and non-profit organisations. The 'causal narrative' of these policies straightforwardly claims that all youth must have an experience of employment, whether in apprenticeships or in transitional workshops that emulate real jobs. There, policies portray beneficiaries according to their capacity to undertake and finish apprenticeships.
Large-scale case-control studies have revealed a number of moderate-risk, low-frequency breast cancer alleles. Some of these were reported as founder variants of Central and Eastern Europe. Based on the highly similar founder variant spectra in Poland and Latvia, we decided to test the frequency of other common variants of moderate breast cancer risk, c.509_510delGA (rs515726124) and c.172_175delTTGT (rs180177143), together with the c.1667_1667+3delAGTA variant, in a breast cancer case-control series from Latvia, to better understand the role of these genes in susceptibility to breast cancer and their clinical significance.
The case-control study was based on an unselected breast cancer case group of 2480 women and a control group of 1240 volunteer female donors, to our knowledge unrelated and without reported oncological disease.
The calculated frequency of c.509_510delGA is 0.35% in the case group and 0.00% in the control group, with a relative risk (RR) of 7.18 (95% CI 0.37-138.75; p = 0.19). For the c.172_175delTTGT variant, the frequency in our case group is 0.04%; in our control group, all individuals were homozygous for the wild-type allele, which leads to a calculated RR of 1.50 (95% CI 0.06-36.83; p-value = 0.80). No carriers of the c.1667_1667+3delAGTA variant were identified in our case group, while 2 heterozygotes were identified in the control group; the calculated RR is 0.26 (95% CI 0.01-5.33; p-value = 0.38).
The results obtained for these gene variants supplement the evidence on allele frequencies in breast cancer patients from Central and Eastern Europe. Based on our results, we cannot confirm a contribution of the c.1667_1667+3delAGTA allele to breast cancer development.