Team cognition has been identified as a critical component of team performance and decision-making. However, theory and research in this domain remain largely static; articulation and examination of the dynamic processes through which collectively held knowledge emerges from the individual level to the team level are lacking. To address this gap, we advance and systematically evaluate a process-oriented theory of team knowledge emergence. First, we summarize the core concepts and dynamic mechanisms that underlie team knowledge-building and represent our theory of team knowledge emergence (Step 1). We then translate this narrative theory into a formal computational model that provides an explicit specification of how these core concepts and mechanisms interact to produce emergent team knowledge (Step 2). The computational model is next instantiated as an agent-based simulation to explore how the key generative process mechanisms described in our theory contribute to improved knowledge emergence in teams (Step 3). Results from the simulations demonstrate that agent teams generate collectively shared knowledge more effectively when members process information more efficiently and when teams follow communication strategies that promote equal rates of information sharing across members. Lastly, we conduct an empirical experiment with real teams performing a collective knowledge-building task to verify that promoting these processes in human teams also improves team knowledge emergence (Step 4). Discussion focuses on implications of the theory for examining team cognition processes and dynamics, as well as directions for future research.
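To make the kind of simulation the abstract describes concrete, here is a deliberately simplified agent-based sketch in R. It is a hypothetical illustration, not the authors' model: agents hold binary knowledge items and take speaking turns; listeners internalize a shared item with probability given by a processing-efficiency parameter, and turn-taking is either equal or biased toward a few members. All function and parameter names are invented for this sketch.

    ## Hypothetical, simplified sketch of team knowledge emergence (not the
    ## authors' model): agents hold binary knowledge items and take turns
    ## sharing; listeners internalize an item with probability `efficiency`.
    simulate_team <- function(n_agents = 4, n_items = 20, efficiency = 0.5,
                              equal_sharing = TRUE, rounds = 200) {
      ## Each agent starts with a random ~30% subset of the knowledge items
      know <- matrix(runif(n_agents * n_items) < 0.3, n_agents, n_items)
      turn_prob <- if (equal_sharing) {
        rep(1 / n_agents, n_agents)        # equal airtime across members
      } else {
        (1:n_agents) / sum(1:n_agents)     # airtime biased toward a few members
      }
      for (r in seq_len(rounds)) {
        speaker <- sample(n_agents, 1, prob = turn_prob)
        items <- which(know[speaker, ])
        if (length(items) == 0) next
        item <- items[sample.int(length(items), 1)]  # speaker shares one item
        listeners <- setdiff(seq_len(n_agents), speaker)
        learns <- runif(length(listeners)) < efficiency
        know[listeners[learns], item] <- TRUE
      }
      mean(colSums(know) == n_agents)   # share of items known by every member
    }

Under these assumptions, comparing simulate_team(equal_sharing = TRUE) with simulate_team(equal_sharing = FALSE) across values of efficiency gives a feel for the two mechanisms the simulations manipulate: information-processing capability and equality of information sharing.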
Sparse Hessian matrices occur often in statistics, and their fast and accurate estimation can improve the efficiency of numerical optimization and sampling algorithms. By exploiting the known sparsity pattern of a Hessian, methods in the sparseHessianFD package require many fewer function or gradient evaluations than would be required if the Hessian were treated as dense. The package implements established graph coloring and linear substitution algorithms that were previously unavailable to R users, and is most useful when other numerical, symbolic, or algorithmic methods are impractical, inefficient, or unavailable.
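A minimal usage sketch follows, assuming the constructor and method names from the package's documented interface (a sparseHessianFD(x, fn, gr, rows, cols) call returning an object whose $hessian() method yields a sparse Matrix). The toy objective and its index vectors are invented for illustration.

    library(sparseHessianFD)

    ## Toy objective with a tridiagonal Hessian:
    ## f(x) = sum((x - 1)^2) + sum(x[i] * x[i + 1])
    fn <- function(x) sum((x - 1)^2) + sum(x[-length(x)] * x[-1])
    gr <- function(x) {
      p <- length(x)
      g <- 2 * (x - 1)
      g[-1] <- g[-1] + x[-p]   # d/dx_i of x[i-1] * x[i]
      g[-p] <- g[-p] + x[-1]   # d/dx_i of x[i] * x[i+1]
      g
    }

    p <- 10
    ## Row/column indices of the non-zeros in the lower triangle of the Hessian
    rows <- c(1:p, 2:p)
    cols <- c(1:p, 1:(p - 1))
    x0 <- rnorm(p)

    obj <- sparseHessianFD(x0, fn, gr, rows, cols)  # coloring is computed once
    H <- obj$hessian(x0)  # sparse estimate from a handful of gradient calls

Because the coloring of the sparsity graph is computed once at construction, repeated Hessian evaluations along an optimizer's path reuse it, which is where the savings over a dense finite-difference scheme come from.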
Endometrial cancer is the most common gynecologic malignancy. It is the fourth most common cancer in women in the United States after breast, lung, and colorectal cancers. Risk factors are related to excessive unopposed exposure of the endometrium to estrogen, including unopposed estrogen therapy, early menarche, late menopause, tamoxifen therapy, nulliparity, infertility or failure to ovulate, and polycystic ovary syndrome. Additional risk factors are increasing age, obesity, hypertension, diabetes mellitus, and hereditary nonpolyposis colorectal cancer. The most common presentation for endometrial cancer is postmenopausal bleeding. The American Cancer Society recommends that all women older than 65 years be informed of the risks and symptoms of endometrial cancer and advised to seek evaluation if symptoms occur. There is no evidence to support endometrial cancer screening in asymptomatic women. Evaluation of a patient with suspected disease should include a pregnancy test in women of childbearing age, a complete blood count, and prothrombin time and partial thromboplastin time if bleeding is heavy. Most guidelines recommend either transvaginal ultrasonography or endometrial biopsy as the initial study. The mainstay of treatment for endometrial cancer is total hysterectomy with bilateral salpingo-oophorectomy. Radiation and chemotherapy can also play a role in treatment. Low- to medium-risk endometrial hyperplasia can be treated with nonsurgical options. Survival is generally determined by the stage of the disease and histology, with most patients at stage I and II having a favorable prognosis. Controlling risk factors such as obesity, diabetes, and hypertension could play a role in the prevention of endometrial cancer.
The modular multilevel converter (MMC) is an emerging topology for high-power drive applications, especially in the medium-voltage range. This paper presents the design process of a holistic control system for an MMC feeding variable-speed drives. First, the design of the current control for the independent adjustment of several current components is derived from the analysis of the equivalent circuits. Second, the current and voltage components for balancing the energies in the arms of the MMC are identified systematically by investigating the transformed arm power components. These fundamentals lead to the design of a cascaded control structure that accomplishes the balancing task over the whole operating range of a three-phase machine. The control system ensures dynamic balancing of the energies in the cells of the MMC with the minimum necessary internal currents over the complete frequency range. Simultaneously, all other circulating current components are avoided to minimize current stress and additional voltage pulsations. The performance of the control system is finally validated by measurements on a low-voltage MMC prototype feeding a field-oriented-controlled induction machine.
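As background for the balancing discussion, the standard per-phase MMC relations (textbook definitions with assumed notation, not this paper's specific derivation) connect the upper- and lower-arm currents to the output and circulating currents, and the arm powers to the stored arm energies:

    % Standard per-phase MMC quantities (notation assumed, not from the paper)
    i_{p,j} = i_{c,j} + \tfrac{1}{2} i_j , \qquad
    i_{n,j} = i_{c,j} - \tfrac{1}{2} i_j

    % Sum and difference of the arm powers drive the stored arm energies
    p_{\Sigma,j} = v_{p,j} i_{p,j} + v_{n,j} i_{n,j} , \qquad
    p_{\Delta,j} = v_{p,j} i_{p,j} - v_{n,j} i_{n,j} , \qquad
    w_{p/n,j}(t) = w_{p/n,j}(0) + \int_0^t p_{p/n,j}\, \mathrm{d}\tau

Balancing controllers of the kind described here act on the circulating current i_{c,j} to shape the sum and difference powers, and hence the arm energies, without disturbing the output current i_j.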
We consider four scenarios that can unfold when organizations either innovate or respond rigidly to organizational decline. Two of the scenarios are downward spirals that threaten an organization with possible death, and two of the scenarios are turnarounds. These scenarios are important because they can determine the fate of an organization: survival or death. We explore the conditions under which each of these scenarios is likely to emerge, developing original theory and specifying propositions about those conditions. In developing this theoretical framework, we distinguish between flexible and inflexible innovations as factors in turnaround success or failure. Our model extends current theory on organizational decline to highlight the feedback effects of the consequences of decline and to explain the circumstances in which particular feedback effects are likely to occur.
Dominance analysis (DA) has been established as a useful tool for practitioners and researchers to identify the relative importance of predictors in a linear regression. This article examines the joint impact of two common and pervasive artifacts, sampling error variance and measurement unreliability, on the accuracy of DA. We present Monte Carlo simulations that detail the decrease in the accuracy of DA in the presence of these artifacts, highlighting the practical extent of the inferential mistakes that can be made. We then detail and provide a user-friendly program in R (R Core Team, 2017) for estimating the effects of sampling error variance and unreliability on DA. Finally, by way of a detailed example, we provide specific recommendations for how researchers and practitioners should more appropriately interpret and report the results of DA.
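For readers unfamiliar with the mechanics of DA, the following base-R sketch computes general dominance weights from their standard definition (the average, over subset sizes, of a predictor's mean incremental R-squared). It is an illustration written for this summary, not the authors' program, and all names are invented.

    ## Illustrative implementation of general dominance weights, written from
    ## the standard definition; not the authors' program.
    general_dominance <- function(y, X) {
      X <- as.data.frame(X)
      p <- ncol(X)
      preds <- seq_len(p)
      key <- function(s) paste(c(0L, sort(s)), collapse = ",")  # subset label
      ## Enumerate all 2^p predictor subsets via bit patterns
      subsets <- lapply(0:(2^p - 1),
                        function(m) preds[bitwAnd(m, 2^(preds - 1)) > 0])
      r2 <- vapply(subsets, function(s) {
        if (length(s) == 0) return(0)
        summary(lm(y ~ ., data = cbind(y = y, X[s])))$r.squared
      }, numeric(1))
      names(r2) <- vapply(subsets, key, character(1))
      gd <- vapply(preds, function(j) {
        ## Mean incremental R^2 of predictor j within each subset size,
        ## then averaged across sizes
        mean(vapply(0:(p - 1), function(k) {
          s_k <- Filter(function(s) !(j %in% s) && length(s) == k, subsets)
          mean(vapply(s_k, function(s) r2[[key(c(s, j))]] - r2[[key(s)]],
                      numeric(1)))
        }, numeric(1)))
      }, numeric(1))
      setNames(gd, names(X))
    }

For example, general_dominance(mtcars$mpg, mtcars[, c("wt", "hp", "disp")]) yields one weight per predictor, and the weights sum to the full-model R-squared, which is the decomposition property that makes DA attractive. Because every subset regression is refit from the sample, each fitted R-squared carries sampling error and attenuation from unreliable measures, which is precisely how the artifacts studied here propagate into the dominance weights.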
The African National Congress (ANC) has been an electorally dominant party in South African politics since 1994, with its vote share peaking in 2004 before falling to a low in the most recent general elections in 2019. Simultaneously, there have been much sharper declines in levels of ANC partisanship and in assessments of government performance among the party's own voters. This presents a puzzle: Why does a significant share of ANC voters continue to support a party that they do not 'feel close' to and do not believe is adequately managing the economy or the delivery of public goods? Based upon original qualitative data from semi-structured interviews with 111 intended ANC voters, I argue that there is a sizeable portion of ANC voters whose connection to the party is characterised by a conditional loyalty that falls short of a more thoroughgoing partisanship. The persistence of 'thin' loyalty alongside habitual and strategic voting for the ANC, especially among the 'born free' generation, has obscured the extent of the decline in the perceived efficacy of voting and in overall satisfaction with the outcomes of democratic politics.
Phylogenomics, the use of large-scale data matrices in phylogenetic analyses, has been viewed as the ultimate solution to the problem of resolving difficult nodes in the tree of life. However, it has become clear that analyses of these large genomic data sets can also result in conflicting estimates of phylogeny. Here, we use the early divergences in Neoaves, the largest clade of extant birds, as a "model system" to understand the basis for incongruence among phylogenomic trees. We were motivated by the observation that trees from two recent avian phylogenomic studies exhibit conflicts. Those studies used different strategies: 1) collecting many characters (∼42 megabase pairs [Mbp] of sequence data) from 48 birds, sometimes including only one taxon for each major clade; and 2) collecting fewer characters (∼0.4 Mbp) from 198 birds, selected to subdivide long branches. However, the studies also used different data types: the taxon-poor data matrix comprised 68% non-coding sequences, whereas coding exons dominated the taxon-rich data matrix. This difference raises the question of whether the primary reason for incongruence is the number of sites, the number of taxa, or the data type. To test among these alternative hypotheses, we assembled a novel, large-scale data matrix comprising 90% non-coding sequences from 235 bird species. Although increased taxon sampling appeared to have a positive impact on phylogenetic analyses, the most important variable was data type. Indeed, by analyzing different subsets of the taxa in our data matrix, we found that increased taxon sampling actually resulted in increased congruence with the tree from the previous taxon-poor study (which had a majority of non-coding data) rather than with the taxon-rich study (which largely used coding data). We suggest that the observed differences in the estimates of topology for these studies reflect data-type effects due to violations of the models used in phylogenetic analyses, some of which may be difficult to detect. If incongruence among trees estimated using phylogenomic methods largely reflects problems with model fit, developing more "biologically realistic" models is likely to be critical for efforts to reconstruct the tree of life.
Contemporary definitions of leadership advance a view of the phenomenon as relational, situated in specific social contexts, involving patterned emergent processes, and encompassing both formal and informal influence. Paralleling these views is a growing interest in leveraging social network approaches to study leadership. Social network approaches provide a set of theories and methods with which to articulate and investigate, with greater precision and rigor, the wide variety of relational perspectives implied by contemporary leadership theories. Our goal is to advance this domain through an integrative conceptual review. We begin by answering the question of why: Why adopt a network approach to study leadership? Then, we offer a framework for organizing prior research. Our review reveals three areas of research, which we term: (a) leadership in networks, (b) leadership as networks, and (c) leadership in and as networks. By clarifying the conceptual underpinnings, key findings, and themes within each area, this review serves as a foundation for future inquiry that capitalizes on, and programmatically builds upon, the insights of prior work. Our final contribution is to advance an agenda for future research that harnesses the confluent ideas at the intersection of leadership in and as networks. Leadership in and as networks represents a paradigm shift in leadership research: from an emphasis on the static traits and behaviors of formal leaders whose actions are contingent upon situational constraints, toward an emphasis on the complex and patterned relational processes that interact with the embedding social context to jointly constitute leadership emergence and effectiveness.
Trust region algorithms are nonlinear optimization tools that tend to be stable and reliable when the objective function is non-concave, ill-conditioned, or exhibits regions that are nearly flat. Additionally, most freely available optimization routines do not exploit the sparsity of the Hessian when such sparsity exists, as in the log posterior densities of Bayesian hierarchical models. The trustOptim package for the R programming language addresses both of these issues. It is intended to be robust, scalable, and efficient for a large class of nonlinear optimization problems that are often encountered in statistics, such as finding posterior modes. The user must supply the objective function, gradient, and Hessian. However, when used in conjunction with the sparseHessianFD package, the user does not need to supply the exact sparse Hessian, as long as the sparsity structure is known in advance. For models with a large number of parameters, but for which most of the cross-partial derivatives are zero (i.e., the Hessian is sparse), trustOptim offers dramatic performance improvements over existing options in terms of computational time and memory footprint.
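A minimal sketch of the combined workflow the abstract describes, assuming the trust.optim() call signature and the "Sparse" method name from the package documentation; the toy objective is the same invented tridiagonal problem used in the sparseHessianFD example above.

    library(trustOptim)
    library(sparseHessianFD)

    ## Tridiagonal toy problem (convex, so it has a unique minimizer)
    fn <- function(x) sum((x - 1)^2) + sum(x[-length(x)] * x[-1])
    gr <- function(x) {
      p <- length(x)
      g <- 2 * (x - 1)
      g[-1] <- g[-1] + x[-p]
      g[-p] <- g[-p] + x[-1]
      g
    }
    p <- 10
    rows <- c(1:p, 2:p)        # lower-triangle sparsity pattern
    cols <- c(1:p, 1:(p - 1))
    x0 <- rnorm(p)
    H <- sparseHessianFD(x0, fn, gr, rows, cols)

    ## method = "Sparse" expects hs() to return a sparse Matrix object
    opt <- trust.optim(x0, fn = fn, gr = gr, hs = function(x) H$hessian(x),
                       method = "Sparse")
    opt$solution   # minimizer of the toy objective

The division of labor is the design point: trustOptim consumes a sparse Hessian without ever forming a dense one, and sparseHessianFD supplies that Hessian from gradient evaluations alone, so only the objective, gradient, and sparsity pattern need to be written by hand.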