•Influence diagram accounts for uncertainties affecting lead-time estimation.
•Interactive tool to visually communicate probabilistic estimates to stakeholders.
•Sensitivity analysis identified team velocity as the factor that most drastically impacts lead time.
•Cumulative distribution function can be an aid to prioritize the project backlog queue.
This research introduces an influence-diagram-based approach to estimating task lead times for Agile Kanban project management. Derived from the principles of lean manufacturing, Agile methodologies including Scrum, Scrumban, and Kanban are common in the software industry and are spreading to other fields. Many teams estimate task delivery to better manage stakeholder expectations and improve decision making. However, the prevailing technique involves calculating a team’s story-point completion velocity, which requires hours of weekly effort while overlooking the addition, removal, and reprioritization of backlog tasks. While these factors matter in Scrum as well, teams practicing Kanban experience these conditions especially frequently due to the emphasis on continuous integration and reprioritization. Our alternative approach applies an influence diagram, or Bayesian belief network, to model the uncertainties affecting lead time. To partially automate the estimation process, an influence-diagram-based expert system is developed, populated with data from a practicing Kanban team, and used to generate a cumulative distribution function that facilitates the communication of probabilistic estimates. A sensitivity analysis is conducted to better understand how each factor influences lead times. This system can assist Kanban teams’ stakeholder communication and reduce estimation workload through a more holistic model of lead times.
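The probabilistic estimate described above can be illustrated with a minimal Monte Carlo sketch: sample lead times from a toy model of the influencing factors, then read probabilities off the empirical cumulative distribution function. The factor names, distributions, and constants below are illustrative assumptions, not the paper’s actual network.

```python
import bisect
import random

random.seed(42)

def sample_lead_time():
    """One draw from a toy lead-time model (hypothetical factors)."""
    team_velocity = random.gauss(10, 2)          # story points per week
    task_size = random.choice([1, 2, 3, 5, 8])   # story points
    interrupted = random.random() < 0.3          # reprioritization event
    base_days = 7 * task_size / max(team_velocity, 1e-6)
    return base_days + (3 if interrupted else 0)

# Empirical distribution from 10,000 simulated tasks.
samples = sorted(sample_lead_time() for _ in range(10_000))

def lead_time_cdf(days):
    """Empirical P(lead time <= days)."""
    return bisect.bisect_right(samples, days) / len(samples)
```

A stakeholder question such as “what is the chance this task ships within a week?” then reduces to a single CDF lookup, e.g. `lead_time_cdf(7)`.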
Knowledge representation is at the very core of a radical idea for understanding intelligence. Instead of trying to understand or build brains from the bottom up, its goal is to understand and build intelligent behavior from the top down, putting the focus on what an agent needs to know in order to behave intelligently, how this knowledge can be represented symbolically, and how automated reasoning procedures can make this knowledge available as needed. This landmark text takes the central concepts of knowledge representation developed over the last 50 years and illustrates them in a lucid and compelling way. Each of the various styles of representation is presented in a simple and intuitive form, and the basics of reasoning with that representation are explained in detail. This approach gives readers a solid foundation for understanding the more advanced work found in the research literature. The presentation is clear enough to be accessible to a broad audience, including researchers and practitioners in database management, information retrieval, and object-oriented systems as well as artificial intelligence. This book provides the foundation in knowledge representation and reasoning that every AI practitioner needs.
* Authors are well-recognized experts in the field who have applied the techniques to real-world problems
* Presents the core ideas of KR in a simple, straightforward approach, independent of the quirks of research systems
* Offers the first true synthesis of the field in over a decade
The algorithms of machine learning, which can sift through vast numbers of variables looking for combinations that reliably predict outcomes, will improve prognosis, displace much of the work of radiologists and anatomical pathologists, and improve diagnostic accuracy.
By now, it’s almost old news: big data will transform medicine. It’s essential to remember, however, that data by themselves are useless. To be useful, data must be analyzed, interpreted, and acted on. Thus, it is algorithms — not data sets — that will prove transformative. We believe, therefore, that attention has to shift to new statistical tools from the field of machine learning that will be critical for anyone practicing medicine in the 21st century.
First, it’s important to understand what machine learning is not. Most computer-based algorithms in medicine are “expert systems” — rule sets encoding knowledge on . . .
The early and accurate identification of a disease is important for its effective treatment. However, medical errors represent a serious problem and pose a threat to patient safety. To this end, appropriate and continuous education of medical personnel has been widely recognized as an important means of reducing medical errors and increasing the quality of the health system. In this paper, we present MediExpert, an expert system targeting the continuous education of health personnel. It also provides guidelines to people who cannot easily travel, whether because of age-related comorbidities or distance from healthcare units, while recommending that users talk with their doctors. The system is based on differential diagnosis, employs ontologies for effective classification of health-related problems, and uses intelligent algorithms to enhance continuous education. We present the various components of the system and elaborate on the benefits gained when using it for education.
Covid-19 is an acute respiratory infection that presents clinical features ranging from no symptoms to severe pneumonia and death. Medical expert systems, especially at the diagnosis and monitoring stages, can contribute positively to the struggle against Covid-19. In this study, a rule-based expert system is designed as a predictive tool for self-pre-diagnosis of Covid-19. The intended users are smartphone users, healthcare experts, and government health authorities. The system not only shares the data gathered from users with experts, but also analyzes the symptom data as a diagnostic assistant to predict possible Covid-19 risk. To do this, a user fills out a patient examination card that conducts an online Covid-19 diagnostic test and receives an unconfirmed online test prediction together with a set of precautionary and supportive action suggestions. The system was tested on 169 positive cases, and its results were compared with the real PCR test results for the same cases. For patients with certain symptomatic findings, no significant difference was found between the system’s results and the confirmed PCR test results. Furthermore, the suggestions produced by the system were compared with the written suggestions of a collaborating health expert and found to be similar and consistent with the expert’s advice. The system can be suitable for diagnosing and monitoring positive cases outside clinics and hospitals during the Covid-19 pandemic. The results of the case studies are promising and demonstrate the applicability, effectiveness, and efficiency of the proposed approach across communities.
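As a rough illustration of how such a rule-based pre-diagnosis step could work, the sketch below scores self-reported symptoms against a small rule base and maps the score to a risk level and suggestion. The symptoms, weights, and thresholds are invented for illustration; they are not the paper’s rule set and not medical advice.

```python
def covid_risk(symptoms):
    """Map self-reported symptoms to (risk level, suggestion).
    Weights and thresholds are illustrative only."""
    weights = {
        "fever": 2,
        "dry_cough": 2,
        "loss_of_smell": 3,
        "shortness_of_breath": 3,
        "fatigue": 1,
    }
    # Sum the weights of the symptoms the user reported as present.
    score = sum(w for name, w in weights.items() if symptoms.get(name))
    if score >= 5:
        return "high", "Seek a confirmatory PCR test and contact a health authority."
    if score >= 2:
        return "moderate", "Self-isolate and monitor symptoms."
    return "low", "Follow general precautions."
```

A real system of this kind would pair the score with the expert-written suggestion texts and forward the examination card to health personnel, as the abstract describes.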
•Novel feature selection approaches based on the Binary Dragonfly Algorithm (BDA) are proposed.
•Eight time-varying S-shaped and V-shaped transfer functions are proposed.
•The leverage of using time-varying transfer functions on exploration and exploitation behaviors is investigated.
•Extensive tests are made to assess the proposed algorithms on the datasets to prove their merits.
The Dragonfly Algorithm (DA) is a recently proposed heuristic search algorithm that was shown to have excellent performance for numerous optimization problems. In this paper, a wrapper feature selection algorithm is proposed based on the Binary Dragonfly Algorithm (BDA). The key component of the BDA is the transfer function that maps a continuous search space to a discrete search space. In this study, eight transfer functions, categorized into two families (S-shaped and V-shaped functions), are integrated into the BDA and evaluated using eighteen benchmark datasets obtained from the UCI data repository. The main contribution of this paper is the proposal of time-varying S-shaped and V-shaped transfer functions to leverage the impact of the step vector on balancing exploration and exploitation. During the early stages of the optimization process, the probability of changing the position of an element is high, which facilitates the exploration of new solutions starting from the initial population. Towards the end of the optimization process, on the other hand, the probability of changing the position of an element becomes lower. This behavior is obtained by considering the current iteration number as a parameter of the transfer functions. The performance of the proposed approaches is compared with that of other state-of-the-art approaches, including the DA, binary grey wolf optimizer (bGWO), binary gravitational search algorithm (BGSA), binary bat algorithm (BBA), particle swarm optimization (PSO), and the genetic algorithm, in terms of classification accuracy, sensitivity, specificity, area under the curve, and number of selected attributes. Results show that the time-varying S-shaped BDA approach outperforms the compared approaches.
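The time-varying mechanism described here, a high flip probability early for exploration and a low one late for exploitation, can be sketched in a few lines. This is one plausible V-shaped formulation with an illustrative parameter schedule; the paper’s eight exact transfer functions and constants may differ.

```python
import math
import random

def time_varying_vshape(step, t, t_max, tau_min=0.1, tau_max=4.0):
    """Flip probability for one bit given its continuous step value.
    The control parameter tau grows with the iteration number t, so the
    same step yields a lower flip probability late in the search
    (schedule and constants are illustrative)."""
    tau = tau_min + (tau_max - tau_min) * t / t_max
    return abs(math.tanh(step / tau))

def update_bit(bit, step, t, t_max, rng=random):
    """Complement the bit with the transfer-function probability."""
    return 1 - bit if rng.random() < time_varying_vshape(step, t, t_max) else bit
```

Making the iteration number a parameter of the transfer function is what shifts the search from flipping bits aggressively at the start to preserving good feature subsets near the end.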
•Over 40 models for aspect-based sentiment analysis are summarized and classified.
•Deep learning methods use fewer parameters but achieve comparable performance.
•Deep learning is still in its infancy, given challenges in data, domains, and languages.
•A task-combined and concept-centric approach should be considered in future studies.
The increasing volume of user-generated content on the web has made sentiment analysis an important tool for the extraction of information about the human emotional state. A current research focus for sentiment analysis is the improvement of granularity at the aspect level, representing two distinct aims: aspect extraction and sentiment classification of product reviews, and sentiment classification of target-dependent tweets. Deep learning approaches have emerged as a prospect for achieving these aims with their ability to capture both syntactic and semantic features of text without requirements for high-level feature engineering, as is the case in earlier methods. In this article, we aim to provide a comparative review of deep learning for aspect-based sentiment analysis to place different approaches in context.