Artificial intelligence systems, which are designed to learn from the data presented to them, are used throughout society. These systems screen loan applicants, make sentencing recommendations for criminal defendants, scan social media posts for disallowed content, and more. Because these systems do not assign meaning to their complex learned correlation networks, they can learn associations that do not reflect causality, resulting in suboptimal and indefensible decisions. Beyond making suboptimal decisions, these systems may create legal liability for their designers and operators by learning correlations that violate anti-discrimination and other laws governing which factors can be used in different types of decision making. This paper presents the use of a machine learning expert system, which is developed with meaning-assigned nodes (facts) and correlations (rules). Multiple potential implementations are considered and evaluated under different conditions, including different network error and augmentation levels and different training levels. The performance of these systems is compared to random and fully connected networks.
• Presents a new artificial intelligence technique based on using machine learning principles with a base expert system.
• Describes how this can mitigate bias and other issues caused by the use of neural networks.
• Characterizes system efficacy and performance.
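The core idea above is an expert system whose facts and rules carry explicit meaning, so every decision is traceable to an auditable rule chain. The following is a minimal sketch of forward chaining over such a network; the loan-screening facts and rules are invented for illustration and are not taken from the paper.

```python
# Minimal sketch of a rule-based expert system with meaning-assigned
# facts and rules. All fact/rule names here are illustrative.

def forward_chain(facts, rules):
    """Repeatedly fire rules whose premises hold until no new fact is derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)  # every derived fact is traceable to a rule
                changed = True
    return facts

# Illustrative loan-screening rules: each correlation is explicit and auditable,
# unlike the opaque weights of a neural network.
rules = [
    (("stable_income", "low_debt_ratio"), "low_default_risk"),
    (("low_default_risk",), "approve_loan"),
]
derived = forward_chain({"stable_income", "low_debt_ratio"}, rules)
# the chain of fired rules explains why "approve_loan" was derived
```

Because every conclusion is justified by named premises, a deployed system of this shape can be inspected for legally impermissible factors before use.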
This is the first part of a two-part paper that has arisen from the work of the IEEE Power Engineering Society's Multi-Agent Systems (MAS) Working Group. Part I of this paper examines the potential value of MAS technology to the power industry. In terms of contribution, it describes fundamental concepts and approaches within the field of multi-agent systems that are appropriate to power engineering applications. As well as presenting a comprehensive review of the meaningful power engineering applications for which MAS are being investigated, it also defines the technical issues which must be addressed in order to accelerate and facilitate the uptake of the technology within the power and energy sector. Part II of this paper explores the decisions inherent in engineering multi-agent systems for applications in the power and energy sector and offers guidance and recommendations on how MAS can be designed and implemented.
Incorporation of automated electrocardiogram (ECG) analysis techniques in home monitoring applications can ensure early detection of myocardial infarction (MI), thus reducing the risk of mortality. Most of the published techniques use advanced signal processing tools, a huge number of ECG features, and complex classifiers, which make their hardware implementation difficult. This paper proposes the use of harmonic phase distribution pattern of the ECG data for MI identification. The morphological and temporal changes of the ECG waveform caused by the presence of MI are reflected in the phase distribution pattern of the Fourier harmonics. Two discriminative features, clearly reflecting these variations, are identified for each of the three standard ECG leads (II, III, and V2). Classification of the healthy and MI data is performed using a threshold-based classification rule and logistic regression. The proposed technique has achieved an average detection accuracy of 95.6% with sensitivity and specificity of 96.5% and 92.7%, respectively, for classifying all types of MI data from the Physionet Physikalisch-Technische Bundesanstalt diagnostic ECG database. The robustness of the algorithm is confirmed with real data as well. The algorithm is also implemented and validated on a microcontroller-based Arduino board, which can serve as a prototype ECG analysis device. Apart from providing comparable performance to other reported techniques, the proposed technique provides distinct advantages in terms of computational simplicity of the features, significantly reduced feature dimension, and use of simple linear classifiers which ensure faster and easier MI identification.
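The central signal-processing step, extracting the phase of the low-order Fourier harmonics of a beat, can be sketched as follows. The paper's actual feature definitions per lead differ; this only illustrates the harmonic phase distribution idea on a synthetic waveform with known phases.

```python
import numpy as np

# Hedged sketch: phase of the first few Fourier harmonics of one ECG beat.
# Window length, harmonic count, and the synthetic "beat" are illustrative.

def harmonic_phases(beat, n_harmonics=5):
    """Return the phase (radians) of the first n_harmonics Fourier components."""
    spectrum = np.fft.rfft(beat - np.mean(beat))  # remove the DC offset first
    return np.angle(spectrum[1:n_harmonics + 1])

# Synthetic one-period "beat": two harmonics with known phases 0.3 and 1.1.
t = np.linspace(0, 1, 512, endpoint=False)
beat = np.sin(2 * np.pi * 1 * t + 0.3) + 0.5 * np.sin(2 * np.pi * 2 * t + 1.1)
phases = harmonic_phases(beat, n_harmonics=2)
# recovered phases follow the cosine convention: 0.3 - pi/2 and 1.1 - pi/2
```

An FFT plus a phase extraction of this kind is cheap enough to run on a microcontroller-class device, which is consistent with the Arduino validation the abstract reports.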
• Semantic networks could be used to quantify convergence and divergence in design thinking.
• Successful ideas exhibit divergence of semantic similarity and increased information content in time.
• Client feedback enhances information content and divergence of successful ideas.
• Information content and semantic similarity could be monitored for enhancement of user creativity.
Human creativity generates novel ideas to solve real-world problems. This thereby grants us the power to transform the surrounding world and extend our human attributes beyond what is currently possible. Creative ideas are not just new and unexpected, but are also successful in providing solutions that are useful, efficient and valuable. Thus, creativity optimizes the use of available resources and increases wealth. The origin of human creativity, however, is poorly understood, and semantic measures that could predict the success of generated ideas are currently unknown. Here, we analyze a dataset of design problem-solving conversations in real-world settings by using 49 semantic measures based on WordNet 3.1 and demonstrate that a divergence of semantic similarity, an increased information content, and a decreased polysemy predict the success of generated ideas. The first feedback from clients also enhances information content and leads to a divergence of successful ideas in creative problem solving. These results advance cognitive science by identifying real-world processes in human problem solving that are relevant to the success of produced solutions and provide tools for real-time monitoring of problem solving, student training and skill acquisition. A selected subset of information content (IC Sánchez–Batet) and semantic similarity (Lin/Sánchez–Batet) measures, which are both statistically powerful and computationally fast, could support the development of technologies for computer-assisted enhancements of human creativity or for the implementation of creativity in machines endowed with general artificial intelligence.
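The Lin similarity measure singled out above combines the information content (IC) of two concepts with that of their least common subsumer. A minimal sketch of the formula, with invented IC values standing in for the WordNet 3.1 Sánchez–Batet computation the study actually uses:

```python
# Hedged sketch: Lin semantic similarity from precomputed information-content
# values. The numeric IC values below are toy stand-ins, not WordNet-derived.

def lin_similarity(ic_a, ic_b, ic_lcs):
    """sim_Lin = 2 * IC(lcs) / (IC(a) + IC(b)); higher means more similar."""
    return 2.0 * ic_lcs / (ic_a + ic_b)

# Two concepts sharing a moderately specific least common subsumer.
sim = lin_similarity(ic_a=7.2, ic_b=6.8, ic_lcs=4.9)
```

Because the measure reduces to a ratio of precomputed IC values, it is computationally fast, which is the property the abstract highlights for real-time monitoring.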
Nowadays, particle swarm optimisation (PSO) is one of the most commonly used optimisation techniques. However, PSO parameters significantly affect its computational behaviour: while it exhibits desirable behaviour with some settings, it does not with others, so the way they are set is of high importance. This paper thoroughly explains and discusses the various existing strategies for setting PSO parameters, provides hints for parameter setting, and presents proposals for future research in this area. No other paper in the literature discusses the setting process for all PSO parameters. The guidelines of this paper can be highly useful for researchers in optimisation-related fields.
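To make the parameters in question concrete, here is a minimal PSO minimizing the sphere function. The inertia weight w and acceleration coefficients c1, c2 follow one commonly cited constriction-style setting (w = 0.729, c1 = c2 = 1.494); the survey discusses many alternatives, and nothing below is specific to it.

```python
import random

# Minimal PSO sketch on f(x) = sum(x_i^2). Swarm size, iteration count, and
# the parameter values are illustrative defaults, not recommendations.

def pso(f, dim=2, n_particles=20, iters=200, w=0.729, c1=1.494, c2=1.494, seed=0):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # per-particle best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]                       # inertia
                             + c1 * r1 * (pbest[i][d] - pos[i][d])  # cognitive
                             + c2 * r2 * (gbest[d] - pos[i][d]))    # social
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, best_val = pso(lambda x: sum(v * v for v in x))
```

Changing w, c1, or c2 in this sketch is an easy way to observe the sensitivity the abstract describes: large w with large coefficients tends to diverge, while very small values stagnate.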
The explosive growth in volume, velocity, and diversity of data produced by mobile devices and cloud applications has contributed to the abundance of data, or ‘big data.’ Available solutions for efficient data storage and management cannot fulfill the needs of such heterogeneous data, whose volume is continuously increasing. For efficient retrieval and management, existing indexing solutions become inefficient as index size and seek time grow rapidly, so an optimized index scheme is required for big data. In real-world applications, the indexing issue with big data in cloud computing is widespread in healthcare, enterprises, scientific experiments, and social networks. To date, diverse soft computing, machine learning, and other artificial intelligence techniques have been utilized to satisfy indexing requirements, yet the literature contains no state-of-the-art survey investigating the performance and consequences of techniques for indexing big data as it enters cloud computing. The objective of this paper is to investigate and examine the existing indexing techniques for big data. A taxonomy of indexing techniques is developed to enable researchers to understand and select a technique as a basis for designing an indexing mechanism with reduced time and space consumption for BD-MCC. In this study, 48 indexing techniques are studied and compared on the basis of 60 articles related to the topic. The indexing techniques’ performance is analyzed based on their characteristics and big data indexing requirements. The main contribution of this study is a taxonomy of indexing techniques categorized by method: non-artificial intelligence, artificial intelligence, and collaborative artificial intelligence indexing methods. In addition, the significance of the different procedures and their performance is analyzed, along with the limitations of each technique. In conclusion, several key future research topics with the potential to accelerate the progress and deployment of artificial intelligence-based cooperative indexing in BD-MCC are elaborated on.
Matrix factorization (MF) methods have proven to be efficient and scalable approaches for collaborative filtering problems. Numerous existing MF methods rely heavily on explicit feedback; typically, these data are extremely sparse, so these methods can perform poorly. To address these challenges, we propose a latent factor model based on probabilistic MF that incorporates implicit feedback as complementary information. Specifically, the explicit and implicit feedback matrices are decomposed into a shared subspace simultaneously. Then, the latent factor vectors are jointly optimized using a gradient descent algorithm. Experimental results on the MovieLens datasets demonstrate that the proposed algorithm outperforms the baselines.
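The shared-subspace idea can be sketched as follows: the explicit rating matrix R and the implicit feedback matrix Y share the same user factors U, with separate item factors for each side, and all factors are updated by full-batch gradient descent. The loss weights, dimensions, and toy data below are illustrative assumptions, not the paper's actual model or hyperparameters.

```python
import numpy as np

# Hedged sketch of joint factorization with shared user factors U.
# R: explicit ratings (NaN = unobserved); Y: implicit (binary) feedback.

def joint_mf(R, Y, k=4, lr=0.02, reg=0.01, alpha=0.5, epochs=500, seed=0):
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    U = 0.1 * rng.standard_normal((n_users, k))   # shared user factors
    V = 0.1 * rng.standard_normal((n_items, k))   # item factors, explicit side
    W = 0.1 * rng.standard_normal((n_items, k))   # item factors, implicit side
    obs = ~np.isnan(R)
    for _ in range(epochs):
        E = np.where(obs, np.nan_to_num(R) - U @ V.T, 0.0)  # explicit residual
        F = Y - U @ W.T                                     # implicit residual
        U += lr * (E @ V + alpha * F @ W - reg * U)  # both residuals drive U
        V += lr * (E.T @ U - reg * V)
        W += lr * (alpha * F.T @ U - reg * W)
    return U, V, W

R = np.array([[5.0, np.nan, 1.0],
              [4.0, 1.0, np.nan]])
Y = np.array([[1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
U, V, W = joint_mf(R, Y)
pred = U @ V.T  # reconstructed explicit ratings
```

Because U appears in both residual terms, the sparse explicit signal and the denser implicit signal regularize each other, which is the intuition behind using implicit feedback as complementary information.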
Conversational artificial intelligence (AI) is changing the way mental health care is delivered. By gathering diagnostic information, facilitating treatment, and reviewing clinician behavior, conversational AI is poised to impact traditional approaches to delivering psychotherapy. While this transition is not disconnected from existing professional services, specific formulations of clinician-AI collaboration and migration paths between forms remain vague. In this viewpoint, we introduce four approaches to AI-human integration in mental health service delivery. To inform future research and policy, these four approaches are addressed through four dimensions of impact: access to care, quality, clinician-patient relationship, and patient self-disclosure and sharing. Although many research questions are yet to be investigated, we view safety, trust, and oversight as crucial first steps. If conversational AI isn't safe, it should not be used, and if it isn't trusted, it won't be. In order to assess safety, trust, interfaces, procedures, and system-level workflows, oversight and collaboration are needed between AI systems, patients, clinicians, and administrators.
► Weakness Finder is the first system to find product weaknesses from Chinese reviews.
► Implicit feature words are identified by a collocation-statistics-based method.
► Explicit feature words are grouped together by applying semantic methods.
► Adverbs of degree have been taken into consideration in sentiment analysis.
Finding the weaknesses of products from customer feedback can help manufacturers improve their product quality and competitive strength. In recent years, more and more people express their opinions about products online, so feedback on both a manufacturer's products and its competitors' products can be easily collected. However, it is impossible for manufacturers to read every review to analyze the weaknesses of their products. Therefore, finding product weaknesses from online reviews is a meaningful task. In this paper, we introduce such an expert system, Weakness Finder, which can help manufacturers find their product weaknesses from Chinese reviews by using aspect-based sentiment analysis. An aspect is an attribute or component of a product; for example, price, degerming, and moisturizing are aspects of body wash products. Weakness Finder extracts the features and groups explicit features by using a morpheme-based method and a Hownet-based similarity measure, and identifies and groups the implicit features with a collocation selection method for each aspect. It then uses a sentence-based sentiment analysis method to determine the polarity of each aspect in sentences. A product's weakness can be found because the weakness is probably the aspect customers are most unsatisfied with in their reviews, or the aspect that is more unsatisfying when compared with reviews of a competitor's product. Weakness Finder has been used to help a body wash manufacturer find its product weaknesses, and our experimental results demonstrate the good performance of the Weakness Finder.
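The sentence-level aspect scoring, including the role of degree adverbs noted in the highlights, can be sketched as follows. The English toy lexicons stand in for the Hownet-based Chinese resources the system actually uses, and all terms are invented for illustration.

```python
# Hedged sketch of aspect-based sentence polarity scoring.
# Toy lexicons: a degree adverb scales the next sentiment word's weight.

SENTIMENT = {"good": 1.0, "gentle": 1.0, "harsh": -1.0, "dry": -1.0}
DEGREE = {"very": 1.5, "slightly": 0.5, "not": -1.0}

def aspect_polarity(tokens, aspect_terms):
    """Score an aspect mentioned in a tokenized sentence; None if absent."""
    if not any(t in aspect_terms for t in tokens):
        return None
    score, weight = 0.0, 1.0
    for t in tokens:
        if t in DEGREE:
            weight *= DEGREE[t]       # accumulate degree/negation modifiers
        elif t in SENTIMENT:
            score += weight * SENTIMENT[t]
            weight = 1.0              # reset after each sentiment word
    return score

s = aspect_polarity("the soap is very harsh and dry".split(), {"soap"})
# a strongly negative score flags this toy aspect as unsatisfying
```

Aggregating such scores per aspect across many reviews, and comparing against a competitor's reviews, is what lets the most-unsatisfied aspect surface as the product weakness.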
• Project portfolio selection is a complex and difficult task in fuzzy environments.
• A three-stage hybrid method is used to select an optimal combination of projects.
• Data Envelopment Analysis is used to screen the available projects.
• TOPSIS is used to rank the potentially promising projects.
• Linear Integer Programming is used to select the most suitable project portfolio.
Project selection and resource allocation are critical issues in project-based organizations. These organizations are required to plan, evaluate, and control their projects in accordance with the organizational mission and objectives. In this study, we propose a three-stage hybrid method for selecting an optimal combination of projects. We obtain the maximum fit between the final selection and the projects' initial rankings while considering various organizational objectives. The proposed model comprises three stages, each composed of several steps and procedures. We use Data Envelopment Analysis (DEA) for the initial screening, the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) for ranking the projects, and linear Integer Programming (IP) for selecting the most suitable project portfolio in a fuzzy environment according to organizational objectives. Finally, a case study is used to demonstrate the applicability of the proposed method and exhibit the efficacy of the algorithms and procedures.
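The middle stage, TOPSIS ranking, can be sketched in a few lines: normalize the decision matrix, weight it, and score each project by its relative closeness to the ideal solution. The decision matrix, weights, and the assumption that all criteria are benefit criteria below are invented for illustration; the paper's fuzzy variant differs in detail.

```python
import numpy as np

# Hedged TOPSIS sketch for ranking candidate projects (crisp, benefit-only).

def topsis(X, weights):
    """Return closeness-to-ideal scores (higher = better) for each row of X."""
    norm = X / np.sqrt((X ** 2).sum(axis=0))   # vector normalization per column
    V = norm * weights                         # weighted normalized matrix
    ideal, anti = V.max(axis=0), V.min(axis=0) # ideal / anti-ideal solutions
    d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)

# Four hypothetical projects scored on three benefit criteria.
X = np.array([[7.0, 9.0, 9.0],
              [8.0, 7.0, 8.0],
              [9.0, 6.0, 8.0],
              [6.0, 7.0, 8.0]])
w = np.array([0.5, 0.3, 0.2])
scores = topsis(X, w)
ranking = np.argsort(-scores)  # indices of projects, best first
```

In the full three-stage method, DEA would first screen out inefficient candidates before this ranking, and the ranked shortlist would then feed the integer program that picks the final portfolio under resource constraints.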