When people make decisions, they usually rely on recommendations from friends and acquaintances. Although collaborative filtering (CF), the most popular recommendation technique, utilizes similar neighbors to generate recommendations, it does not distinguish friends in a neighborhood from strangers who have similar tastes. Because social networking Web sites now make it easy to gather social network information, a study about the use of social network information in making recommendations is likely to produce productive results.
In this study, we developed a way to increase recommendation effectiveness by incorporating social network information into CF. We collected data about users’ preference ratings and their social network relationships from a social networking Web site. Then, we evaluated CF performance with diverse neighbor groups combining groups of friends and nearest neighbors. Our results indicated that more accurate prediction algorithms can be produced by incorporating social network information into CF.
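The blended-neighborhood idea can be sketched in a few lines. The function below is a hypothetical illustration only: the data shapes, the `alpha` mixing weight, and all names are assumptions for exposition, not the study's actual algorithm.

```python
def predict_rating(user, item, ratings, friends, neighbors, alpha=0.5):
    """Predict `user`'s rating of `item` from a neighborhood that mixes
    declared friends (weight alpha) with taste-based nearest neighbors
    (weight (1 - alpha) * similarity). Returns None if nobody rated it."""
    # Friends who rated the item contribute with a flat social weight.
    pool = [(v, alpha) for v in friends.get(user, [])
            if item in ratings.get(v, {})]
    # Nearest neighbors contribute proportionally to their similarity.
    pool += [(v, (1 - alpha) * sim) for v, sim in neighbors.get(user, [])
             if item in ratings.get(v, {})]
    if not pool:
        return None
    total = sum(w for _, w in pool)
    return sum(ratings[v][item] * w for v, w in pool) / total
```

Varying `alpha` between 0 (pure nearest-neighbor CF) and 1 (friends only) corresponds to the diverse neighbor groups such a study would compare.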
This commentary on the JSIS Special Issue on datification focuses on two key themes selected from among the many topics discussed by Special Issue authors: 1) the debate over algorithmic intelligence versus human intelligence, and 2) the consequences of strategic performance systems.
The current algorithmic versus human intelligence debate echoes earlier discussions in our field about whether expert systems should replace or support human experts. As appealing as it is to assume that algorithms support expert workers, research suggests that people are not very effective at monitoring and overriding automation. In addition, post-automation work tends to evolve toward lower human knowledge and skill. These observations should caution datification researchers against simplistic theories and should guide researchers to study the multilevel sociotechnical conditions and stakeholders involved in the design, use, and consequences of algorithms in organizations.
Strategic performance measurement and ranking systems are also not new, but what is new is the belief that they will not just inform, but also transform, human behavior. In this respect, performance systems resemble the career tournaments that promote intense competition and create great inequality in executive pay and promotion. In addition, performance systems increasingly serve as infrastructures, operating at multiple levels of analysis simultaneously. These observations imply that designing and studying such systems involve ethical choices, intensifying the demands on datification researchers.
Menopause is a period of a woman's life that poses particular physical, psychological and social challenges, so providing an effective, practical and affordable way of meeting women's related needs is important. In addition, women should be able to incorporate such programs into their daily routines. Considering the dearth of suitable services in this regard, this study will be conducted with the aim of designing, validating and evaluating the "Healthy Menopause" expert system for the management of menopausal symptoms.
A mixed methods exploratory design will be used to conduct this study in 3 phases. The first phase is a qualitative conventional content analysis study with the purposes of exploring women's experience of menopausal symptoms, extracting their needs, and collecting data about their expectations of a healthy menopause expert system. Purposive sampling will be used: in this phase, data will be gathered through interviews with menopausal women aged 40 to 60 years and with other information-rich informants, and will continue until data saturation. The second phase includes designing the healthy menopause expert system. In this stage, the needs will be extracted from the qualitative findings along with a comprehensive literature review, and the extracted needs will again be confirmed by the participants. Then, through a participatory approach (Participatory Design) using the nominal group or Delphi technique, the experts' opinions about the priority needs of menopausal women and related solutions will be explored based on the categories of identified needs. These findings will be used to design the healthy menopause expert system. The third phase of the study is quantitative: the healthy menopause expert system will be evaluated through a randomized controlled clinical trial, with the aim of determining its effect on the management of menopausal symptoms by menopausal women themselves.
This is the first study to use a mixed methods approach for designing, validating and evaluating the "Healthy Menopause" expert system. It will fill the research gap in the field of improving menopausal symptoms and designing a healthy menopause expert system based on the needs of the large group of menopausal women. We hope that, by applying this expert system, menopausal women will be empowered to manage and improve their health in an easy and affordable manner.
Treating multiple health behavior risks on a population basis is one of the most promising approaches to enhancing health and reducing health care costs. Previous research demonstrated the efficacy of expert system interventions for three behaviors in a population of parents. The interventions provide individualized feedback that guides participants through the stages of change for each of their risk behaviors. This study extended that research to a more representative population of patients from primary care practice and to targeting of four rather than three behaviors.
Stage-based expert systems were applied to reduce smoking, improve diet, decrease sun exposure, and prevent relapse from regular mammography. A randomized controlled clinical trial recruited 69.2% of primary care patients (N = 5407) at home via telephone. Three intervention contacts were delivered for each risk factor at 0, 6, and 12 months. The primary outcome measures were the percentages of at-risk patients at baseline who progressed to the action or maintenance stages at 24-month follow-up for each of the risk behaviors.
Significant treatment effects were found for each of the four behaviors, with 25.4% of intervention patients in action or maintenance for smoking, 28.8% for diet, and 23.4% for sun exposure. The treatment group had less relapse from regular mammography than the control group (6% vs. 10%).
Proactive, home-based, and stage-matched expert systems can produce relatively high population impacts on multiple behavior risks for cancer and other chronic diseases.
The ability to engage and retain players is perceived as a major factor in the success of games. However, the end-goal of retention differs between entertainment and serious contexts. For an entertainment game, engagement and retention are linked to monetization; for a serious game, engagement needs to persist for as long as is required for learning or behavioral objectives to be met. User engagement is strongest when a balance is achieved between difficulty and skill, leading to a state of "flow." Hence, adapting difficulty could lead to increased and sustained engagement. Implementing this requires the identification of variables linked to mechanics, manipulated based upon a player performance model. In some cases, this is possible by adjusting simple properties of objects, though more comprehensive solutions require extending or adapting content using procedural techniques. This paper proposes a six-step plan, validated against two case studies: an existing serious game, with easily manipulated parameters, and a platformer game built from scratch, where additional content is required, showing the process for different mechanics. To explore limitations, the results of two small-scale user evaluations, with 45 users in total, are reported, contributing to the understanding of how adaptive difficulty might be implemented and received.
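As a concrete illustration of difficulty adaptation toward flow, the controller below nudges a scalar difficulty parameter whenever the observed success rate leaves a target band. This is a minimal sketch: the target of 0.7, the 0.05 tolerance, and the step size are assumed values, not taken from the paper's case studies.

```python
def adjust_difficulty(difficulty, success_rate, target=0.7, step=0.1,
                      lo=0.0, hi=1.0):
    """Move a scalar difficulty parameter toward the point where the
    player's observed success rate sits inside the target 'flow' band."""
    if success_rate > target + 0.05:      # too easy: raise difficulty
        difficulty += step
    elif success_rate < target - 0.05:    # too hard: lower difficulty
        difficulty -= step
    return max(lo, min(hi, difficulty))   # clamp to the valid range
```

Mapping the resulting scalar onto concrete mechanics (enemy speed, gap width, procedural level parameters) is the game-specific part that the paper's six-step plan addresses.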
Landslide susceptibility maps are useful tools for risk analysis and assessment with practical implications because they provide relevant information for territorial planning, sustainable land use management or even forecast and early warning systems. Achievement of accurate assessments of landslide susceptibility for large regions (i.e. including national territories) is still a challenge, mainly because of the lack of proper landslide inventory and monitoring data. Romania represents one of the most landslide-affected countries in Europe. The current study presents an approach for drawing the landslide susceptibility map at a national scale for the Romanian territory, in agreement with the European methodological framework promoted for small-scale evaluations of landslide susceptibility. The methodological approach was adapted to the specific morphostructural, climatic and land use conditions of the country, as well as to the quantity and quality of the available data, in order to achieve a susceptibility zonation for slides and flows for the national territory. It follows a mixed statistical-heuristic approach based on a Spatial Multi-Criteria Evaluation (SMCE) procedure integrating both landslide information and expert knowledge. The national landslide susceptibility map outlines large areas ranked as having high and very high susceptibility throughout the Subcarpathian chain, the Moldavian and Transylvanian Plateaux and the Getic Piedmont. The prediction performance was examined quantitatively and qualitatively, by making use of regional geomorphological knowledge. The evaluations suggest that, despite uncertainties inherent at this analysis scale, spatially-differentiated models are able to better capture landslide conditioning frameworks and reproduce inter- and, especially, intraregional variability of landslide distribution as compared to a previous version of the national susceptibility map.
The study shows that combining statistical and heuristic approaches, calibrated and later validated for distinct homogeneous morpho-litho-structural units, increases the prediction capacity of the national-scale model. The results are useful to public authorities at national, regional, county and municipality levels, providing knowledge for the enhancement of disaster prevention and response plans.
•A national-scale slide and flow susceptibility zonation is achieved for Romania.
•The statistical-heuristic approach proves the most suitable for the varied relief.
•Analyses are carried out per morpho-structurally defined regions in Romania.
•Model calibration allows capturing the morphostructural and lithological diversity.
•Intraregional variability of landslide distribution is accurately reproduced.
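At its core, an SMCE susceptibility score is a weighted linear combination of standardized criterion layers. The sketch below illustrates that idea for a single raster cell; the criterion names and expert weights are purely illustrative and not the values used in the study.

```python
def smce_score(cell, weights):
    """Susceptibility score of one raster cell: expert-assigned weights
    times standardized criterion values in [0, 1], summed."""
    # Weights express expert judgement and must form a convex combination.
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[k] * cell[k] for k in weights)

# Illustrative cell with three standardized criteria (made-up values):
cell = {"slope": 0.8, "lithology": 0.6, "land_use": 0.4}
weights = {"slope": 0.5, "lithology": 0.3, "land_use": 0.2}
# smce_score(cell, weights) combines them into one susceptibility value.
```

Calibrating a separate weight set per morpho-litho-structural unit, as the study does, amounts to running this combination with region-specific `weights`.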
•Paper presents an ensemble-based system for the prediction of the number of software faults.
•System is based on the heterogeneous ensemble method.
•System uses three fault prediction techniques as base learners for the ensemble.
•Results are verified on Eclipse datasets.
Software fault prediction using different techniques has been investigated by various researchers. It has been observed that the performance of these techniques varies from dataset to dataset, which makes them inconsistent for fault prediction in unknown software projects. On the other hand, the use of ensemble methods for software fault prediction can be very effective, as an ensemble takes advantage of different techniques for the given dataset to produce better prediction results than any individual technique. Many works are available on binary-class software fault prediction (faulty or non-faulty prediction) using ensemble methods, but the use of ensemble methods for predicting the number of faults has not been explored so far. The objective of this work is to present a system that uses an ensemble of various learning techniques to predict the number of faults in given software modules. We present a heterogeneous ensemble method for the prediction of the number of faults and use approaches based on a linear combination rule and a non-linear combination rule for the ensemble. The study is designed and conducted on different software fault datasets accumulated from publicly available data repositories. The results indicate that the presented system predicted the number of faults with higher accuracy, and the results are consistent across all the datasets. We also use prediction at level l (Pred(l)) and a measure of completeness to evaluate the results. Pred(l) gives the number of modules in a dataset for which the average relative error value is less than or equal to a threshold value l. The results of the Pred(l) analysis and the measure of completeness analysis have also confirmed the effectiveness of the presented system for the prediction of the number of faults. Compared to a single fault prediction technique, the ensemble methods produced improved performance for the prediction of the number of software faults.
The main impact of this work is to allow better utilization of testing resources by helping in the early and quick identification of most of the faults in a software system.
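The linear combination rule and the Pred(l) measure can be sketched as follows. This is an illustrative sketch only: the base-learner outputs and weights below are stand-ins, not the three techniques or values from the paper.

```python
def linear_combination(predictions, weights):
    """Linear combination rule: weighted sum of the base learners'
    fault-count predictions for one module."""
    return sum(w * p for w, p in zip(weights, predictions))

def pred_at_l(actual, predicted, l):
    """Pred(l): fraction of modules whose relative error
    |actual - predicted| / actual is at most the threshold l
    (assumes actual fault counts are > 0)."""
    hits = sum(1 for a, p in zip(actual, predicted) if abs(a - p) / a <= l)
    return hits / len(actual)
```

For example, three base learners predicting 3, 4 and 5 faults for a module, combined with weights 0.5, 0.3 and 0.2, yield an ensemble prediction of 3.7 faults; a non-linear combination rule would replace the weighted sum with a learned or nonlinear aggregator.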
This open access book systematically investigates the topic of entity alignment, which aims to detect equivalent entities that are located in different knowledge graphs. Entity alignment represents an essential step in enhancing the quality of knowledge graphs, and hence is of significance to downstream applications, e.g., question answering and recommender systems. Recent years have witnessed a rapid increase in the number of entity alignment frameworks, while the relationships among them remain unclear. This book aims to fill that gap by elaborating the concept and categorization of entity alignment, reviewing recent advances in entity alignment approaches, and introducing novel scenarios and corresponding solutions. Specifically, the book includes comprehensive evaluations and detailed analyses of state-of-the-art entity alignment approaches and strives to provide a clear picture of the strengths and weaknesses of the currently available solutions, so as to inspire follow-up research. In addition, it identifies novel entity alignment scenarios and explores the issues of large-scale data, long-tail knowledge, scarce supervision signals, lack of labelled data, and multimodal knowledge, offering potential directions for future research. The book offers a valuable reference guide for junior researchers, covering the latest advances in entity alignment, and a valuable asset for senior researchers, sharing novel entity alignment scenarios and their solutions. Accordingly, it will appeal to a broad audience in the fields of knowledge bases, database management, artificial intelligence and big data.
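Many of the frameworks the book surveys share a common embedding-based core: entities from two knowledge graphs are matched by the similarity of their learned vectors. The toy sketch below illustrates only that core idea with made-up vectors; it is not any specific framework from the book.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def align(kg1, kg2):
    """Greedy alignment: map each KG1 entity to its nearest KG2 entity
    in embedding space (dicts of entity name -> embedding vector)."""
    return {e1: max(kg2, key=lambda e2: cosine(v1, kg2[e2]))
            for e1, v1 in kg1.items()}
```

Real systems replace the made-up vectors with embeddings learned from graph structure and attributes, and replace greedy nearest-neighbor matching with global or iterative matching strategies.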
As automated vehicles receive more attention from the media, there has been an equivalent increase in the coverage of the ethical choices a vehicle may be forced to make in certain crash situations with no clear safe outcome. Much of this coverage has focused on a philosophical thought experiment known as the "trolley problem," substituting an automated vehicle for the trolley and the car's software for the bystander. While this is a stark and straightforward example of ethical decision making for an automated vehicle, it risks marginalizing the entire field if it is to become the only ethical problem in the public's mind. In this chapter, I discuss the shortcomings of the trolley problem, and introduce more nuanced examples that involve crash risk and uncertainty. Risk management is introduced as an alternative approach, and its ethical dimensions are discussed.
Active machine learning puts artificial intelligence in charge of a sequential, feedback-driven discovery process. We present the application of a multi-objective active learning scheme for identifying small molecules that inhibit the protein-protein interaction between the anti-cancer target CXC chemokine receptor 4 (CXCR4) and its endogenous ligand CXCL-12 (SDF-1). Experimental design by active learning was used to retrieve informative active compounds that continuously improved the adaptive structure-activity model. The balanced character of the compound selection function rapidly delivered new molecular structures with the desired inhibitory activity and at the same time allowed us to focus on informative compounds for model adjustment. The results of our study validate active learning for prospective ligand finding by adaptive, focused screening of large compound repositories and virtual compound libraries.
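A schematic version of such a balanced, multi-objective selection function (not the authors' actual model) scores each unlabeled candidate by blending predicted activity (exploitation) with model uncertainty (exploration); the `beta` trade-off and the scoring callables are assumptions for illustration.

```python
def select_batch(candidates, predict, uncertainty, k=2, beta=0.5):
    """Pick the k candidates that maximize
    beta * predicted_activity + (1 - beta) * model_uncertainty."""
    scored = sorted(
        candidates,
        key=lambda c: beta * predict(c) + (1 - beta) * uncertainty(c),
        reverse=True)
    return scored[:k]
```

In an active learning loop, the selected batch would be assayed, the structure-activity model retrained on the new labels, and the selection repeated, so the scoring functions improve with every round.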