In recent years the number of active controllable joints in electrically powered hand prostheses has increased significantly. However, the control strategies for these devices in current clinical use are inadequate, as they require separate and sequential control of each degree of freedom (DoF). In this study we systematically compare linear and nonlinear regression techniques for independent, simultaneous, and proportional myoelectric control of wrist movements with two DoF. These techniques include linear regression, mixture of linear experts (ME), the multilayer perceptron, and kernel ridge regression (KRR). They are investigated offline with electromyographic signals acquired from ten able-bodied subjects and one person with congenital upper limb deficiency. The control accuracy is reported as a function of the number of electrodes and the amount and diversity of training data, providing guidance for the requirements of clinical practice. The results showed that KRR, a nonparametric statistical learning method, outperformed the other methods. However, simple transformations in the feature space could linearize the problem, so that linear models could achieve performance similar to KRR at much lower computational cost. ME in particular, a physiologically inspired extension of linear regression, represents a promising candidate for the next generation of prosthetic devices.
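The kernel ridge regression approach that performed best above can be sketched minimally as follows; the RBF kernel, toy 1-D data, and hyperparameter values here are illustrative assumptions, not the study's myoelectric setup:

```python
import math

def rbf(a, b, gamma=5.0):
    """Gaussian (RBF) kernel on scalars."""
    return math.exp(-gamma * (a - b) ** 2)

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def krr_fit(xs, ys, lam=1e-6, gamma=5.0):
    """Fit dual coefficients alpha = (K + lam*I)^-1 y."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j], gamma) + (lam if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    return solve(K, list(ys))

def krr_predict(xs, alpha, x, gamma=5.0):
    """Predict f(x) = sum_i alpha_i * k(x_i, x)."""
    return sum(a * rbf(xi, x, gamma) for a, xi in zip(alpha, xs))
```

The dual (kernel) formulation is what makes KRR nonparametric: the model is a weighted sum of kernel evaluations at the training points, which is also why its prediction cost grows with the training set, in contrast to the linear models compared above.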
This commentary on the JSIS Special Issue on datification focuses on two key themes selected from among the many topics discussed by Special Issue authors: 1) the debate over algorithmic intelligence versus human intelligence, and 2) the consequences of strategic performance systems.
The current algorithmic versus human intelligence debate echoes earlier discussions in our field about whether expert systems should replace or support human experts. As appealing as it is to assume that algorithms support expert workers, research suggests that people are not very effective at monitoring and overriding automation. In addition, post-automation work tends to evolve toward lower human knowledge and skill. These observations should caution datification researchers against simplistic theories and should guide researchers to study the multilevel sociotechnical conditions and stakeholders involved in the design, use, and consequences of algorithms in organizations.
Strategic performance measurement and ranking systems are also not new, but what is new is the belief that they will not just inform, but also transform, human behavior. In this respect, performance systems resemble the career tournaments that promote intense competition and create great inequality in executive pay and promotion. In addition, performance systems increasingly serve as infrastructures, operating at multiple levels of analysis simultaneously. These observations imply that designing and studying such systems involve ethical choices, intensifying the demands on datification researchers.
Treating multiple health behavior risks on a population basis is one of the most promising approaches to enhancing health and reducing health care costs. Previous research demonstrated the efficacy of expert system interventions for three behaviors in a population of parents. The interventions provide individualized feedback that guides participants through the stages of change for each of their risk behaviors. This study extended that research to a more representative population of patients from primary care practice and to targeting of four rather than three behaviors.
Stage-based expert systems were applied to reduce smoking, improve diet, decrease sun exposure, and prevent relapse from regular mammography. A randomized controlled clinical trial recruited 69.2% of primary care patients (N = 5407) at home via telephone. Three intervention contacts were delivered for each risk factor at 0, 6, and 12 months. The primary outcome measures were the percentages of at-risk patients at baseline who progressed to the action or maintenance stages at the 24-month follow-up for each of the risk behaviors.
Significant treatment effects were found for each of the four behaviors, with 25.4% of intervention patients in action or maintenance for smoking, 28.8% for diet, and 23.4% for sun exposure. The treatment group had less relapse from regular mammography than the control group (6% vs. 10%).
Proactive, home-based, and stage-matched expert systems can produce relatively high population impacts on multiple behavior risks for cancer and other chronic diseases.
We consider positive rules in which the conclusion may contain existentially quantified variables, which makes reasoning tasks (such as conjunctive query answering or entailment) undecidable. These rules, called ∀∃-rules, have the same logical form as tuple-generating dependencies in databases and as conceptual graph rules. The aim of this paper is to provide a clearer picture of the frontier between decidability and undecidability of reasoning with these rules. Previously known decidable classes were based on forward chaining. On the one hand we extend these classes, on the other hand we introduce decidable classes based on backward chaining. A side result is the definition of a backward mechanism that takes the complex structure of ∀∃-rule conclusions into account. We classify all known decidable classes by inclusion. Then, we study the question of whether the union of two decidable classes remains decidable and show that the answer is negative, except for one class and a still open case. This highlights the interest in studying interactions between rules. We give a constructive definition of dependencies between rules and widen the landscape of decidable classes with conditions on rule dependencies and a mixed forward/backward chaining mechanism. Finally, we integrate rules with equality and negative constraints into our framework.
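A single forward-chaining (chase) step with a ∀∃-rule can be sketched as follows; the single-atom body, the predicate names, and the `_null` naming scheme are illustrative simplifications of the general mechanism:

```python
import itertools

fresh = itertools.count()  # supply of fresh labelled nulls

def chase_step(facts, body, head, exist_vars):
    """One forward-chaining (chase) step for a ∀∃-rule with a
    single-atom body: for every fact matching `body`, instantiate
    the `head` atoms, mapping existential variables to fresh nulls."""
    pred, args = body
    new = set()
    for f in facts:
        if f[0] != pred or len(f) - 1 != len(args):
            continue
        subst = dict(zip(args, f[1:]))       # bind frontier variables
        for v in exist_vars:                 # existential variables get
            subst[v] = f"_null{next(fresh)}" # fresh labelled nulls
        for hpred, hargs in head:
            new.add((hpred, *(subst[a] for a in hargs)))
    return facts | new
```

For example, applying the rule person(x) → ∃z hasParent(x, z) ∧ person(z) to the fact person(alice) derives hasParent(alice, _null0) and person(_null0); iterating such steps need not terminate, which illustrates the source of the undecidability discussed above.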
The ability to engage and retain players is perceived as a major factor in the success of games. However, the end goal of retention differs between entertainment and serious contexts. For an entertainment game, engagement and retention are linked to monetization; for a serious game, engagement needs to persist for as long as is required for learning or behavioral objectives to be met. User engagement is strongest when a balance is achieved between difficulty and skill, leading to a state of "flow." Hence, adapting difficulty could lead to increased and sustained engagement. Implementing this requires the identification of variables linked to game mechanics, manipulated based upon a player performance model. In some cases, this is possible by adjusting simple properties of objects, though more comprehensive solutions require extending or adapting content using procedural techniques. This paper proposes a six-step plan, validated against two case studies: an existing serious game with easily manipulated parameters, and a platformer game built from scratch where additional content is required, showing the process for different mechanics. To explore limitations, the results of two small-scale user evaluations, with 45 users in total, are reported, contributing to the understanding of how adaptive difficulty might be implemented and received.
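The core idea of steering difficulty toward a "flow" band from a player performance model can be sketched minimally; the target success rate, step size, and bounds below are illustrative assumptions, not values from the paper:

```python
def adapt_difficulty(difficulty, success_rate, target=0.7, step=0.1,
                     lo=0.0, hi=1.0):
    """Nudge difficulty toward a target success rate: raise it when
    the player succeeds too often (game too easy), lower it when the
    player fails too often (game too hard), clamped to [lo, hi]."""
    if success_rate > target:
        return min(hi, difficulty + step)
    if success_rate < target:
        return max(lo, difficulty - step)
    return difficulty
```

In practice the returned value would drive the manipulated variables linked to mechanics (e.g. object properties or procedurally generated content), and the success rate would come from a windowed measure of recent player performance.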
•We focused on keyword strategies for applying text-mining to patent data.
•Four factors were evaluated through k-means clustering and entropy values.
•Selecting keywords from abstracts based on TF–IDF represents the best keyword selection strategy.
•Using Boolean expressions represents the best keyword processing strategy.
Previous studies have applied various methodologies to analyze patent data for technology management, given the advances in data analysis techniques available. In particular, efforts have recently been made to use text-mining (i.e. extracting keywords from patent documents) for patent analysis purposes. The results of these studies may be affected by the keywords selected from the relevant documents – but, despite its importance, the existing literature has seldom explored strategies for selecting and processing keywords from patent documents.
The purpose of this research is to fill this research gap by focusing on keyword strategies for applying text-mining to patent data. Specifically, four factors are addressed: (1) which element of the patent documents to adopt for keyword selection, (2) what keyword selection methods to use, (3) how many keywords to select, and (4) how to transform the keyword selection results into an analyzable data format. An experiment based on an orthogonal array of the four factors was designed in order to identify the best strategy, in which the four factors were evaluated and compared through k-means clustering and entropy values. The research findings are expected to offer useful guidelines for how to select and process keywords for patent analysis, and so further increase the reliability and validity of research using text-mining for patent analysis.
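The TF-IDF-based keyword selection evaluated above can be sketched as follows; the whitespace tokenization, the raw-TF weighting, and the toy documents are illustrative, not the paper's experimental design:

```python
import math
from collections import Counter

def tfidf_keywords(docs, top_k=3):
    """Rank each document's tokens by TF-IDF (raw term frequency
    times log inverse document frequency) and keep the top_k
    highest-scoring tokens as that document's keywords."""
    tokenized = [d.lower().split() for d in docs]
    n = len(docs)
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))                  # document frequency
    keywords = []
    for toks in tokenized:
        tf = Counter(toks)
        scores = {t: tf[t] * math.log(n / df[t]) for t in tf}
        ranked = sorted(scores, key=scores.get, reverse=True)
        keywords.append(ranked[:top_k])
    return keywords
```

Terms that appear in every document (e.g. boilerplate patent vocabulary) get an IDF of zero and are pushed to the bottom of the ranking, which is why TF-IDF tends to surface document-specific technical terms.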
Landslide susceptibility maps are useful tools for risk analysis and assessment with practical implications because they provide relevant information for territorial planning, sustainable land use management, or even forecast and early warning systems. Achieving accurate assessments of landslide susceptibility for large regions (i.e. including national territories) is still a challenge, mainly because of the lack of proper landslide inventory and monitoring data. Romania represents one of the most landslide-affected countries in Europe. The current study presents an approach for drawing the landslide susceptibility map at a national scale for the Romanian territory, in agreement with the European methodological framework promoted for small-scale evaluations of landslide susceptibility. The methodological approach was adapted to the specific morphostructural, climatic and land use conditions of the country, as well as to the quantity and quality of the available data, in order to achieve a susceptibility zonation for slides and flows for the national territory. It follows a mixed statistical-heuristic approach based on a Spatial Multi-Criteria Evaluation (SMCE) procedure integrating both landslide information and expert knowledge. The national landslide susceptibility map outlines large areas ranked as having high and very high susceptibility throughout the Subcarpathian chain, the Moldavian and Transylvanian Plateaux and the Getic Piedmont. The prediction performance was examined quantitatively and qualitatively, by making use of regional geomorphological knowledge. The evaluations suggest that, despite uncertainties inherent to this analysis scale, spatially-differentiated models are able to better capture landslide conditioning frameworks and reproduce inter- and, especially, intraregional variability of landslide distribution as compared to a previous version of the national susceptibility map.
The study proves that combining statistical and heuristic approaches, calibrated and subsequently validated for distinct homogeneous morpho-litho-structural units, increases the prediction capacity of the national-scale model. The results are useful to public authorities at the national, regional, county and municipality levels, providing knowledge for the enhancement of disaster prevention and response plans.
•A national-scale slide and flow susceptibility zonation is achieved for Romania.
•The statistical-heuristic approach proves the most suitable for the varied relief.
•Analyses are carried out per morpho-structurally defined regions in Romania.
•Model calibration allows capturing the morphostructural and lithological diversity.
•Intraregional variability of landslide distribution is accurately reproduced.
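The core SMCE aggregation step, a weighted combination of standardized criterion maps followed by susceptibility classification, can be sketched minimally; the weights, class breaks, and 0-1 criterion scores below are illustrative placeholders, not the study's calibrated values:

```python
def smce(cells, weights, breaks=(0.25, 0.5, 0.75)):
    """Per-cell weighted sum of standardized (0-1) criterion scores
    (e.g. slope, lithology, land use), then classification into
    susceptibility classes by fixed break values."""
    total = sum(weights)
    w = [x / total for x in weights]          # normalize expert weights
    labels = ("low", "medium", "high", "very high")
    out = []
    for scores in cells:
        s = sum(wi * si for wi, si in zip(w, scores))
        cls = sum(s > b for b in breaks)      # count thresholds exceeded
        out.append((round(s, 3), labels[cls]))
    return out
```

In a real SMCE workflow the weights come from expert knowledge (here, from a statistical-heuristic calibration per morpho-litho-structural unit) and each "cell" is a raster pixel with one standardized score per conditioning factor.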
•The paper presents an ensemble-based system for the prediction of the number of software faults.
•The system is based on the heterogeneous ensemble method.
•The system uses three fault prediction techniques as base learners for the ensemble.
•Results are verified on Eclipse datasets.
Software fault prediction using different techniques has been carried out by various researchers previously. It is observed that the performance of these techniques varies from dataset to dataset, which makes them inconsistent for fault prediction in unknown software projects. On the other hand, the use of ensemble methods for software fault prediction can be very effective, as they take advantage of different techniques for the given dataset to produce better prediction results than any individual technique. Many works are available on binary-class software fault prediction (faulty or non-faulty prediction) using ensemble methods, but the use of ensemble methods for predicting the number of faults has not been explored so far. The objective of this work is to present a system using an ensemble of various learning techniques for predicting the number of faults in given software modules. We present a heterogeneous ensemble method for the prediction of the number of faults and use approaches based on a linear combination rule and a non-linear combination rule for the ensemble. The study is designed and conducted on different software fault datasets accumulated from publicly available data repositories. The results indicate that the presented system predicted the number of faults with higher accuracy, and the results are consistent across all the datasets. We also use prediction at level l (Pred(l)) and a measure of completeness to evaluate the results. Pred(l) gives the number of modules in a dataset for which the average relative error is less than or equal to a threshold value l. The results of the Pred(l) analysis and the measure of completeness analysis have also confirmed the effectiveness of the presented system for the prediction of the number of faults. Compared to single fault prediction techniques, the ensemble methods produced improved performance for the prediction of the number of software faults.
The main impact of this work is to allow better utilization of testing resources, helping in the early and quick identification of most of the faults in a software system.
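The linear combination rule and the Pred(l) evaluation measure described above can be sketched as follows; the base-learner outputs, weights, and threshold are illustrative, and the non-linear combination rule is omitted:

```python
def linear_combine(preds, weights):
    """Linear combination rule: weighted sum of the base learners'
    predicted fault counts, module by module."""
    return [sum(w * p[i] for w, p in zip(weights, preds))
            for i in range(len(preds[0]))]

def pred_at_l(actual, predicted, l=0.3):
    """Pred(l): fraction of modules whose relative error is <= l
    (modules with zero actual faults count as misses in this sketch)."""
    hits = sum(1 for a, p in zip(actual, predicted)
               if a and abs(a - p) / a <= l)
    return hits / len(actual)
```

In a heterogeneous ensemble the rows of `preds` would come from different base techniques (e.g. a regression tree, a neural network, and a linear model), and the weights could be tuned on a validation split rather than fixed.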
As automated vehicles receive more attention from the media, there has been an equivalent increase in coverage of the ethical choices a vehicle may be forced to make in certain crash situations with no clear safe outcome. Much of this coverage has focused on a philosophical thought experiment known as the "trolley problem," substituting an automated vehicle for the trolley and the car's software for the bystander. While this is a stark and straightforward example of ethical decision making for an automated vehicle, it risks marginalizing the entire field if it becomes the only ethical problem in the public's mind. In this chapter, I discuss the shortcomings of the trolley problem and introduce more nuanced examples that involve crash risk and uncertainty. Risk management is introduced as an alternative approach, and its ethical dimensions are discussed.
•We propose a novel many-objective clustering algorithm for categorical data.
•Our method can take advantage of different cluster validity indices simultaneously.
•Two versions of the proposed algorithm are presented, with and without a predefined cluster number.
•The findings can be instructive for solving other real-world optimization problems.
Categorical data clustering algorithms, in contrast to numerical ones, are still in their infancy, although some algorithms have been proposed in the literature. It is known that many clustering algorithms are posed as optimization problems, where internal cluster validity functions are utilized as the objectives to find the optimal partitions. However, most of these methods consider a single criterion that can merely be applied to detect a particular structure/distribution of data. To overcome this issue, in this paper, a novel many-objective fuzzy centroids clustering algorithm is proposed for categorical data using a reference-point-based non-dominated sorting genetic algorithm, which simultaneously optimizes several cluster validity indices. In our work, an effective fuzzy centroids algorithm is employed to design the proposed approach, which distinguishes it from competing k-modes-type methods. Here, the fuzzy memberships are used for chromosome representation, combined with a novel genetic operation to produce new solutions. Moreover, a variable-length encoding scheme is developed to find the clusters without any prior knowledge of their number. Experiments on several data sets demonstrate the superiority of the proposed algorithm over other state-of-the-art methods in terms of clustering accuracy and stability. In addition, our method can detect the cluster number, when it is not predefined, along with providing a desirable clustering solution.
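The fuzzy-membership computation that underlies fuzzy centroids methods for categorical data can be sketched as follows; the simple-matching (Hamming) dissimilarity, the fuzzifier value, and the crisp-assignment rule for exact matches are standard fuzzy k-modes style simplifications, not the paper's full many-objective procedure:

```python
def hamming(a, b):
    """Simple matching dissimilarity between two categorical records."""
    return sum(x != y for x, y in zip(a, b))

def fuzzy_memberships(points, modes, m=2.0):
    """Fuzzy membership of each categorical record to each cluster
    mode; each row sums to 1. A record identical to a mode gets a
    crisp assignment to that mode."""
    U = []
    k = len(modes)
    for p in points:
        d = [hamming(p, c) for c in modes]
        if 0 in d:  # exact match with a mode -> crisp membership
            j = d.index(0)
            U.append([1.0 if i == j else 0.0 for i in range(k)])
        else:       # standard fuzzy update with fuzzifier m
            U.append([1.0 / sum((d[j] / d[i]) ** (1.0 / (m - 1.0))
                                for i in range(k)) for j in range(k)])
    return U
```

In the genetic setting described above, such membership matrices would serve as the chromosome representation, with validity indices computed from them as the many objectives.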