A global field experiment with Seeking Alpha shows that textual complexity affects investor attention to news and market outcomes. Investors were randomly assigned different titles for the same news article. Holding the article fixed, a one-standard-deviation increase in complexity leads to 6.1% fewer views. Complexity is more off-putting for less-sophisticated investors, when attention is more limited, and when the news is likely less important. Exploiting an arbitrary rule for breaking ties between tested titles, I find that title complexity affects markets—lowering announcement turnover and volatility.
•Textual complexity reduces investor attention to news
•Less-sophisticated investors are more negatively affected by complexity
•Complexity matters more when investors' attention is more limited
•Complexity aversion declines when news is likely more interesting
•Complexity reduces announcement turnover and volatility
In this paper, a new production, allocation, location, inventory holding, distribution, and flow problem for a sustainable-resilient health care network related to the COVID-19 pandemic under uncertainty is developed, integrating sustainability aspects and resiliency concepts. A multi-period, multi-product, multi-objective, and multi-echelon mixed-integer linear programming (MILP) model is then formulated for this network. Formulating a new MILP model to design a sustainable-resilient healthcare network during the COVID-19 pandemic and developing three hybrid meta-heuristic algorithms are among the most important contributions of this research. To estimate the required demand for medicines, a simulation approach is employed. To cope with uncertain parameters, stochastic chance-constrained programming is proposed. Three meta-heuristic methods, namely multi-objective Teaching–Learning-Based Optimization (TLBO), Particle Swarm Optimization (PSO), and a Genetic Algorithm (GA), are also proposed to find Pareto solutions. Since heuristic approaches are sensitive to input parameters, the Taguchi approach is used to control and tune the parameters. A comparison using eight assessment metrics validates the quality of the Pareto frontiers obtained by the heuristic methods on the experimental problems. To validate the model, a set of sensitivity analyses on important parameters and a real case study in the United States are provided. Based on the experimental results, computational times, and the eight assessment metrics, the proposed methodology works well for the considered problems. The results show that as transportation costs rise, the total cost and the environmental impacts of sustainability increase steadily, while the social responsibility of staff rises gradually between −20% and 0% but drops suddenly from 0% to +20%.
In terms of the resiliency of the proposed network, the trends climb slightly and steadily. The results of this paper can be useful for hospitals, pharmacies, distributors, medicine manufacturers, and the Ministry of Health.
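The stochastic chance-constrained programming mentioned above can be illustrated with the standard normal-quantile reformulation; the function name, the normal demand distribution, and the numbers below are illustrative assumptions, not taken from the paper:

```python
from statistics import NormalDist

def chance_constrained_stock(mu, sigma, alpha):
    """Smallest stock level x such that P(demand <= x) >= 1 - alpha,
    assuming demand ~ Normal(mu, sigma). The chance constraint is
    replaced by the deterministic constraint x >= mu + z_(1-alpha) * sigma."""
    z = NormalDist().inv_cdf(1 - alpha)  # safety-factor quantile
    return mu + z * sigma

# Example: mean medicine demand 100 units, std. dev. 15, 5% shortage risk.
stock = chance_constrained_stock(100, 15, 0.05)
```

Tightening the risk level (smaller `alpha`) pushes the quantile `z` up and thus raises the required stock, which is the cost/robustness trade-off such models balance.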
•We introduce a new general “multi-stage” hyper-heuristic framework.
•The framework enables the use of multiple hyper-heuristics at different stages.
•We propose a multi-stage hyper-heuristic based on the designed framework.
•The approach outperformed the state-of-the-art approach from CHeSC2011.
•The overall performance outperformed that of each constituent hyper-heuristic.
There is growing interest in the design of reusable, general-purpose search methods that are applicable to different problems, rather than solutions tailored to a single particular problem. Hyper-heuristics have emerged as such high-level methods that explore the space formed by a set of heuristics (move operators) or heuristic components for solving computationally hard problems. A selection hyper-heuristic mixes and controls a predefined set of low-level heuristics with the goal of improving an initially generated solution: at each step of an iterative framework, it chooses and applies an appropriate heuristic to the solution in hand and decides whether to accept or reject the new solution. Designing an adaptive control mechanism for the heuristic selection and combining it with a suitable acceptance method is a major challenge, because both components can influence the overall performance of a selection hyper-heuristic. In this study, we describe a novel iterated multi-stage hyper-heuristic approach which cycles through two interacting hyper-heuristics and operates on the principle that not all low-level heuristics for a problem domain are useful at every point of the search process. The empirical results on a hyper-heuristic benchmark indicate the success of the proposed selection hyper-heuristic across six problem domains, beating the state-of-the-art approach.
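The select–apply–accept cycle described above can be sketched as follows; the low-level heuristics, toy objective, and improving-only acceptance rule are placeholder assumptions for illustration, not the components used in the study:

```python
import random

def selection_hyper_heuristic(initial, low_level_heuristics, objective,
                              iterations=1000, seed=0):
    """Generic selection hyper-heuristic loop: pick a low-level heuristic,
    apply it to the current solution, and accept the candidate only if it
    does not worsen the objective (improving-or-equal acceptance)."""
    rng = random.Random(seed)
    current, best = initial, initial
    for _ in range(iterations):
        heuristic = rng.choice(low_level_heuristics)    # heuristic selection
        candidate = heuristic(current, rng)             # apply move operator
        if objective(candidate) <= objective(current):  # move acceptance
            current = candidate
            if objective(current) < objective(best):
                best = current
    return best

# Two toy low-level heuristics over an integer vector.
def nudge(x, rng):
    """Move one randomly chosen component by +/- 1."""
    y = list(x)
    i = rng.randrange(len(y))
    y[i] += rng.choice([-1, 1])
    return y

def reverse(x, rng):
    """Reverse the vector (an objective-neutral diversification move here)."""
    return list(reversed(x))

objective = lambda x: sum(v * v for v in x)  # minimize sum of squares
best = selection_hyper_heuristic([5, -3, 7], [nudge, reverse], objective)
```

A real selection hyper-heuristic would replace the uniform-random choice with an adaptive selection mechanism and the improving-only rule with a more permissive acceptance method, which is exactly the design space the abstract discusses.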
Software Defined Networking (SDN) marks a paradigm shift towards an externalized and logically centralized network control plane. A particularly important task in SDN architectures is that of controller placement, i.e., the positioning of a limited number of resources within a network to meet various requirements. These requirements range from latency constraints to failure tolerance and load balancing. In most scenarios, at least some of these objectives are competing, thus no single best placement is available and decision makers need to find a balanced trade-off. This work presents POCO, a framework for Pareto-based Optimal COntroller placement that provides operators with Pareto optimal placements with respect to different performance metrics. In its default configuration, POCO performs an exhaustive evaluation of all possible placements. While this is practically feasible for small and medium sized networks, realistic time and resource constraints call for an alternative in the context of large scale networks or dynamic networks whose properties change over time. For these scenarios, the POCO toolset is extended by a heuristic approach that is less accurate, but yields faster computation times. An evaluation of this heuristic is performed on a collection of real world network topologies from the Internet Topology Zoo. Utilizing a measure for quantifying the error introduced by the heuristic approach allows an analysis of the resulting trade-off between time and accuracy. Additionally, the proposed methods can be extended to solve similar virtual functions placement problems which appear in the context of Network Functions Virtualization (NFV).
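POCO's default exhaustive mode, enumerating every placement and keeping the non-dominated ones, can be illustrated with a small Pareto filter; the toy one-dimensional topology and the two latency metrics below are invented for illustration and are not POCO's actual metrics:

```python
from itertools import combinations

def dominates(a, b):
    """a dominates b when a is no worse in every metric and strictly
    better in at least one (all metrics are minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_placements(nodes, k, metrics):
    """Exhaustively score every k-subset of nodes under each metric and
    keep only the non-dominated (Pareto-optimal) placements."""
    scored = {p: tuple(m(p) for m in metrics) for p in combinations(nodes, k)}
    return [p for p, s in scored.items()
            if not any(dominates(t, s) for t in scored.values())]

# Toy 1-D topology: node -> position on a line; latency = distance.
positions = {0: 0.0, 1: 1.0, 2: 2.0, 3: 10.0}

def max_latency(placement):
    """Worst-case node-to-nearest-controller distance."""
    return max(min(abs(positions[n] - positions[c]) for c in placement)
               for n in positions)

def avg_latency(placement):
    """Average node-to-nearest-controller distance."""
    return sum(min(abs(positions[n] - positions[c]) for c in placement)
               for n in positions) / len(positions)

front = pareto_placements(list(positions), 2, [max_latency, avg_latency])
```

The cost of this exhaustive enumeration grows combinatorially with network size, which is precisely why the abstract motivates the faster heuristic alternative for large or dynamic networks.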
This paper introduces a novel population-based bio-inspired meta-heuristic optimization algorithm, called Blood Coagulation Algorithm (BCA). BCA derives inspiration from the process of blood coagulation in the human body. The underlying concepts and ideas behind the proposed algorithm are the cooperative behavior of thrombocytes and their intelligent strategy of clot formation. These behaviors are modeled and utilized to underscore intensification and diversification in a given search space. A comparison with various state-of-the-art meta-heuristic algorithms over a test suite of 23 renowned benchmark functions reflects the efficiency of BCA. An extensive investigation is conducted to analyze the performance, convergence behavior and computational complexity of BCA. The comparative study and statistical test analysis demonstrate that BCA offers very competitive and statistically significant results compared to other eminent meta-heuristic algorithms. Experimental results also show the consistent performance of BCA in high dimensional search spaces. Furthermore, we demonstrate the applicability of BCA on real-world applications by solving several real-life engineering problems.
Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical ...algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.
This paper presents an investigation of a simple generic hyper-heuristic approach upon a set of widely used constructive heuristics (graph coloring heuristics) in timetabling. Within the hyper-heuristic framework, a tabu search approach is employed to search for permutations of graph heuristics which are used for constructing timetables in exam and course timetabling problems. This underpins a multi-stage hyper-heuristic where the tabu search employs permutations upon a different number of graph heuristics in two stages. We study this graph-based hyper-heuristic approach within the context of exploring fundamental issues concerning the search space of the hyper-heuristic (the heuristic space) and the solution space. Such issues have not been addressed in other hyper-heuristic research. These approaches are tested on both exam and course benchmark timetabling problems and are compared with the fine-tuned bespoke state-of-the-art approaches. The results are within the range of the best results reported in the literature. The approach described here represents a significantly more generally applicable approach than the current state of the art in the literature. Future work will extend this hyper-heuristic framework by employing methodologies which are applicable on a wider range of timetabling and scheduling problems.
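A single constructive step of the kind the tabu search permutes can be sketched as follows: one graph coloring heuristic (largest degree first, a standard choice) orders the exams, and a greedy assignment places each exam in the earliest conflict-free slot. The conflict graph and exam names are invented for illustration; the paper's hyper-heuristic searches over permutations of several such heuristics rather than applying one:

```python
def largest_degree_order(conflicts):
    """Order exams by number of conflicting exams, descending
    (the 'largest degree' graph coloring heuristic)."""
    return sorted(conflicts, key=lambda e: len(conflicts[e]), reverse=True)

def greedy_timetable(order, conflicts):
    """Assign each exam, in the given order, to the earliest time slot
    containing none of its conflicting exams."""
    slots = {}
    for exam in order:
        used = {slots[other] for other in conflicts[exam] if other in slots}
        t = 0
        while t in used:
            t += 1
        slots[exam] = t
    return slots

# Toy symmetric conflict graph: exams sharing students cannot share a slot.
conflicts = {"A": {"B", "C", "D"}, "B": {"A", "C"},
             "C": {"A", "B"}, "D": {"A"}}
timetable = greedy_timetable(largest_degree_order(conflicts), conflicts)
```

Different heuristic orderings generally yield timetables of different quality, which is what makes the space of heuristic permutations a meaningful search space for the tabu search.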
Heuristic evaluation is frequently employed to evaluate usability. While general heuristics are suitable for evaluating most user interfaces, there is still a need to establish heuristics for specific domains to ensure that their specific usability issues are identified. This paper presents a comprehensive review of 70 studies related to usability heuristics for specific domains. The aim of this paper is to review the processes that were applied to establish heuristics in specific domains and to identify gaps in order to provide recommendations for future research and areas for improvement. The most urgent issue found is the deficiency of validation effort following heuristics proposition and the lack of robustness and rigour of the validation methods adopted. Whether domain-specific heuristics perform better or worse than general ones is inconclusive due to the lack of validation quality and of clarity on how to assess the effectiveness of heuristics for specific domains. The lack of validation quality also hampers efforts to improve existing heuristics for specific domains, as their weaknesses are not addressed.
•Analytical review of 70 studies of domain-specific heuristics for usability evaluation.
•There is a deficiency of validation effort following heuristics proposition.
•It is inconclusive whether domain-specific heuristics are better than general ones.
•Fewer than 10% of the studies showed acceptable robustness and rigour.
•More than 80% of the studies used heuristics similar to Nielsen's.
Registered Replication Report. Bouwmeester, S.; Verkoeijen, P. P. J. L.; Aczel, B.; … Perspectives on Psychological Science, May 2017, Volume 12, Issue 3. Journal article; peer reviewed; open access.
In an anonymous 4-person economic game, participants contributed more money to a common project (i.e., cooperated) when required to decide quickly than when forced to delay their decision (Rand, Greene & Nowak, 2012), a pattern consistent with the social heuristics hypothesis proposed by Rand and colleagues. The results of studies using time pressure have been mixed, with some replication attempts observing similar patterns (e.g., Rand et al., 2014) and others observing null effects (e.g., Tinghög et al., 2013; Verkoeijen & Bouwmeester, 2014). This Registered Replication Report (RRR) assessed the size and variability of the effect of time pressure on cooperative decisions by combining 21 separate, preregistered replications of the critical conditions from Study 7 of the original article (Rand et al., 2012). The primary planned analysis used data from all participants who were randomly assigned to conditions and who met the protocol inclusion criteria (an intent-to-treat approach that included the 65.9% of participants in the time-pressure condition and 7.5% in the forced-delay condition who did not adhere to the time constraints), and we observed a difference in contributions of -0.37 percentage points compared with an 8.6 percentage point difference calculated from the original data. Analyzing the data as the original article did, including data only for participants who complied with the time constraints, the RRR observed a 10.37 percentage point difference in contributions compared with a 15.31 percentage point difference in the original study. In combination, the results of the intent-to-treat analysis and the compliant-only analysis are consistent with the presence of selection biases and the absence of a causal effect of time pressure on cooperation.