Rapid development and adoption of AI, machine learning, and natural language processing applications challenge managers and policy makers to harness these transformative technologies. In this context, the authors provide evidence of a novel “word-of-machine” effect: the phenomenon by which utilitarian/hedonic attribute trade-offs determine preference for, or resistance to, AI-based recommendations compared with traditional word of mouth (i.e., human-based recommendations). The word-of-machine effect stems from a lay belief that AI recommenders are more competent than human recommenders in the utilitarian realm and less competent than human recommenders in the hedonic realm. As a consequence, the importance or salience of utilitarian attributes determines preference for AI recommenders over human ones, and the importance or salience of hedonic attributes determines resistance to AI recommenders in favor of human ones (Studies 1–4). The word-of-machine effect is robust to attribute complexity, number of options considered, and transaction costs. The word-of-machine effect reverses for utilitarian goals if a recommendation needs to be matched to a person’s unique preferences (Study 5) and is eliminated in the case of human–AI hybrid decision making (i.e., augmented rather than artificial intelligence; Study 6). An intervention based on the consider-the-opposite protocol attenuates the word-of-machine effect (Studies 7a–b).
Resistance to Medical Artificial Intelligence
Longoni, Chiara; Bonezzi, Andrea; Morewedge, Carey K.
The Journal of Consumer Research, 12/2019, Volume 46, Issue 4
Journal Article
Peer reviewed
Open access
Abstract
Artificial intelligence (AI) is revolutionizing healthcare, but little is known about consumer receptivity to AI in medicine. Consumers are reluctant to utilize healthcare provided by AI in real and hypothetical choices, separate and joint evaluations. Consumers are less likely to utilize healthcare (study 1), exhibit lower reservation prices for healthcare (study 2), are less sensitive to differences in provider performance (studies 3A–3C), and derive negative utility if a provider is automated rather than human (study 4). Uniqueness neglect, a concern that AI providers are less able than human providers to account for consumers’ unique characteristics and circumstances, drives consumer resistance to medical AI. Indeed, resistance to medical AI is stronger for consumers who perceive themselves to be more unique (study 5). Uniqueness neglect mediates resistance to medical AI (study 6), and is eliminated when AI provides care (a) that is framed as personalized (study 7), (b) to consumers other than the self (study 8), or (c) that only supports, rather than replaces, a decision made by a human healthcare provider (study 9). These findings make contributions to the psychology of automation and medical decision making, and suggest interventions to increase consumer acceptance of AI in medicine.
Medical artificial intelligence is cost-effective and scalable and often outperforms human providers, yet people are reluctant to use it. We show that resistance to the utilization of medical artificial intelligence is driven by both the subjective difficulty of understanding algorithms (the perception that they are a 'black box') and by an illusory subjective understanding of human medical decision-making. In five pre-registered experiments (1-3B: N = 2,699), we find that people exhibit an illusory understanding of human medical decision-making (study 1). This leads people to believe they better understand decisions made by human than algorithmic healthcare providers (studies 2A,B), which makes them more reluctant to utilize algorithmic than human providers (studies 3A,B). Fortunately, brief interventions that increase subjective understanding of algorithmic decision processes increase willingness to utilize algorithmic healthcare providers (studies 3A,B). A sixth study on Google Ads for an algorithmic skin cancer detection app finds that the effectiveness of such interventions generalizes to field settings (study 4: N = 14,013).
Artificial intelligence (AI) is pervading the government and transforming how public services are provided to consumers across policy areas spanning allocation of government benefits, law enforcement, risk monitoring, and the provision of services. Despite technological improvements, AI systems are fallible and may err. How do consumers respond when learning of AI failures? In 13 preregistered studies (N = 3,724) across a range of policy areas, the authors show that algorithmic failures are generalized more broadly than human failures. This effect is termed “algorithmic transference” as it is an inferential process that generalizes (i.e., transfers) information about one member of a group to another member of that same group. Rather than reflecting generalized algorithm aversion, algorithmic transference is rooted in social categorization: it stems from how people perceive a group of AI systems versus a group of humans. Because AI systems are perceived as more homogeneous than people, failure information about one AI algorithm is transferred to another algorithm to a greater extent than failure information about a person is transferred to another person. Capturing AI's impact on consumers and societies, these results show how the premature or mismanaged deployment of faulty AI technologies may undermine the very institutions that AI systems are meant to modernize.
Ads promising a desired change are ubiquitous in the marketplace. These ads typically include visuals of the starting and ending point of the promised change ("before/after" ads). "Progression" ads, which include intermediate steps in addition to starting and ending points, are much rarer in the marketplace. Across several consumer domains, the authors show an ad-type effect: progression ads foster spontaneous simulation of the process through which the change will happen, which makes these ads more credible and, in turn, more persuasive than before/after ads (Studies 1–3). The authors also show that impairing process simulation and high skepticism moderate the ad-type effect (Studies 4–5). Finally, they show effect reversals: if consumers focus on achieving the desired results quickly, and it is possible to do so, progression ads and the associated process simulation backfire in terms of credibility and persuasion (Studies 6–7). These findings contribute to existing research by identifying conditions under which progression ads have beneficial or disadvantageous effects. These findings have managerial implications because they run counter to current marketing practices, which favor before/after over progression ads.
Abstract
In Longoni et al. (2019), we examine how algorithm aversion influences utilization of healthcare delivered by human and artificial intelligence providers. Pezzo and Beckstead’s (2020) commentary asks whether resistance to medical AI takes the form of a noncompensatory decision strategy, in which a single attribute determines provider choice, or whether resistance to medical AI is one of several attributes considered in a compensatory decision strategy. We clarify that our paper both claims and finds that, all else equal, resistance to medical AI is one of several attributes (e.g., cost and performance) influencing healthcare utilization decisions. In other words, resistance to medical AI is a consequential input to compensatory decisions regarding healthcare utilization and provider choice decisions, not a noncompensatory decision strategy. People do not always reject healthcare provided by AI, and our article makes no claim that they do.
Across a range of decision contexts, we provide evidence of a novel proximity bias in probability judgments, whereby spatial distance and outcome valence systematically interact in determining probability judgments. Six hypothetical and incentive‐compatible experiments (combined N = 4,007) show that a positive outcome is estimated as more likely to occur when near than distant, whereas a negative outcome is estimated as less likely to occur when near than distant (studies 1–6). The proximity bias is explained by wishful thinking and thus perceptions of outcome desirability (study 3), and it does not manifest when an outcome is less relevant for the self, such as the case of outcomes with little consequence for the self (studies 4 and 5) or when estimating outcomes for others who are irrelevant to the self (study 6). Overall, the proximity bias we document deepens our understanding of the antecedents of probability judgments.
Does validating the purchase of green products hamper subsequent green behaviors in people committed to the identity goal of being green? Positive feedback on purchasing green products led to less recycling compared to negative feedback, with no-feedback participants lying in between (Study 1). Assuming that receiving positive feedback on buying green products results in a state of goal completeness, we hypothesized and observed that constructs (e.g., earth) related to being green were the least accessible in positive feedback participants as compared to no feedback and (even more so) to negative feedback participants (Study 2). This pattern of results also emerged with respect to the perception of the color green (i.e., a green patch was perceived the least green by positive feedback participants; Study 3). These findings suggest that being praised for buying green creates a state of goal completeness that hampers subsequent striving for the aspired-to identity goal.
• We demonstrate ironic behavioral effects of validating green choices.
• Positive feedback on green choices leads to behaving less green (recycling less).
• We show that differential states of goal completeness account for this effect.
Marketers are adopting increasingly sophisticated ways to engage with customers throughout their journeys. We extend prior perspectives on the customer journey by introducing the role of digital signals that consumers emit throughout their activities. We argue that the ability to detect and act on consumer digital signals is a source of competitive advantage for firms. Technology enables firms to collect, interpret, and act on these signals to better manage the customer journey. While some consumers’ desire for privacy can restrict the opportunities technology provides marketers, other consumers’ desire for personalization can encourage the use of technology to inform marketing efforts. We posit that this difference in consumers’ willingness to emit observable signals may hinge on the strength of their relationship with the firm. We next discuss factors that may shift consumer preferences and consequently affect the technology-enabled opportunities available to firms. We conclude with a research agenda that focuses on consumers, firms, and regulators.