Chatbots on social networking sites are a recent innovation in computer-mediated marketing communication. In this study, 245 Facebook users between 18 and 35 years of age (mean age = 25.97, SD = 4.92) were asked to order movie tickets through Cinebot, a Facebook chatbot specifically built for the study. Afterwards, they were asked to evaluate their experiences via an online survey. The first purpose of this article was to investigate whether and how the perceived helpfulness and usefulness of a chatbot consulted on the Facebook Messenger platform affected the perceived intrusiveness of chatbot-initiated advertising at a later stage. In a second analysis, the relation between perceived intrusiveness and patronage intentions (i.e. purchase and recommendation intention of the product) was investigated. In addition, the roles of message acceptance as a mediator and perceived message relevance as a moderator in this latter model were explored. To the best of our knowledge, this study is the first to investigate chatbot advertising, so our research findings may hold important managerial implications.
•Chatbots' helpfulness and usefulness negatively affect the perceived intrusiveness of chatbot ads.
•In turn, Facebook chatbot ads' perceived intrusiveness predicts patronage intentions.
•However, message acceptance and product involvement explain this relationship.
Purpose
Artificial intelligence chatbots are shifting the nature of online services by revolutionizing the interactions of service providers with consumers. Thus, this study aims to explore the antecedents (e.g. compatibility, perceived ease of use, performance expectancy and social influence) and consequences (e.g. chatbot usage intention and customer engagement) of chatbot initial trust.
Design/methodology/approach
A sample of 184 responses was collected in Lebanon using a questionnaire and analyzed using structural equation modeling (SEM) by AMOS 24.
Findings
The results revealed that except for performance expectancy, the other three factors (compatibility, perceived ease of use and social influence) significantly boost customers’ initial trust toward chatbots. Further, initial trust in chatbots enhances the intention to use chatbots and encourages customer engagement.
Research limitations/implications
The study provides insights into some variables influencing initial chatbot trust. Future studies could extend the model by adding other variables (e.g. customer experience and attitude), in addition to exploring the dark side of artificial intelligence chatbots.
Practical implications
This study suggests key insights for marketing managers on how to build chatbot initial trust, which, in turn, will lead to an increase in customers’ interactions with the brand.
Originality/value
The current study marks substantial contributions to the artificial intelligence marketing literature by proposing and testing a novel conceptual model that examines for the first time the factors that impact chatbot initial trust and the key outcomes of the latter.
•There seems to be substantial variation and nuance in the human–chatbot relationship formation process.
•Self-disclosure remains important for human–chatbot relationship formation.
•The chatbot's ability to support deep-felt human needs, as well as to provide variety in interactions, appears to be a driver pushing human–chatbot relationships forward.
•Unpredictable events and technical difficulties can negatively impact a human–chatbot relationship or lead to its termination.
Social chatbots have become more advanced, paving the way for human–chatbot relationships (HCRs). Although this phenomenon has already received some research attention, the results have been contradictory, and there is uncertainty regarding how to understand HCR formation. To provide the needed knowledge on this phenomenon, we conducted a qualitative longitudinal study. We interviewed 25 participants over a 12-week period to understand how their HCRs formed with the popular chatbot Replika. We found that the HCRs formed gradually and mostly in line with the assumptions of Social Penetration Theory. Our findings indicate the need to acknowledge substantial variation and nuance in the HCR formation process, plus variation in the onset of self-disclosure and in the subsequent relationship formation. The results show that important drivers pushing the relationship toward attachment and perceived closeness appear to be Replika's ability to participate in a variety of interactions, as well as to support more deep-felt human needs related to social contact and self-reflection. In contrast, unpredictable events and technical difficulties could hinder relationship formation and lead to termination. Finally, we discuss the appropriateness of using a theoretical framework developed for human–human relationships when investigating HCRs, and we suggest directions for future research.
The present research focuses on the interplay between two common features of the customer service chatbot experience: gaze direction and anthropomorphism. Although the dominant approach in marketing theory and practice is to make chatbots as human‐like as possible, the current study, built on the humanness‐value‐loyalty model, addresses the chain of effects through which chatbots' nonverbal behaviors affect customers' willingness to disclose personal information and purchase intentions. By means of two experiments that adopt a real chatbot in a simulated shopping environment (i.e., car rental and travel insurance), the present work allows us to understand how to reduce individuals' tendency to see conversational agents as less knowledgeable and empathetic compared with humans. The results show that warmth perceptions are affected by gaze direction, whereas competence perceptions are affected by anthropomorphism. Warmth and competence perceptions are found to be key drivers of consumers’ skepticism toward the chatbot, which, in turn, affects consumers’ trust toward the service provider hosting the chatbot, ultimately leading consumers to be more willing to disclose their personal information and to repatronize the e‐tailer in the future. Building on the Theory of Mind, our results show that perceiving competence from a chatbot makes individuals less skeptical as long as they feel they are good at detecting others’ ultimate intentions.
Objective:
The aim of this review was to explore the current evidence for conversational agents or chatbots in the field of psychiatry and their role in screening, diagnosis, and treatment of mental illnesses.
Methods:
A systematic literature search in June 2018 was conducted in PubMed, EmBase, PsycINFO, Cochrane, Web of Science, and IEEE Xplore. Studies were included that involved a chatbot in a mental health setting focusing on populations with or at high risk of developing depression, anxiety, schizophrenia, bipolar, and substance abuse disorders.
Results:
From the selected databases, 1466 records were retrieved and 8 studies met the inclusion criteria. Two additional studies were included from reference list screening, for a total of 10 included studies. Overall, the potential for conversational agents in psychiatric use was reported to be high across all studies. In particular, conversational agents showed potential for benefit in psychoeducation and self-adherence. In addition, satisfaction ratings of chatbots were high across all studies, suggesting that they would be an effective and enjoyable tool in psychiatric treatment.
Conclusion:
Preliminary evidence for psychiatric use of chatbots is favourable. However, given the heterogeneity of the reviewed studies, further research with standardized outcomes reporting is required to more thoroughly examine the effectiveness of conversational agents. Regardless, early evidence shows that with the proper approach and research, the mental health field could use conversational agents in psychiatric treatment.
Advances in artificial intelligence strengthen chatbots’ ability to resemble human conversational agents. For some application areas, it may be tempting not to be transparent regarding a conversational agent’s nature as chatbot or human. However, the uncanny valley theory suggests that such a lack of transparency may cause uneasy feelings in the user. In this study, we combined quantitative and qualitative methods to investigate this issue. First, we used a 2 x 2 experimental research design (n = 28) to investigate effects of a lack of transparency on the perceived pleasantness of the conversation, in addition to perceived human likeness and affinity for the conversational agent. Second, we conducted an exploratory analysis of qualitative participant reports on these conversations. We did not find that a lack of transparency negatively affected user experience, but we identified three factors important to participants’ assessments. The findings are of theoretical and practical significance and motivate future research.
This study explores the trends of chatbots in education studies by conducting a literature review to analyze relevant papers published in the Social Science Citation Index (SSCI) journals by searching the Web of Science (WoS) database. From the analysis results, it was found that the United States, Taiwan and Hong Kong are the top three contributing countries or regions. In addition, most studies adopted quantitative methods in their research design, such as ANOVA (Analysis of variance), descriptive statistics, t test, and correlation analysis. ANCOVA (Analysis of covariance) was the most frequently adopted approach for comparing the performances or perceptions of different groups of students. From the analysis results, the greatest proportion of studies adopted guided learning, followed by no learning activities. It was determined that the studies related to chatbots in education are still in an early stage since there are few empirical studies investigating the use of effective learning designs or learning strategies with chatbots. This implies much room for conducting relevant research to drive innovative teaching in terms of improving the learning process and learning outcomes. Finally, we highlight the research gaps and suggest several directions for future research based on the findings in the present study.
The article concerns users’ experiences of interacting with well-being chatbots. The text shows how chatbots can act as virtual companions and, to some extent, therapists for people in their daily reality. It also reflects on why individuals choose such a form of support for their well-being, concerning, among other things, the stigmatization of mental health problems. The article discusses and compares various dimensions of users’ interactions with three popular chatbots: Wysa, Woebot, and Replika. The text both refers to the results of research on well-being chatbots and analytically engages in a dialogue with those results through sociological (and philosophical) reflection. The issues taken up in the paper include an in-depth reflection on the aspects of the relationship between humans and chatbots that allow users to establish an emotional bond with their virtual companions. In addition, the discussion addresses the issue of a user’s sense of alienation when interacting with a virtual companion, as well as the problem of anxieties and dilemmas people may experience therein. In the context of alienation, the article also attempts to conceptualize that theme with respect to available conceptual resources.
Chatbots are envisioned to dramatically change the future of Software Engineering, allowing practitioners to chat and inquire about their software projects and interact with different services using natural language. At the heart of every chatbot is a Natural Language Understanding (NLU) component that enables the chatbot to understand natural language input. Recently, many NLU platforms have been offered to serve as off-the-shelf NLU components for chatbots; however, selecting the best NLU for Software Engineering chatbots remains an open challenge. Therefore, in this paper, we evaluate four of the most commonly used NLUs, namely IBM Watson, Google Dialogflow, Rasa, and Microsoft LUIS, to shed light on which NLU should be used in Software Engineering based chatbots. Specifically, we examine the NLUs' performance in classifying intents, the stability of their confidence scores, and their extraction of entities. To evaluate the NLUs, we use two datasets that reflect two common tasks performed by Software Engineering practitioners: 1) chatting with the chatbot to ask questions about software repositories, and 2) asking development questions on Q&A forums (e.g., Stack Overflow). According to our findings, IBM Watson is the best performing NLU when considering all three aspects (intent classification, confidence scores, and entity extraction). However, the results from each individual aspect show that, in intent classification, IBM Watson performs the best with an F1-measure > 84%, but in confidence scores, Rasa comes out on top with a median confidence score higher than 0.91. Our results also show that all NLUs, except for Dialogflow, generally provide trustworthy confidence scores. For entity extraction, Microsoft LUIS and IBM Watson outperform the other NLUs in the two SE tasks.
Our results provide guidance to software engineering practitioners when deciding which NLU to use in their chatbots.
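The intent-classification comparison above rests on standard per-intent F1 scores aggregated over a labeled test set. As a minimal sketch of that metric (illustrative only: the intent labels and predictions below are hypothetical stand-ins, not drawn from the paper's actual datasets):

```python
from collections import Counter

def f1_per_intent(y_true, y_pred):
    """Per-intent F1: precision and recall computed from TP/FP/FN counts."""
    scores = {}
    for label in set(y_true) | set(y_pred):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        scores[label] = (2 * precision * recall / (precision + recall)
                         if precision + recall else 0.0)
    return scores

def weighted_f1(y_true, y_pred):
    """Support-weighted average of per-intent F1 scores."""
    scores = f1_per_intent(y_true, y_pred)
    support = Counter(y_true)
    return sum(scores[l] * support[l] / len(y_true) for l in support)

# Hypothetical gold intents and NLU predictions for repository questions
gold = ["CountCommits", "CountCommits", "ListIssues", "ListIssues", "ListIssues"]
pred = ["CountCommits", "ListIssues", "ListIssues", "ListIssues", "ListIssues"]
print(round(weighted_f1(gold, pred), 3))
```

In practice one would feed each NLU platform the same held-out utterances and compare the resulting weighted F1 scores; libraries such as scikit-learn provide the same computation via `f1_score(..., average="weighted")`.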