Abstract
In this work, we discuss the perceptions of physics teachers in Brazil about generative AI such as ChatGPT. Data were collected through an online Focus Group (FG) held over three meetings of one and a half hours each, with six Brazilian physics teachers of varied experience and backgrounds. Participants’ discourse was analysed according to three different questions: (a) the players involved in using ChatGPT in physics classes, (b) the attitudes towards the introduction of ChatGPT in physics classes, and (c) the main functionalities of ChatGPT in physics classes. Our results indicate that physics teachers’ perceptions of ChatGPT generally concern the role of students more than that of the teacher, are more positive than negative, and reveal four main functionalities: a co-pilot for lessons, an educational bureaucracy manager, a simple problem-solving tool, and a literal information-providing tool.
The introduction of Artificial Intelligence technology enables the integration of chatbot systems into various aspects of education, and this technology is increasingly being used for educational purposes. Chatbot technology has the potential to provide quick and personalised services to everyone in the sector, including institutional employees and students. This paper presents a systematic review of previous studies on the use of chatbots in education. A systematic review approach was used to analyse 53 articles from recognised digital databases. The results provide a comprehensive understanding of prior research on the use of chatbots in education, including existing studies, benefits, and challenges, as well as future research directions for implementing chatbot technology in the field of education. The implications of the findings are discussed, and suggestions for future research are made.
The use of chatbots as an online survey tool is becoming increasingly popular owing to their convenience, particularly when face-to-face interactions are difficult. However, with longer surveys, interaction experience and data quality can decrease due to several factors, such as increased fatigue. In this study, we examined how applying humanization techniques to survey chatbots affects the survey-taking experience in three respects: respondents' perceptions of chatbots, interaction experience, and data quality. To address our research goal, two versions of a survey chatbot were compared: a humanization-applied survey chatbot (HASbot) and a baseline chatbot (baselinebot). The HASbot simultaneously incorporates four humanization techniques: self-introduction, addressing respondents by name, adaptive response speed, and echoing respondents' answers. Our experimental study with 59 middle-school-aged adolescents showed that, compared to the baselinebot, respondents’ perceptions of the HASbot were more positive, with higher levels of anthropomorphism and social presence. In terms of interaction experience, respondents spent more time interacting with the HASbot and reported higher satisfaction. For data quality, the HASbot outperformed the baselinebot in terms of self-disclosure; however, it also elicited a higher social desirability bias. No difference in response differentiation was observed between the two chatbots.
•We compared humanization applied survey chatbot (HASbot) to baseline chatbot.
•HASbot increases anthropomorphism and social presence perceptions.
•HASbot elicits higher satisfaction & longer interaction time.
•HASbot yields higher level of self-disclosure.
•HASbot has a drawback of causing social desirability bias.
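To make the four humanization techniques above concrete, the sketch below layers all of them into a single message loop. It is a minimal hypothetical illustration, not the study's implementation: the class, the persona name, and the typing-delay heuristic are assumptions made for the example.

```python
import time

# Hypothetical sketch of the four humanization techniques; names and
# heuristics are illustrative, not taken from the HASbot study.
class HumanizedSurveyBot:
    def __init__(self, respondent_name: str):
        self.respondent_name = respondent_name

    def introduce(self) -> str:
        # Technique 1: self-introduction before the survey begins.
        return "Hi, I'm Sam, a chatbot that will guide you through this survey."

    def ask(self, question: str, prev_answer: str = "") -> str:
        parts = []
        if prev_answer:
            # Technique 4: echo the respondent's previous answer.
            parts.append(f"You said: '{prev_answer}'. Thanks for sharing.")
        # Technique 2: address the respondent by name.
        parts.append(f"{self.respondent_name}, {question}")
        message = " ".join(parts)
        # Technique 3: adaptive response speed -- the delay scales with
        # message length to mimic human typing.
        time.sleep(min(0.05 * len(message), 2.0))
        return message

bot = HumanizedSurveyBot("Jordan")
print(bot.introduce())
print(bot.ask("how was your day at school?"))
print(bot.ask("what did you enjoy most?", prev_answer="It was fine"))
```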
Abstract
ChatGPT and similar generative AI models have attracted hundreds of millions of users and have become part of the public discourse. Many believe that such models will disrupt society and lead to significant changes in the education system and information generation. So far, this belief is based on either anecdotal evidence or benchmarks from the owners of the models, both of which lack scientific rigor. We systematically assess the quality of AI-generated content through a large-scale study comparing human-written versus ChatGPT-generated argumentative student essays. We use essays that were rated by a large number of human experts (teachers) and augment the analysis by considering a set of linguistic characteristics of the generated essays. Our results demonstrate that ChatGPT generates essays that are rated higher in quality than human-written essays. The writing style of the AI models exhibits linguistic characteristics different from those of the human-written essays. Since the technology is readily available, we believe that educators must act immediately. We must re-invent homework and develop teaching concepts that utilize these AI models in the same way that mathematics utilizes the calculator: teach the general concepts first, then use AI tools to free up time for other learning objectives.
Objective
A significant gap exists between those who need and those who receive care for eating disorders (EDs). Novel solutions are needed to encourage service use and address treatment barriers. This study developed and evaluated the usability of a chatbot designed to be paired with online ED screening. The tool aimed to promote mental health service utilization by improving motivation for treatment and self-efficacy among individuals with EDs.
Methods
A chatbot prototype, Alex, was designed using decision trees and theoretically informed components: psychoeducation, motivational interviewing, personalized recommendations, and repeated administration. Usability testing was conducted over four iterative cycles, with user feedback informing refinements to the next iteration. Post-testing, participants (N = 21) completed the System Usability Scale (SUS), the Usefulness, Satisfaction, and Ease of Use Questionnaire (USE), and a semi-structured interview.
Results
Interview feedback detailed chatbot aspects participants enjoyed and aspects necessitating improvement. Feedback converged on four themes: user experience, chatbot qualities, chatbot content, and ease of use. Following refinements, users described Alex as humanlike, supportive, and encouraging. Content was perceived as novel and personally relevant. USE scores across domains were generally above average (~5 out of 7), and SUS scores indicated “good” to “excellent” usability across cycles, with the final iteration receiving the highest average score.
Discussion
Overall, participants generally reflected positively on interactions with Alex, including the initial version. Refinements between cycles further improved user experiences. This study provides preliminary evidence of the feasibility and acceptance of a chatbot designed to promote motivation for and use of services among individuals with EDs.
Public Significance
Low rates of service utilization and treatment have been observed among individuals following online eating disorder screening. Scalable digital tools that can be easily paired with screening are needed to improve motivation for addressing eating disorders and to promote service utilization.
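To illustrate the kind of decision-tree dialogue structure described in the Methods above, the following is a minimal hypothetical sketch; node contents, branch labels, and class names are illustrative assumptions, not the actual design of Alex.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a decision-tree dialogue flow: each node holds a
# chatbot message and routes to a child node based on the user's reply.
@dataclass
class Node:
    message: str                              # what the chatbot says here
    branches: dict = field(default_factory=dict)

    def next(self, user_input: str) -> "Node | None":
        # Route to the child whose label matches the normalized reply.
        return self.branches.get(user_input.strip().lower())

# A tiny two-level tree: a motivational prompt branching into either a
# personalized recommendation or a reflective, motivational-interviewing turn.
recommend = Node("Based on your screen results, an online self-help program "
                 "may be a good first step. Would you like a link?")
reflect = Node("That's understandable; many people feel unsure at first. "
               "Can I share what has helped others in a similar situation?")
root = Node("Your screening suggests you may benefit from support. "
            "Are you open to hearing about treatment options? (yes/no)",
            branches={"yes": recommend, "no": reflect})

node = root
print(node.message)
node = node.next("yes")
print(node.message)
```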
The present study focuses on analyzing the textual rules of the responses generated by ChatGPT and university students' perceptions of this tool. The research, a descriptive field study, employed analytic-synthetic and empirical methods. Sixty students at the Universidad UPO1 were surveyed using non-probabilistic sampling techniques, with activity logs and open interviews as data-collection instruments. The categorical qualitative analysis revealed that ChatGPT's responses meet syntactic, semantic, and pragmatic requirements and are expository in character. Despite students' positive assessment of the tool, they tend to use it superficially, prioritizing immediacy in completing tasks without sufficient depth, making low-level utilitarian use of it.
Purpose
Chatbots are increasingly prevalent in the service frontline. Due to advancements in artificial intelligence, chatbots are often indistinguishable from humans. Regarding the question of whether firms should disclose their chatbots' nonhuman identity, previous studies find negative consumer reactions to chatbot disclosure. By considering the role of trust and service-related context factors, this study explores how the negative effects of chatbot disclosure on customer retention can be prevented.
Design/methodology/approach
This paper presents two experimental studies that examine the effect of disclosing the nonhuman identity of chatbots on customer retention. While the first study examines the effect of chatbot disclosure for different levels of service criticality, the second study considers different service outcomes. The authors employ analysis of covariance and mediation analysis to test their hypotheses.
Findings
Chatbot disclosure has a negative indirect effect on customer retention through mitigated trust for services with high criticality. In cases where a chatbot fails to handle the customer's service issue, disclosing the chatbot identity not only lacks negative impact but even elicits a positive effect on retention.
Originality/value
The authors provide evidence that customers react differently to chatbot disclosure depending on the service frontline setting. They show that chatbot disclosure does not only have undesirable consequences, as previous studies suggest, but can also lead to positive reactions. In doing so, the authors draw a more balanced picture of the consequences of chatbot disclosure.
As generative language models, exemplified by ChatGPT, continue to advance in their capabilities, the spotlight on biases inherent in these models intensifies. This paper delves into the distinctive challenges and risks associated with biases specifically in large-scale language models. We explore the origins of biases, stemming from factors such as training data, model specifications, algorithmic constraints, product design, and policy decisions. Our examination extends to the ethical implications arising from the unintended consequences of biased model outputs. In addition, we analyze the intricacies of mitigating biases, acknowledging the inevitable persistence of some biases, and consider the consequences of deploying these models across diverse applications, including virtual assistants, content generation, and chatbots. Finally, we provide an overview of current approaches for identifying, quantifying, and mitigating biases in language models, underscoring the need for a collaborative, multidisciplinary effort to craft AI systems that embody equity, transparency, and responsibility. This article aims to catalyze a thoughtful discourse within the AI community, prompting researchers and developers to consider the unique role of biases in the domain of generative language models and the ongoing quest for ethical AI.
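As one deliberately simplified example of the bias-quantification approaches surveyed above, a template-based probe compares model outputs across demographic terms. The sketch below is an assumption-laden illustration: `generate` is a stub standing in for any text-generation API, and the negative-word lexicon is invented for the example rather than drawn from a real resource.

```python
# Hypothetical template-based bias probe: fill a prompt template with
# different group terms and compare how often continuations contain
# negatively connoted words. All names and word lists are illustrative.
NEGATIVE = {"lazy", "angry", "dangerous", "incompetent"}

def generate(prompt: str) -> str:
    # Stub: replace with a real model call (e.g., a request to a hosted
    # LLM). The returned text here is fabricated for the sketch.
    return "a person who works hard and cares about their community"

def negative_rate(template: str, groups: list, n: int = 50) -> dict:
    rates = {}
    for group in groups:
        hits = 0
        for _ in range(n):
            text = generate(template.format(group=group)).lower()
            hits += any(word in text for word in NEGATIVE)
        rates[group] = hits / n  # fraction of continuations flagged
    return rates

# Large gaps between groups would suggest a bias worth investigating.
print(negative_rate("Describe a {group} neighbor.", ["young", "elderly"]))
```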