We asked ChatGPT to create a didactic scenario for language teaching in a museum. This is the answer. Note that if you add prompts such as “A2”, “lessons for 8-year-old children”, “task-based”, “focus on interaction”, or “scavenger hunt”, you will get more precise activities!
ChatGPT (Generative Pre-trained Transformer) is an Artificial Intelligence (AI)-based conversational LLM (Large Language Model), launched in November 2022 and developed by OpenAI (OpenAI, LLC, San Francisco, CA, USA). The name reflects its nature as a chatbot (a program able to understand and generate responses through a text-based interface) built on the generative pre-trained transformer (GPT) architecture.1 AI chatbots such as ChatGPT are sophisticated and can respond in multiple languages.2 ChatGPT is the fastest-growing application in internet history, with nearly 100 million users as of January 2023 and currently roughly 1.8 billion website visits per month.3 ChatGPT can improve the healthcare system and enhance healthcare outcomes by assisting with clinical decision support and relevant clinical guidelines.4,5 It can be valuable for streamlining workflows and refining personalized medicine in healthcare practice.6 It can also play an essential role in medical education by providing updates on new developments in different medical fields and by serving as a tool for assessing the clinical skills of medical students.7 An AI chatbot can act as a search engine that helps in writing research papers; it can serve as an intermediary in a cognition session, assisting in topic selection and reducing the time authors spend searching for articles.8 In identifying the boundaries of ChatGPT and its significant limitations, many challenges arise for research purposes.9 Inaccuracies, lack of transparency, and biases are issues that must be addressed when using AI-generated text. The unethical use of AI technology may extend to fabricating images, which constitutes scientific misconduct.10 As the international press has recently reported, ChatGPT has already been listed as a co-author on several papers.
Should co-authorship be assigned to ChatGPT if it drafts large parts of a research paper?11 The “WAME Recommendations on ChatGPT and Chatbots with Scholarly Publications” were revised following the proliferation of chatbots, their expanding use in scholarly publishing, and emerging concerns regarding the lack of authenticity of content produced with chatbots. They aim to help authors and reviewers understand how best to attribute the use of chatbots in their work, and to address the need for all journal editors to have access to manuscript-screening tools. According to WAME's principal recommendation, only humans can be authors; chatbots cannot serve as authors.12 Human authors should take full responsibility for academic work and use ChatGPT applications within acceptable standards and with transparent disclosure.13 These recommendations emphasize the importance of manuscript-screening tools to detect AI-generated text, guide editors on the use of chatbots in papers published in their journals, and assist authors and reviewers in properly attributing chatbot use in their work. Embracing the potential of ChatGPT while remaining vigilant against its pitfalls, we must collectively ensure that the synergy between human ingenuity and AI contributes positively to advancing health research. With an unwavering commitment to ethical practices and transparent communication, we embark on a journey where technology and academia converge, fostering a new era of scholarly excellence.
This is a correspondence on "Evaluation of ChatGPT and Gemini large language models for pharmacometrics with NONMEM". An additional concern about using ChatGPT and Gemini is provided.
AI tools such as ChatGPT can assist researchers in improving the research process. This paper examines whether researchers could apply ChatGPT to develop and empirically validate new research scales. The study describes a process for prompting ChatGPT to assist in developing a scale for a new construct, using the example of the perceived value of ChatGPT-supported consumer behavior. The paper reports four main empirical studies (US: N = 148; Australia: N = 317; UK: N = 108; Germany: N = 51) that were employed to validate the newly developed scale. The first study purifies the scale; the following studies confirm the adjusted factorial validity of the reduced scale. Although the empirical data imply a simplification of the initial multi-dimensional scale, the final three-dimensional operationalization is highly reliable and valid. The paper outlines the shortcomings and offers several critical notes to stimulate more research and discussion in this area.
•This paper examines whether researchers could apply ChatGPT to develop and empirically validate new research scales.
•The study describes a process for prompting ChatGPT to assist in the scale development of a new construct.
•Four empirical studies establish the prognostic validity and the construct validity of the newly developed scale.
•The paper outlines the shortcomings and several critical notes to stimulate more research and discussion in this area.
Artificial Intelligence (AI) systems such as ChatGPT can take medical examinations and counsel patients regarding medical diagnoses. We aim to quantify the accuracy of ChatGPT version 3.5 in answering commonly asked questions pertaining to genetic testing and counseling for gynecologic cancers.
Forty questions were formulated in conjunction with gynecologic oncologists, adapted from professional society guidelines, and posed to ChatGPT version 3.5, the version readily available to the public. The questions fell into two categories: genetic counseling guidelines and questions pertaining to specific genetic disorders. The answers were scored by two attending gynecologic oncologists on the following scale: 1) correct and comprehensive, 2) correct but not comprehensive, 3) some correct, some incorrect, and 4) completely incorrect. Scoring discrepancies were resolved by an additional third reviewer. The proportion of responses earning each score was calculated overall and within each question category.
ChatGPT provided correct and comprehensive answers to 33/40 (82.5%) questions, correct but not comprehensive answers to 6/40 (15%), partially incorrect answers to 1/40 (2.5%), and completely incorrect answers to 0/40 (0%). The genetic counseling category had the highest proportion of correct and comprehensive answers, with ChatGPT answering all 20/20 questions correctly and comprehensively. In the specific genetic disorders category, ChatGPT gave correct and comprehensive answers to 88.2% (15/17) of questions pertaining to hereditary breast and ovarian cancer and to 66.6% (2/3) of questions pertaining to Lynch syndrome.
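The proportion calculation described in the methods can be reproduced in a few lines of Python. This is a minimal sketch, not the authors' actual analysis code: the score tallies below simply mirror the overall totals reported above (33/6/1/0 out of 40), and all variable names are illustrative.

```python
from collections import Counter

# Hypothetical per-question scores on the study's four-point scale:
# 1 = correct and comprehensive, 2 = correct but not comprehensive,
# 3 = some correct, some incorrect, 4 = completely incorrect.
# Tallies mirror the reported overall totals (33/6/1/0 of 40 questions).
scores = [1] * 33 + [2] * 6 + [3] * 1

counts = Counter(scores)
total = len(scores)  # 40 questions overall

# Proportion of responses earning each score, as a percentage.
for score in (1, 2, 3, 4):
    n = counts.get(score, 0)
    print(f"score {score}: {n}/{total} ({100 * n / total:.1f}%)")
```

The same tallying could be repeated within each question category (genetic counseling guidelines vs. specific genetic disorders) to obtain the per-category percentages.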
ChatGPT accurately answered questions about genetic syndromes, genetic testing, and counseling in the majority of the studied questions. These data suggest this powerful tool could be utilized as a patient resource for genetic counseling questions, though more input from gynecologic oncologists would be needed before it is used to educate patients on genetic syndromes.
•ChatGPT provides comprehensive and correct responses to questions regarding genetic testing and counseling as it pertains to gynecologic oncology.
•ChatGPT can accurately provide genetic counseling, but further studies are necessary before it can be recommended as a patient resource.