The current study aims to establish a connection between students' behavioral concerns, namely stress and anxiety, related to the completion of academic tasks, and their integration of technology, using the Technology Acceptance Model (TAM) through the use of ChatGPT via a ubiquitous learning (UL) procedure. To achieve this objective, data were collected from 156 management science students engaged in their final-year research projects or internship reports at selected universities in Pakistan. The data were analyzed through Structural Equation Modeling (SEM) using SmartPLS software. The findings reveal a significant relationship: students' stress contributes to the emergence of anxiety, which in turn motivates the adoption of technology-assisted solutions, specifically ChatGPT, to complete assigned tasks efficiently within deadlines, working from any device and any location. Consequently, the perceived ease of use and perceived usefulness of ChatGPT's AI-generated text shape students' favorable attitudes toward ChatGPT and also help reduce their stress levels. Furthermore, the study confirms that the development of a positive attitude drives students to engage with ChatGPT through the ubiquitous learning (UL) procedure, ultimately increasing actual use of ChatGPT. This pattern, in turn, contributes to stress and anxiety reduction among management science students. The study's outcomes corroborate the TAM, which aligns with the social exchange process, demonstrating its applicability within educational settings in the management sciences and its potential to enhance researchers' learning experiences.
•The study applies the Technology Acceptance Model (TAM) to understand ChatGPT use among students of universities in Pakistan.
•Psychological factors (stress and anxiety) and technology use are investigated among university students.
•Behavioral factors, ubiquitous learning, ChatGPT, and TAM are aligned to understand human-computer interaction.
•Based on detailed analysis, a novel Technology and Social Acceptance Model (TSAM) is proposed.
The non-ferrous metal industry faces several challenges, including production efficiency, fragmented manufacturing information, and human health problems, which highlights the importance of implementing autonomous intelligent manufacturing systems (AIMS). Recently, foundation models such as GPT-4 have garnered attention for their exceptional capabilities and proficiency across diverse domains and tasks, facilitating the realization of AIMS. However, existing foundation models can only address basic general-purpose tasks and are difficult to apply to industrial settings. In this paper, we propose a data- and knowledge-driven AIMS with an industrial generative pretrained Transformer (Industrial-GPT) for intelligent factories. The paradigms and architecture of autonomous intelligent factories are first defined. Then, we explore the mechanism combining knowledge graphs, digital twins, and Industrial-GPT, including multi-level autonomous perception, cross-layer and cross-domain cognition, and event-driven collaborative decision-making. Finally, a detailed case study, based on cooperation with a zinc smelting intelligent factory, achieves networked collaborative manufacturing and explores the theory and realization mechanism of AIMS on a small scale. We present experimental analyses, evaluation mechanisms, and platform applications of AIMS at the workshop level. We believe this will help realize larger-scale AIMS in the future.
•We first define the autonomous intelligent manufacturing system (AIMS) and propose its architecture and basic paradigms. This paper contributes to the development of AIMS in the non-ferrous industry.
•Model as a service (MaaS) supports the collaborative empowerment of small and foundation models in vertical industries. Based on this, we introduce Industrial-GPT into specific manufacturing scenarios.
•A detailed case study, based on cooperation with a zinc smelting intelligent factory, achieves networked collaborative manufacturing and explores the theory and realization mechanism of AIMS on a small scale.
GPT for medical entity recognition in Spanish García-Barragán, Álvaro; González Calatayud, Alberto; Solarte-Pabón, Oswaldo ...
Multimedia Tools and Applications
04/2024
Journal Article
Peer reviewed
Open access
Abstract In recent years, there has been a remarkable surge in the development of Natural Language Processing (NLP) models, particularly in the realm of Named Entity Recognition (NER). Models such as BERT have demonstrated exceptional performance, leveraging annotated corpora for accurate entity identification. However, the question arises: can newer Large Language Models (LLMs) like GPT be utilized without extensive annotation, thereby enabling direct entity extraction? In this study, we explore this issue, comparing the efficacy of fine-tuning techniques with prompting methods to elucidate the potential of GPT for identifying medical entities in Spanish electronic health records (EHRs). The study used a dataset of Spanish EHRs related to breast cancer and implemented both a traditional NER method using BERT and a contemporary approach that combines few-shot learning with the integration of external knowledge, driven by GPT, to structure the data. The analysis involved a comprehensive pipeline incorporating both methods. Key performance metrics, such as precision, recall, and F-score, were used to evaluate the effectiveness of each method. This comparative approach aimed to highlight the strengths and limitations of each method for structuring Spanish EHRs efficiently and accurately. The comparative analysis demonstrates that the traditional BERT-based NER method and the few-shot LLM-driven approach, augmented with external knowledge, achieve comparable results on precision, recall, and F-score when applied to Spanish EHRs. Contrary to expectations, the LLM-driven approach, which requires minimal data annotation, performs on par with BERT in discerning complex medical terminologies and contextual nuances within the EHRs.
The results of this study highlight a notable advance in NER for Spanish EHRs, with the few-shot LLM-driven approach, enhanced by external knowledge, slightly edging out the traditional BERT-based method in overall effectiveness. GPT's superiority in F-score and its minimal reliance on extensive data annotation underscore its potential in medical data processing.
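The comparison above rests on entity-level precision, recall, and F-score. A minimal sketch of how such metrics can be computed over predicted versus gold entity sets follows; the example entities are illustrative and not drawn from the study's dataset:

```python
# Entity-level precision/recall/F1 for NER evaluation.
# An entity is a (text, label) pair; the gold and predicted sets
# below are hypothetical examples, not the study's data.

def ner_metrics(gold, predicted):
    """Compute precision, recall, and F1 over sets of (text, label) entities."""
    gold, predicted = set(gold), set(predicted)
    tp = len(gold & predicted)  # entities extracted exactly right
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = {("carcinoma ductal", "DISEASE"), ("tamoxifeno", "DRUG"),
        ("mama izquierda", "ANATOMY")}
predicted = {("carcinoma ductal", "DISEASE"), ("tamoxifeno", "DRUG"),
             ("biopsia", "PROCEDURE")}

p, r, f1 = ner_metrics(gold, predicted)
print(f"precision={p:.2f} recall={r:.2f} f1={f1:.2f}")
```

Real evaluations often also score partial span overlaps; exact-match sets keep the sketch simple.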
The rapid growth in computational power, sensor technology, and wearable devices has provided a solid foundation for all aspects of cardiac arrhythmia care. Artificial intelligence (AI) has been instrumental in bringing about significant changes in the prevention, risk assessment, diagnosis, and treatment of arrhythmia. This review examines the current state of AI in the diagnosis and treatment of atrial fibrillation, supraventricular arrhythmia, ventricular arrhythmia, hereditary channelopathies, and cardiac pacing. ChatGPT, which has gained attention recently, is also addressed, along with its potential applications in the field of arrhythmia. Additionally, the accuracy of arrhythmia diagnosis can be improved by using AI to identify electrode misplacement or erroneous swapping of electrode positions. Remote monitoring has expanded greatly with the emergence of contactless monitoring technology, as wearable devices continue to develop and flourish. Parallel advances in AI computing power, ChatGPT, the availability of large datasets, and more have greatly expanded applications in arrhythmia diagnosis, risk assessment, and treatment. Future directions include more precise algorithms based on big data, personalized risk assessment, telemedicine and mobile health, smart hardware and wearables, and the exploration of rare or complex types of arrhythmia.
Integrated pest management is essential for controlling plant diseases that reduce crop yields. In the event of an outbreak, rapid diagnosis is crucial for identifying the cause and minimizing damage. Diagnosis methods range from indirect visual observation, which can be subjective and inaccurate, to machine learning and deep learning predictions, which may suffer from biased data. Direct molecular-based methods, while accurate, are complex and time-consuming. The development of large multimodal models like GPT-4, however, combines image recognition with natural language processing to provide more accurate diagnostic information. This study introduces a GPT-4-based system for diagnosing plant diseases that utilizes a detailed knowledge base of 1,420 host plants, 2,462 pathogens, and 37,467 pesticide instances from the official plant disease and pesticide registries of Korea. The AI plant doctor offers interactive advice on diagnosis, control methods, and pesticide use for diseases in Korea and is accessible at https://pdoc.scnu.ac.kr/.
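The value of coupling a multimodal model with a structured registry is that free-text diagnoses can be grounded in registered control options. A minimal sketch of that lookup step, using a toy in-memory registry whose entries and field names are hypothetical and do not reflect the actual Korean registry schema:

```python
# Hypothetical sketch: after a multimodal model proposes a pathogen,
# look up registered control options in a small in-memory "registry".
# All entries and field names here are illustrative assumptions.

REGISTRY = {
    "Colletotrichum gloeosporioides": {
        "hosts": ["apple", "pepper"],
        "disease": "anthracnose",
        "pesticides": ["azoxystrobin", "mancozeb"],  # example actives
    },
}

def control_advice(pathogen: str, host: str) -> str:
    """Return registry-backed advice for a (pathogen, host) pair."""
    entry = REGISTRY.get(pathogen)
    if entry is None or host not in entry["hosts"]:
        return f"No registered control found for {pathogen} on {host}."
    actives = ", ".join(entry["pesticides"])
    return (f"{entry['disease'].title()} ({pathogen}) on {host}: "
            f"registered actives include {actives}.")

print(control_advice("Colletotrichum gloeosporioides", "pepper"))
```

In a deployed system this lookup would query the official registries rather than a dictionary, and the model's diagnosis text would supply the `pathogen` and `host` arguments.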
Large-scale artificial intelligence (AI) models such as ChatGPT have the potential to improve performance on many benchmarks and real-world tasks. However, it is difficult to develop and maintain these models because of their complexity and resource requirements. As a result, they remain inaccessible to healthcare industries and clinicians. This situation may soon change because of advancements in graphics processing unit (GPU) programming and parallel computing. More importantly, leveraging existing large-scale AIs such as GPT-4 and Med-PaLM and integrating them into multiagent models (e.g., Visual-ChatGPT) will facilitate real-world implementations. This review aims to raise awareness of the potential applications of these models in healthcare. We provide a general overview of several advanced large-scale AI models, including language models, vision-language models, graph learning models, language-conditioned multiagent models, and multimodal embodied models. We discuss their potential medical applications in addition to the challenges and future directions. Importantly, we stress the need to align these models with human values and goals, such as by using reinforcement learning from human feedback, to ensure that they provide accurate and personalized insights that support human decision-making and improve healthcare outcomes.
This review provides an overview of large‐scale AI models, including language models (e.g., ChatGPT), vision‐language models, and language‐conditioned multiagent models, and discusses their potential applications in medicine, as well as their limitations and future trends. We also propose how large‐scale AI models can be integrated into various scenarios of clinical applications.
Generative Artificial Intelligence is permeating every sphere of our daily lives: work, the economy, culture, education, and politics, and the key lies in harnessing its potential to our advantage. This phenomenon poses new and complex challenges for higher education, especially for the teaching role, given that universities are training the professionals of the future. The chatbot from the company OpenAI, known worldwide as ChatGPT, still holds the record of 3.5 million users on its first day of "life." One year after its release, the young people who pass through the university every day are using it; we are keen to understand how and for what. This is a moment in which the university must orient itself toward the production of sovereign, critical knowledge mediated by AI.
What potential lies at the intersection of education and work in light of generative AI, and why is its development and promotion essential? Which competencies and skills are becoming indispensable? And above all, what kind of students are we training? These questions arise from the field of Communication in Argentina, taking as a case study ongoing research with university students of Communication and related degree programs.
In contemporary discourse, the pervasive influence of Generative Pre-trained Transformer (GPT) and Large Language Model (LLM) technologies is evident, showcasing diverse applications. GPT-based technologies, transcending mere summarization, exhibit adeptness in discerning critical information from extensive textual corpora. Through prudent extraction of semantically meaningful content from textual representations, GPT technologies enable automated feature extraction, a departure from fallible manual extraction methodologies. This study posits an innovative paradigm for extracting multidimensional cyber threat-related features from textual depictions of cyber events, leveraging the capabilities of GPT. These extracted features serve as inputs for artificial intelligence (AI) and deep learning algorithms, including Convolutional Neural Network (CNN), decomposition analysis, and Natural Language Processing (NLP)-based modalities tailored for non-technical cyber strategists. The proposed framework empowers cyber strategists and analysts to pose inquiries about historical cyber incidents in plain English, with the NLP-based interaction facet of the system offering cogent AI-driven insights in natural language. Furthermore, salient insights, often elusive in dynamic visualizations, are succinctly presented in plain language. Empirical validation of the entire system proceeded through the autonomous acquisition of semantically enriched contextual information concerning 214 major cyber incidents spanning 2016 to 2023. GPT-based responses on Actor Type, Target, Attack Source (i.e., country originating the attack), Attack Destination (i.e., targeted country), Attack Level, Attack Type, and Attack Timeline underwent critical AI-driven analysis. This seven-dimensional information gleaned from the 214 incidents yielded a corpus of 1,498 informative outputs, attaining a precision of 96%, a recall of 98%, and an F1-score of 97%.
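The reported aggregate metrics are internally consistent: the F1-score is the harmonic mean of precision and recall, which can be checked directly from the reported figures:

```python
# Verify that the reported F1 (97%) is the harmonic mean of the
# reported precision (96%) and recall (98%).
precision, recall = 0.96, 0.98
f1 = 2 * precision * recall / (precision + recall)
print(f"F1 = {f1:.4f}")  # about 0.9699, i.e. 97% after rounding
```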