Despite recent advances in using meta-learning to address the generalization challenges of graph neural networks (GNNs), their performance on argumentation mining tasks, such as argument classification, remains relatively limited. This is primarily due to the under-utilization of the pattern knowledge intrinsic to argumentation structures. To address this issue, we propose a two-stage, pattern-based meta-GNN method, in contrast to conventional pattern-free meta-GNN approaches. In the first stage, our method learns a high-level pattern representation that captures the pattern knowledge within an argumentation structure and then predicts edge types. In the second stage, it uses a meta-learning framework to train a meta-learner on the predicted edge types, enabling rapid generalization to novel argumentation graphs. Experiments on real English discussion datasets spanning diverse topics demonstrate that the proposed method substantially outperforms conventional pattern-free GNN approaches.
Artificial Intelligence (AI) has been applied to a wide range of real-world problems in recent years. However, the emergence of new AI technologies has also brought several problems, especially regarding communication efficiency, security threats, and privacy violations. Federated Learning (FL) has received widespread attention for its ability to support collaborative training of local learning models without compromising data privacy. However, recent studies have shown that FL still consumes considerable communication resources, which are vital for updating the learning models. In addition, data privacy can still be compromised when the parameters of the local learning models are shared to update the global model. To this end, we propose a new approach, Federated Optimisation (FedOpt), to promote both communication efficiency and privacy preservation in FL. To implement FedOpt, we design a novel compression algorithm, the Sparse Compression Algorithm (SCA), for efficient communication, and integrate additively homomorphic encryption with differential privacy to prevent data leakage. The proposed FedOpt thus smoothly trades off communication efficiency against privacy preservation for the learning task. The experimental results demonstrate that FedOpt outperforms state-of-the-art FL approaches. In particular, we consider three evaluation criteria: model accuracy, communication efficiency, and computation overhead. We compare FedOpt with baseline configurations and state-of-the-art approaches, i.e., Federated Averaging (FedAvg) and Paillier-encryption-based privacy-preserving deep learning (PPDL), on all three criteria. The results show that FedOpt converges within fewer training epochs and under a smaller privacy budget.
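The abstract does not specify the details of the Sparse Compression Algorithm; a common way to cut FL communication cost, and a plausible reading of the idea, is top-k sparsification of the model update: only the k largest-magnitude entries are transmitted. The sketch below illustrates that generic technique; the function names and toy update vector are illustrative assumptions, not the paper's algorithm.

```python
def sparsify_topk(update, k):
    """Keep only the k largest-magnitude entries of a model update.

    Returns (indices, values): a sparse representation that is far
    cheaper to transmit than the dense update vector."""
    ranked = sorted(range(len(update)), key=lambda i: abs(update[i]), reverse=True)
    idx = sorted(ranked[:k])
    return idx, [update[i] for i in idx]

def densify(idx, vals, n):
    """Reconstruct a dense update on the server side (dropped entries are 0)."""
    out = [0.0] * n
    for i, v in zip(idx, vals):
        out[i] = v
    return out

# Toy client update: only the two largest-magnitude entries survive.
update = [0.01, -0.9, 0.05, 0.7, -0.02, 0.3]
idx, vals = sparsify_topk(update, k=2)   # idx -> [1, 3], vals -> [-0.9, 0.7]
```

In a full FedOpt-style pipeline, the sparse (index, value) pairs, rather than this plaintext form, would then be encrypted before upload.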
Although there is no doubt from an empirical viewpoint that reflex mechanisms can contribute to tongue motor control in humans, neurophysiological evidence supporting this idea is limited. Previous failures to observe any tonic stretch reflex in the tongue had reduced the likelihood of a reflex contribution to tongue motor control. The current study presents experimental evidence of a human tongue reflex in response to a sudden stretch while holding a posture for speech. The latency was relatively long (50 ms), suggesting mediation through a cortical arc. The activation peak was greater in a speech task than in a non-speech task, while background activation levels were similar in both, and the peak amplitude in the speech task was not modulated by an additional task requiring a voluntary reaction to the perturbation. Computer simulations with a simplified linear mass-spring-damper model showed that the recorded muscle activation response is suited to generating, with the appropriate timing, the tongue movement responses observed in a previous study, once a plausible physiological delay between reflex muscle activation and the corresponding force is taken into account. Our results clearly show that reflex mechanisms contribute to tongue posture stabilization for speech production.
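The kind of simulation described, a linear mass-spring-damper driven by a delayed muscle force, can be sketched in a few lines. The equation is m·x'' + c·x' + k·x = F(t), integrated here with forward Euler; all parameter values and the delay below are illustrative assumptions, not the paper's fitted values.

```python
def simulate(m=0.1, c=1.0, k=50.0, delay=0.05, dt=0.001, t_end=0.4, f_amp=1.0):
    """Integrate m*x'' + c*x' + k*x = F(t) with forward Euler.

    F(t) is a step force that switches on after `delay` seconds, standing in
    for the physiological lag between reflex muscle activation and force."""
    x, v = 0.0, 0.0
    traj = []
    for n in range(int(t_end / dt)):
        t = n * dt
        f = f_amp if t >= delay else 0.0   # force follows activation after the delay
        a = (f - c * v - k * x) / m        # Newton's second law
        v += a * dt
        x += v * dt
        traj.append(x)
    return traj

traj = simulate()   # displacement rises after the delay and settles near f_amp/k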
Orofacial somatosensory inputs may play a role in the link between speech perception and production. Given that speech motor learning, which involves paired auditory and somatosensory inputs, results in changes to speech perceptual representations, somatosensory inputs may also be involved in learning or adaptive processes of speech perception. Here we show that repetitive pairing of somatosensory inputs and sounds, as occurs during speech production and motor learning, can also induce a change in speech perception. We examined whether the category boundary between /ε/ and /a/ shifted as a result of perceptual training with orofacial somatosensory inputs. The experiment consisted of three phases: Baseline, Training, and Aftereffect. In all phases, a vowel identification test was used to identify the perceptual boundary between /ε/ and /a/. In the Baseline and Aftereffect phases, an adaptive method based on a maximum-likelihood procedure was used to detect the category boundary with a small number of trials. In the Training phase, we used the method of constant stimuli to expose participants to stimulus variants covering the range between /ε/ and /a/ evenly. In this phase, to mimic the sensory input that accompanies speech production and learning, somatosensory stimulation was applied in the upward direction in the experimental group when the stimulus sound was presented. A control group (CTL) followed the same training procedure without somatosensory stimulation. When we compared category boundaries before and after paired auditory-somatosensory training, the boundary for participants in the experimental group reliably shifted in the direction of /ε/, indicating that the participants perceived /a/ more often than /ε/ as a consequence of training. In contrast, the CTL group showed no change. Although a limited number of participants were tested, the perceptual shift was reduced and almost eliminated one week later.
Our data suggest that repeated exposure to somatosensory inputs, in a task that simulates the sensory pairing occurring during speech production, changes the perceptual system. This supports the idea that somatosensory inputs play a role in speech perceptual adaptation, probably contributing to the formation of sound representations for speech perception.
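The maximum-likelihood boundary estimation mentioned above can be illustrated with a minimal sketch: fit a logistic psychometric function p(/a/) = 1 / (1 + exp(-(x - b)/s)) to identification responses and read the category boundary off as the 50% point b. The slope, grid, and toy data below are assumptions for illustration, not the study's procedure or stimuli.

```python
import math

def fit_boundary(stimuli, responses, slope=0.5):
    """Maximum-likelihood estimate of the category boundary b of a logistic
    psychometric function, via a coarse grid search."""
    def nll(b):
        total = 0.0
        for x, r in zip(stimuli, responses):
            p = 1.0 / (1.0 + math.exp(-(x - b) / slope))
            p = min(max(p, 1e-9), 1 - 1e-9)   # guard against log(0)
            total -= math.log(p if r else 1 - p)
        return total
    candidates = [i * 0.01 for i in range(0, 701)]
    return min(candidates, key=nll)

stimuli = [1, 2, 3, 4, 5, 6]     # steps along an /ε/-/a/ continuum (made up)
responses = [0, 0, 0, 1, 1, 1]   # 1 = identified as /a/
b = fit_boundary(stimuli, responses)   # boundary lands near 3.5
```

An adaptive procedure would additionally choose each next stimulus where it is most informative about b, which is what allows the boundary to be located in a small number of trials.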
With the rapid development of the Internet, online discussion systems, or social democratic systems, have become an important and effective vehicle for group decision-making support, since they can continuously collect opinions from the public at any time. To reach a consensus in crowd-scale deliberation, existing online discussion systems require an experienced human facilitator to navigate and guide the discussion. When a human facilitator performs the required facilitation, several issues arise, such as a heavy decision-making burden, the need for 24/7 online facilitation, and bias on social issues. To address these issues, it is necessary to explore intelligent facilitation. For this purpose, we propose a novel machine learning-based method for smart facilitation, in particular intelligent consensus decision-making support (CDMS) for crowd-scale deliberation. After presenting an overview of crowd-scale deliberation and COLLAGREE, the paper details the proposed approach, a machine learning-based framework for CDMS in crowd-scale deliberation. To validate the developed methods, offline evaluation experiments were conducted with the online discussion platform COLLAGREE. The preliminary experimental results demonstrated the feasibility and usefulness of the developed machine learning-based methods for CDMS.
A quick correction mechanism of the tongue has previously been observed experimentally in speech posture stabilization in response to a sudden tongue stretch perturbation. Given its relatively short latency (< 150 ms), the response could be driven by somatosensory feedback alone. The current study assessed this hypothesis by examining whether the response is induced in the absence of auditory feedback. We compared the response under two auditory conditions: normal versus masked auditory feedback. Eleven participants were tested. They were asked to whisper the vowel /e/ for a few seconds. The tongue was stretched horizontally with step patterns of force (1 N for 1 s) using a robotic device. The articulatory positions were recorded using electromagnetic articulography simultaneously with the produced sound. The tongue perturbation was applied randomly and unpredictably in one-fifth of trials. The two auditory conditions were tested in random order. A quick compensatory response was induced in a similar way to the previous study. We found that the amplitudes of the compensatory responses were not significantly different between the two auditory conditions, either for the tongue displacement or for the produced sounds. These results suggest that the observed quick correction mechanism is primarily based on somatosensory feedback. This correction mechanism could be learned in such a way as to maintain the auditory goal on the sole basis of somatosensory feedback.
Musicians tend to have better auditory and motor performance than non-musicians because of their extensive musical experience. In a previous study, we established that loudness discrimination acuity is enhanced when the sound is produced by a precise force generation task. In this study, we compared this enhancement effect between experienced pianists and non-musicians. Without the force generation task, loudness discrimination acuity was better in pianists than in non-musicians. However, the force generation task enhanced loudness discrimination acuity similarly in both pianists and non-musicians. Reaction time was also reduced with the force control task, but only in the non-musician group. These results suggest that the enhancement of loudness discrimination acuity with a precise force generation task is independent of musical experience and is therefore a fundamental function of auditory-motor interaction.
Despite the increasing use of conversational artificial intelligence (AI) in online discussion environments, few studies explore the application of AI as a facilitator in forming problem-solving debates and influencing opinions in cross-venue scenarios, particularly in diverse and war-ravaged countries. This study investigates the impact of AI on enhancing participant engagement and collaborative problem-solving in online-mediated discussion environments, especially in diverse and heterogeneous discussion settings, such as five cities in Afghanistan. We assess the extent to which AI participation in online conversations succeeds by examining the depth of discussions and participants' contributions, comparing discussions facilitated by AI with those not facilitated by AI across different venues. The results are discussed with respect to forming and changing opinions with and without AI-mediated communication. The findings indicate that the number of opinions generated in AI-facilitated discussions differs significantly from that in discussions without AI support. Additionally, statistical analyses reveal quantitative disparities in online discourse sentiment when conversational AI is present compared to when it is absent. These findings contribute to a better understanding of the role of AI-mediated discussions and offer several practical and social implications, paving the way for future developments and improvements.
This paper presents the design and implementation of an automated multi-phase facilitation agent based on a large language model (LLM) to realize inclusive facilitation and efficient use of the LLM in facilitating realistic discussions. Large-scale discussion support systems have been widely studied and implemented, since they enable many people to discuss remotely, around the clock. Automated facilitation artificial intelligence (AI) agents have also been realized, since they can efficiently facilitate large-scale discussions. For example, D-Agree is a large-scale discussion support system in which an automated facilitation AI agent facilitates discussion among people. The current automated facilitation agent was designed following the structure of the issue-based information system (IBIS), and this IBIS-based agent has been shown to perform well. However, several problems with the IBIS-based agent need to be addressed. In this paper, we focus on the following three: 1) The IBIS-based agent was designed only to prompt other participants' posts by replying to existing posts, without considering the different behaviours of participants with diverse characteristics, so that the discussion is sometimes insufficient. 2) The facilitation messages generated by the IBIS-based agent were not natural enough, so participants were not sufficiently prompted and did not follow the intended flow of discussion. 3) Since the IBIS-based agent is not combined with an LLM, the control of the LLM must be designed. To solve these problems, this paper proposes a phase-based facilitation framework.
Specifically, we propose two designs: one for a multi-phase facilitation agent built on the framework to address problem 1), and one for the combination with an LLM to address problems 2) and 3). In particular, the language model GPT-3.5 is used for this combination via the corresponding OpenAI APIs. Furthermore, we demonstrate the improvement achieved by our facilitation agent framework through evaluations and a case study. We also discuss the differences between our framework and LangChain, which provides generic features for utilizing LLMs.
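The multi-phase control idea can be sketched as a small state machine: the agent tracks the current discussion phase and composes a phase-specific instruction that would be sent to an LLM such as GPT-3.5. This is not D-Agree's actual implementation; the phase names, thresholds, and prompt texts are assumptions for illustration.

```python
# Hypothetical IBIS-like phase sequence and per-phase facilitation prompts.
PHASES = ["issue", "idea", "pros_cons", "summary"]

PROMPTS = {
    "issue": "Ask participants to clarify the issue under discussion.",
    "idea": "Encourage participants to propose concrete ideas.",
    "pros_cons": "Invite arguments for and against the posted ideas.",
    "summary": "Summarise the discussion and the main points of agreement.",
}

def next_phase(phase, post_count, threshold=5):
    """Advance to the next phase once enough posts have accumulated."""
    i = PHASES.index(phase)
    if post_count >= threshold and i + 1 < len(PHASES):
        return PHASES[i + 1]
    return phase

def build_prompt(phase, recent_posts):
    """Compose the instruction an LLM call would receive for this phase."""
    context = "\n".join(recent_posts[-3:])   # keep the prompt short
    return f"{PROMPTS[phase]}\nRecent posts:\n{context}"

phase = next_phase("issue", post_count=6)    # enough posts: move to "idea"
prompt = build_prompt(phase, ["We should improve the park.", "Why?"])
```

Separating phase control (plain code) from message generation (the LLM call) is one way to keep the LLM's output on-topic while still producing more natural facilitation messages than templates.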
Intelligent transportation systems encompass a series of technologies and applications that exchange information to improve road traffic and avoid accidents. According to statistics, studies argue that human error causes most road accidents worldwide. For this reason, it is essential to model driver behavior to improve road safety. This paper presents a Fuzzy Rule-Based System for classifying drivers into different profiles according to their behavior. The system's knowledge base includes an ontology and a set of driving rules. The ontology models the main entities related to driver behavior and their relationships with the traffic environment. The driving rules help the inference system make decisions in different situations according to traffic regulations. The classification system has been integrated into an intelligent transportation architecture. Considering the user's driving style, the driving assistance system sends recommendations, such as adjusting speed or choosing alternative routes, allowing drivers to prevent or mitigate negative transportation events such as road crashes or traffic jams. We carried out a set of experiments to test the expressiveness of the ontology and the effectiveness of the overall classification system in different simulated traffic situations. The results show that the ontology is expressive enough to model the knowledge of the proposed traffic scenarios, with an F1 score of 0.9. In addition, the system correctly classifies drivers' behavior, with an F1 score of 0.84, outperforming Random Forest and Naive Bayes classifiers. In the simulation experiments, most drivers who were recommended an alternative route experienced an average time gain of 66.4%, showing the utility of the proposal.
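A fuzzy rule-based classification of driver profiles can be illustrated with a toy example: fuzzify two behavioral variables with triangular membership functions, fire Mamdani-style rules with min/max operators, and pick the profile with the strongest activation. The paper's ontology and rule base are not reproduced here; the variables, membership ranges, and rules below are illustrative assumptions.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify_driver(speed_over_limit, harsh_brakes_per_km):
    # Fuzzify the inputs (made-up ranges).
    speeding = tri(speed_over_limit, 0, 20, 40)      # km/h over the limit
    braking = tri(harsh_brakes_per_km, 0, 1.0, 2.0)
    # Mamdani-style rules:
    #   IF speeding AND harsh braking THEN aggressive
    #   IF NOT speeding AND NOT harsh braking THEN calm
    aggressive = min(speeding, braking)
    calm = min(1 - speeding, 1 - braking)
    return "aggressive" if aggressive > calm else "calm"

label = classify_driver(speed_over_limit=25, harsh_brakes_per_km=1.2)
```

In the full system, the ontology would supply the entities feeding such rules (driver, vehicle, road context), and the resulting profile would drive the recommendations, such as suggesting an alternative route.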