In an attempt to better understand recognition memory, we examine how three approaches (dual processing, signal detection, and global matching) have addressed the probe, the returned signal, and the decision in four recognition paradigms: single-item recognition (including the remember/know paradigm), recognition in relational context, associative recognition, and source monitoring. We contrast two tasks: identifying the old words in test pairs (the relational context paradigm), and first identifying the intact test pairs and then identifying the old words (adding associative recognition to the relational context paradigm). With regard to the double-miss rate (the probability of recognizing neither item in intact and rearranged pairs) and the effect of the oldness of the other member of the test pair, this contrast suggests that the retrieval of associative information in the relational context paradigm is unintentional, unlike the retrieval of associative information in associative recognition. It also seems possible that the information spontaneously retrieved in single-item recognition, possibly including the remember/know paradigm, is retrieved unintentionally, unlike the retrieval of information in source monitoring. Probable differences between intentional and unintentional retrieval, together with the pattern of effects for the double-miss rate and for the other member of the test pair, are used to evaluate the three approaches. We conclude that all three approaches have something valid to say about recognition, but none is equally applicable across all four paradigms.
In the following, without loss of generality, we assume the X_i have already been log-transformed. Since the spacings are i.i.d. exponential and X_{k+1} is known, the distribution of the maximum is easily calculated. A constraint enforcing a minimum separation of 6 seconds between two successive events was implemented. Since there is no strongly dominant frequency, there were no false positives at this threshold for any of the 10 paradigms.
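The two computational steps mentioned above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the rate parameter, sample size, and function names are assumptions. For n i.i.d. Exp(lam) spacings, the CDF of the maximum is simply the n-th power of the exponential CDF, and the 6-second rule is a greedy filter over sorted event times.

```python
import math

def max_spacing_cdf(x, n, lam):
    """P(max of n i.i.d. Exp(lam) spacings <= x) = (1 - exp(-lam*x))**n."""
    if x <= 0:
        return 0.0
    return (1.0 - math.exp(-lam * x)) ** n

def enforce_min_separation(event_times, min_sep=6.0):
    """Keep only events at least min_sep seconds after the last kept event."""
    kept = []
    for t in sorted(event_times):
        if not kept or t - kept[-1] >= min_sep:
            kept.append(t)
    return kept
```

With `min_sep=6.0`, an event at t=3 following one at t=0 would be discarded, while events at t=7 and t=20 would be retained.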
We summarize several methods we have used to create software and processes for automated content creation for augmented reality, virtual reality, and other 3D media. Our processes involve machine-learning semantic segmentation, computer-vision geometry recognition for automated texture mapping, photogrammetry (3D reconstruction from 2D images), videogrammetry (reconstruction from video content), and more. Each software example emphasizes a practical use in industry, and many are associated with awards.
Distance measures are core building blocks in time-series analysis and the subject of active research for decades. Unfortunately, the most detailed experimental study in this area is outdated (over a decade old) and, naturally, does not reflect recent progress. Importantly, this study (i) omitted multiple distance measures, including a classic measure in the time-series literature; (ii) considered only a single time-series normalization method; and (iii) reported only raw classification error rates without statistically validating the findings, resulting in or fueling four misconceptions in the time-series literature. Motivated by the aforementioned drawbacks and our curiosity to shed some light on these misconceptions, we comprehensively evaluate 71 time-series distance measures. Specifically, our study includes (i) 8 normalization methods; (ii) 52 lock-step measures; (iii) 4 sliding measures; (iv) 7 elastic measures; (v) 4 kernel functions; and (vi) 4 embedding measures. We extensively evaluate these measures across 128 time-series datasets using rigorous statistical analysis. Our findings debunk four long-standing misconceptions that significantly alter the landscape of what is known about existing distance measures. With the new foundations in place, we discuss open challenges and promising directions.
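To make the lock-step vs. elastic distinction concrete, here is a minimal sketch: Euclidean distance as a lock-step measure (comparing points at the same index) and a basic dynamic-programming dynamic time warping (DTW) as an elastic measure (allowing local stretching of the time axis). Function names and the toy series are illustrative, not from the study.

```python
def euclidean(a, b):
    """Lock-step: compares a[i] with b[i]; assumes equal-length series."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def dtw(a, b):
    """Elastic: DP alignment that can match one point to several neighbors."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

For a series and a time-shifted copy of it, DTW can align the shifted peaks at zero cost, whereas the lock-step Euclidean distance penalizes every misaligned index — precisely the behavior that separates these two classes of measures.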
In this article, we present unselfconscious interaction, a conceptual construct that describes a form of interaction with computational artifacts animated by incremental intersections that lead to improvements in the relationships among artifacts, environments, and people. We draw on Christopher Alexander's notions of goodness of fit and unselfconscious culture, and use Erik Stolterman and Mikael Wiberg's concept-driven interaction research to analyze three interaction design concept artifacts and develop the construct of unselfconscious interaction for human–computer interaction. The resulting construct comprises the motivation of goodness of fit, supported by two design qualities we name open-endedness and lived-with. We describe tensions within the construct and the notion of purposeful purposelessness in design, and discuss the features that derive from Alexander's unselfconscious culture and should be considered when designing for goodness of fit: resources, adaptation, ensembles, time, and anonymity. Our main contribution in this article lies in the articulation of the construct of unselfconscious interaction.
Although distance learning presents a number of interesting educational advantages compared to in-person instruction, it is not without its downsides. We first assess the educational challenges presented by distance learning as a whole and identify four main challenges relative to in-person instruction: the lack of social interaction, reduced student engagement and focus, reduced comprehension and information retention, and the lack of flexible and customizable instructor resources. After assessing each of these challenges in depth, we examine how AR/VR technologies might address each challenge, along with their current shortcomings, and finally outline the further research required to fully understand the potential of AR/VR technologies as they apply to distance learning.
Paradigms are fundamental to research, and even more so when they are ignored, because they are then taken for granted. With the aim of gaining perspectives on the meanings and implications of paradigms in the field of communication, within a line of meta-research on communication with a historical perspective, oriented toward identifying the intellectual scaffolding that has defined and continues to define communication research, we conducted a qualitative content analysis of three leading publications specializing in meta-research on communication: the two Journal of Communication volumes "Ferment in the Field" (1983) and "The Future of the Field" (1983), and Volume I of Rethinking Communication, "Paradigm Issues" (1989). From the analysis of these volumes, we derived perspectives on whether or not paradigms exist in the field of communication.
Paradigm Shift in Natural Language Processing
Sun, Tian-Xiang; Liu, Xiang-Yang; Qiu, Xi-Peng
International Journal of Automation and Computing, 06/2022, Volume 19, Issue 3
Journal article; peer reviewed; open access
In the era of deep learning, modeling for most natural language processing (NLP) tasks has converged into several mainstream paradigms. For example, we usually adopt the sequence labeling paradigm to solve a bundle of tasks such as POS-tagging, named entity recognition (NER), and chunking, and adopt the classification paradigm to solve tasks like sentiment analysis. With the rapid progress of pre-trained language models, recent years have witnessed a rising trend of paradigm shift, which is solving one NLP task in a new paradigm by reformulating the task. The paradigm shift has achieved great success on many tasks and is becoming a promising way to improve model performance. Moreover, some of these paradigms have shown great potential to unify a large number of NLP tasks, making it possible to build a single model to handle diverse tasks. In this paper, we review this phenomenon of paradigm shift in recent years, highlighting several paradigms that have the potential to solve different NLP tasks.
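As a minimal illustration of what "reformulating the task" means, the sketch below expresses one NER example first in the sequence-labeling paradigm (one BIO tag per token) and then as a text-to-text problem (a prompt and a generated string). The tag scheme, prompt wording, and target format are assumptions for illustration, not taken from the paper.

```python
# One NER example, expressed in two paradigms.
tokens = ["Alice", "visited", "Paris"]

# Sequence-labeling paradigm: the model predicts one BIO tag per token.
bio_tags = ["B-PER", "O", "B-LOC"]
labeled = list(zip(tokens, bio_tags))

# Text-to-text reformulation: the model reads a prompt and generates
# the entities as free-form text (format here is hypothetical).
prompt = "Extract entities: " + " ".join(tokens)
target = "Alice [PER]; Paris [LOC]"
```

The underlying task is identical; only the input/output interface changes, which is what lets a single generative model absorb tasks that previously required task-specific output layers.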
People who design, use, and are affected by autonomous artificially intelligent agents want to be able to trust such agents—that is, to know that these agents will perform correctly, to understand ...the reasoning behind their actions, and to know how to use them appropriately. Many techniques have been devised to assess and influence human trust in artificially intelligent agents. However, these approaches are typically ad hoc and have not been formally related to each other or to formal trust models. This article presents a survey of algorithmic assurances, i.e., programmed components of agent operation that are expressly designed to calibrate user trust in artificially intelligent agents. Algorithmic assurances are first formally defined and classified from the perspective of formally modeled human-artificially intelligent agent trust relationships. Building on these definitions, a synthesis of research across communities such as machine learning, human-computer interaction, robotics, e-commerce, and others reveals that assurance algorithms naturally fall along a spectrum in terms of their impact on an agent’s core functionality, with seven notable classes ranging from integral assurances (which impact an agent’s core functionality) to supplemental assurances (which have no direct effect on agent performance). Common approaches within each of these classes are identified and discussed; benefits and drawbacks of different approaches are also investigated.