  • Effects of Stuttering and S...
    Thorpe, Morgan C

    01/2023
    Dissertation

    Previous research has shown that speakers have superior memory for past referents within a conversation compared to listeners, because they invest more cognitive effort in language planning and execution than listeners do (i.e., the speaking benefit; Yoon et al., 2016, 2021). This phenomenon is consistent with the generation effect in the memory literature, which suggests that memory is enhanced when items are actively generated rather than passively received. Less explored is the nature of the speaking benefit when language production itself is challenging, as it may be for adults who stutter (AWS). Many people who stutter can anticipate when they are about to stutter and may try to avoid the upcoming stutter by, for example, switching words. This may require increased attention and cognitive resources during language production. This hypothesis needs empirical investigation, which was the purpose of this study.

    The goal of our research was to understand the cognitive mechanisms that support language production in AWS. We aimed to examine how the cognitive effort of language production among AWS affects memory representations of past referents relative to adults who do not stutter (AWNS). Thirty-two AWS and 64 AWNS participated in the study. The AWNS were further divided into a control group (N = 32) and an attention-divided group (N = 32). The attention-divided condition was designed to simulate the divided attentional demand AWS often experience in language production due to sound or word avoidance. There were thus three groups: 1) AWS, 2) AWNS, and 3) AWNS with an attention-divided task (AWNS-AD). Participants completed a referential communication task in which they and their partner (i.e., the experimenter) collaboratively carried out a task-based conversation. Participants' role (speaking vs. listening) and the discourse context (contrast vs. non-contrast) were manipulated in the task.
    During one block of trials, participants described a target picture among four pictures on the screen to the experimenter (i.e., speaking); during another block, they identified the picture that the experimenter described for them (i.e., listening).

    Our primary interest was the interplay between the referential expressions produced in the communication task and accuracy on the subsequent memory test. We hypothesized that if AWS required more cognitive resources to avoid a moment of stuttering during language production, they would show a decreased speaking benefit in memory compared to AWNS. Because sound avoidance was assumed in both the AWS and AWNS-AD groups, we hypothesized that memory performance would be similar between AWS and AWNS-AD.

    In the memory test, we analyzed accuracy for target items (i.e., the items described during the communication task). A significant main effect of Role (speaker vs. listener) indicated that speakers have superior memory of past referents compared to listeners, consistent with previous findings (Yoon et al., 2016, 2021). A significant interaction between Group (AWNS-AD vs. AWS) and Role was driven by a significant speaking benefit in the AWNS-AD group but not in the AWS group: the AWNS-AD group showed significantly higher memory accuracy for past referents as speakers than as listeners, whereas AWS showed no difference between the speaker and listener roles. In other words, AWS' memory was equally accurate when speaking and listening.

    In conclusion, all participants in our study were sensitive to the local discourse context, using more modifiers when a related image was present than when it was absent. Interestingly, AWS produced more modifiers overall, regardless of local context, than participants in the other two groups.
    Further, in the memory test, participants in both the AWNS and AWNS-AD groups showed a speaking benefit: they remembered past referents better when speaking than when listening. However, we observed a novel effect in that AWS did not benefit from speaking in their memory. This result suggests that AWS may invest more cognitive effort in both language production and comprehension compared to AWNS. Given that the memory performance of AWS was better than that of AWNS-AD, it is possible that AWS' avoidance of specific difficult sounds or words does not require additional cognitive resources during language production. These preliminary findings lay the foundation for future individual-differences research examining how each individual's strategy to avoid stuttering may interact with their language production and memory.