Music perception depends on internal psychological models derived through exposure to a musical culture. It is hypothesized that this musical enculturation depends on two cognitive processes: (1) statistical learning, in which listeners acquire internal cognitive models of statistical regularities present in the music to which they are exposed; and (2) probabilistic prediction based on these learned models that enables listeners to organize and process their mental representations of music. To corroborate these hypotheses, I review research that uses a computational model of probabilistic prediction based on statistical learning (the information dynamics of music (IDyOM) model) to simulate data from empirical studies of human listeners. The results show that a broad range of psychological processes involved in music perception—expectation, emotion, memory, similarity, segmentation, and meter—can be understood in terms of a single, underlying process of probabilistic prediction using learned statistical models. Furthermore, IDyOM simulations of listeners from different musical cultures demonstrate that statistical learning can plausibly predict causal effects of differential cultural exposure to musical styles, providing a quantitative model of cultural distance. Understanding the neural basis of musical enculturation will benefit from close coordination between empirical neuroimaging and computational modeling of underlying mechanisms, as outlined here.
It is hypothesized that this musical enculturation depends on two cognitive processes: statistical learning and probabilistic prediction. Here, I review research using the information dynamics of music model of probabilistic prediction based on statistical learning to suggest a single process underlying a broad range of psychological processes involved in music perception.
Listening to music often evokes intense emotions 1, 2. Recent research suggests that musical pleasure comes from positive reward prediction errors, which arise when what is heard proves to be better than expected 3. Central to this view is the engagement of the nucleus accumbens—a brain region that processes reward expectations—by pleasurable music and surprising musical events 4–8. However, expectancy violations along multiple musical dimensions (e.g., harmony and melody) have failed to implicate the nucleus accumbens 9–11, and it is unknown how music reward value is assigned 12. Whether changes in musical expectancy elicit pleasure has thus remained elusive 11. Here, we demonstrate that pleasure varies nonlinearly as a function of the listener’s uncertainty when anticipating a musical event, and the surprise it evokes when it deviates from expectations. Taking Western tonal harmony as a model of musical syntax, we used a machine-learning model 13 to mathematically quantify the uncertainty and surprise of 80,000 chords in US Billboard pop songs. Behaviorally, we found that chords elicited high pleasure ratings when they deviated substantially from what the listener had expected (low uncertainty, high surprise) or, conversely, when they conformed to expectations in an uninformative context (high uncertainty, low surprise). Neurally, we found using fMRI that activity in the amygdala, hippocampus, and auditory cortex reflected this interaction, while the nucleus accumbens only reflected uncertainty. These findings challenge current neurocognitive models of music-evoked pleasure and highlight the synergistic interplay between prospective and retrospective states of expectation in the musical experience.
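The two quantities at the heart of this account can be illustrated with a minimal Python sketch (my illustration, not the authors' machine-learning model): given a predictive distribution over candidate next chords, uncertainty before the event is the Shannon entropy of that distribution, and surprise once a chord sounds is its information content, -log2 of its probability. The chord labels and probabilities below are hypothetical.

```python
import math

def entropy(dist):
    """Shannon entropy (bits) of a predictive distribution:
    the listener's uncertainty before the event occurs."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def surprise(dist, event):
    """Information content (bits) of the event that actually occurred:
    -log2 P(event), high when the event was improbable."""
    return -math.log2(dist[event])

# Hypothetical predictive distribution over three candidate chords
dist = {"I": 0.7, "IV": 0.2, "bVI": 0.1}
print(round(entropy(dist), 3))        # → 1.157 (fairly low uncertainty)
print(round(surprise(dist, "bVI"), 3))  # → 3.322 (an unexpected chord)
```

The "low uncertainty, high surprise" condition in the abstract corresponds to a peaked distribution like this one followed by an improbable outcome; "high uncertainty, low surprise" would be a near-uniform distribution followed by any of its roughly equiprobable outcomes.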
•Musical pleasure depends on prospective and retrospective states of expectation
•A machine-learning model quantified the uncertainty and surprise of pop song chords
•Chords with low uncertainty and high surprise, and vice versa, evoked high pleasure
•Joint effects of uncertainty and surprise found in the amygdala and auditory cortex
Cheung et al. use a machine-learning model to mathematically quantify the predictive uncertainty and surprise of 80,000 chords in 745 commercially successful pop songs. The authors further show that chord uncertainty and surprise jointly modulate musical pleasure, as well as activity in the amygdala, hippocampus, and auditory cortex using fMRI.
We use behavioral methods, magnetoencephalography, and functional MRI to investigate how human listeners discover temporal patterns and statistical regularities in complex sound sequences. Sensitivity to patterns is fundamental to sensory processing, in particular in the auditory system, because most auditory signals only have meaning as successions over time. Previous evidence suggests that the brain is tuned to the statistics of sensory stimulation. However, the process through which this arises has been elusive. We demonstrate that listeners are remarkably sensitive to the emergence of complex patterns within rapidly evolving sound sequences, performing on par with an ideal observer model. Brain responses reveal online processes of evidence accumulation—dynamic changes in tonic activity precisely correlate with the expected precision or predictability of ongoing auditory input—both in terms of deterministic (first-order) structure and the entropy of random sequences. Source analysis demonstrates an interaction between primary auditory cortex, hippocampus, and inferior frontal gyrus in the process of discovering the regularity within the ongoing sound sequence. The results are consistent with precision-based predictive coding accounts of perceptual inference and provide compelling neurophysiological evidence of the brain’s capacity to encode high-order temporal structure in sensory signals.
Music ranks among the greatest human pleasures. It consistently engages the reward system, and converging evidence implies it exploits predictions to do so. Both prediction confirmations and errors are essential for understanding one's environment, and music offers many of each as it manipulates interacting patterns across multiple timescales. Learning models suggest that a balance of these outcomes (i.e., intermediate complexity) optimizes the reduction of uncertainty to rewarding and pleasurable effect. Yet evidence of a similar pattern in music is mixed, hampered by arbitrary measures of complexity. In the present studies, we applied a well-validated information-theoretic model of auditory expectation to systematically measure two key aspects of musical complexity: predictability (operationalized as information content IC), and uncertainty (entropy). In Study 1, we evaluated how these properties affect musical preferences in 43 male and female participants; in Study 2, we replicated Study 1 in an independent sample of 27 people and assessed the contribution of veridical predictability by presenting the same stimuli seven times. Both studies revealed significant quadratic effects of IC and entropy on liking that outperformed linear effects, indicating reliable preferences for music of intermediate complexity. An interaction between IC and entropy further suggested preferences for more predictability during more uncertain contexts, which would facilitate uncertainty reduction. Repeating stimuli decreased liking ratings but did not disrupt the preference for intermediate complexity. Together, these findings support long-hypothesized optimal zones of predictability and uncertainty in musical pleasure with formal modeling, relating the pleasure of music listening to the intrinsic reward of learning.
Abstract pleasures, such as music, claim much of our time, energy, and money despite lacking any clear adaptive benefits like food or shelter. Yet as music manipulates patterns of melody, rhythm, and more, it proficiently exploits our expectations. Given the importance of anticipating and adapting to our ever-changing environments, making and evaluating uncertain predictions can have strong emotional effects. Accordingly, we present evidence that listeners consistently prefer music of intermediate predictive complexity, and that preferences shift toward expected musical outcomes in more uncertain contexts. These results are consistent with theories that emphasize the intrinsic reward of learning, both by updating inaccurate predictions and validating accurate ones, which is optimal in environments that present manageable predictive challenges (i.e., reducible uncertainty).
Previous studies of auditory expectation have focused on the expectedness perceived by listeners retrospectively in response to events. In contrast, this research examines predictive uncertainty, a property of listeners' prospective state of expectation prior to the onset of an event. We examine the information-theoretic concept of Shannon entropy as a model of predictive uncertainty in music cognition. This is motivated by the Statistical Learning Hypothesis, which proposes that schematic expectations reflect probabilistic relationships between sensory events learned implicitly through exposure. Using probability estimates from an unsupervised, variable-order Markov model, 12 melodic contexts high in entropy and 12 melodic contexts low in entropy were selected from two musical repertoires differing in structural complexity (simple and complex). Musicians and non-musicians listened to the stimuli and provided explicit judgments of perceived uncertainty (explicit uncertainty). We also examined an indirect measure of uncertainty computed as the entropy of expectedness distributions obtained using a classical probe-tone paradigm where listeners rated the perceived expectedness of the final note in a melodic sequence (inferred uncertainty). Finally, we simulate listeners' perception of expectedness and uncertainty using computational models of auditory expectation. A detailed model comparison indicates which model parameters maximize fit to the data and how they compare to existing models in the literature. The results show that listeners experience greater uncertainty in high-entropy musical contexts than low-entropy contexts. This effect is particularly apparent for inferred uncertainty and is stronger in musicians than non-musicians. Consistent with the Statistical Learning Hypothesis, the results suggest that increased domain-relevant training is associated with an increasingly accurate cognitive model of probabilistic structure in music.
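The pipeline described here can be sketched at a much-reduced scale. The following is a first-order (bigram) Markov model rather than the unsupervised variable-order model the study actually uses, and the toy corpus is invented; it shows the principle that statistical learning amounts to accumulating transition statistics from exposure, and that a context's predictive uncertainty is the entropy of the learned conditional distribution.

```python
from collections import Counter, defaultdict
import math

def train_bigram(sequences):
    """Learn first-order transition counts from note sequences --
    a greatly simplified stand-in for a variable-order Markov model."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts

def predictive_dist(counts, context):
    """Maximum-likelihood distribution over next notes given the last note."""
    total = sum(counts[context].values())
    return {note: n / total for note, n in counts[context].items()}

def context_entropy(counts, context):
    """Shannon entropy (bits) of the predictive distribution:
    the model's uncertainty in this melodic context."""
    dist = predictive_dist(counts, context)
    return -sum(p * math.log2(p) for p in dist.values())

# Toy corpus of scale-step melodies (hypothetical, not the study's stimuli)
corpus = [["C", "D", "E", "D", "C"], ["C", "D", "C", "D", "E"]]
model = train_bigram(corpus)
print(predictive_dist(model, "D"))          # → {'E': 0.5, 'C': 0.5}
print(round(context_entropy(model, "D"), 3))  # → 1.0
```

In this toy corpus, "D" is a maximally uncertain (high-entropy) context because its continuations are equiprobable, while "C" is a low-entropy context; the study's high- and low-entropy melodic contexts were selected on exactly this kind of criterion, only with a far richer model and real repertoires.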
Following in a psychological and musicological tradition beginning with Leonard Meyer, and continuing through David Huron, we present a functional, cognitive account of the phenomenon of expectation in music, grounded in computational, probabilistic modeling. We summarize a range of evidence for this approach, from psychology, neuroscience, musicology, linguistics, and creativity studies, and argue that simulating expectation is an important part of understanding a broad range of human faculties, in music and beyond.
Perceptual pleasure and its concomitant hedonic value play an essential role in everyday life, motivating behavior and thus influencing how individuals choose to spend their time and resources. However, how pleasure arises from perception of sensory information remains relatively poorly understood. In particular, research has neglected the question of how perceptual representations mediate the relationships between stimulus properties and liking (e.g., stimulus symmetry can only affect liking if it is perceived). The present research addresses this gap for the first time, analyzing perceptual and liking ratings of 96 nonmusicians (power of 0.99) and finding that perceptual representations mediate effects of feature‐based and information‐based stimulus properties on liking for a novel set of melodies varying in balance, contour, symmetry, or complexity. Moreover, variability due to individual differences and stimuli accounts for most of the variance in liking. These results have broad implications for psychological research on sensory valuation, advocating a more explicit account of random variability and the mediating role of perceptual representations of stimulus properties.
Appreciation relies on perceiving sensory information. Ninety‐six nonmusicians rated the balance, contour, symmetry, complexity, unpredictability, and liking for a novel set of musical stimuli. Structural equation modeling (SEM) revealed that perceptual representations mediate the impact of stimulus properties on liking, with substantial variability between and within participants and stimuli. These findings underscore the importance of this neglected relationship and individual differences in understanding the psychological mechanisms of sensory evaluation.
We present the results of a study testing the often-theorized role of musical expectations in inducing listeners’ emotions in a live flute concert experiment with 50 participants. Using an audience ...response system developed for this purpose, we measured subjective experience and peripheral psychophysiological changes continuously. To confirm the existence of the link between expectation and emotion, we used a threefold approach. (1) On the basis of an information-theoretic cognitive model, melodic pitch expectations were predicted by analyzing the musical stimuli used (six pieces of solo flute music). (2) A continuous rating scale was used by half of the audience to measure their experience of unexpectedness toward the music heard. (3) Emotional reactions were measured using a multicomponent approach: subjective feeling (valence and arousal rated continuously by the other half of the audience members), expressive behavior (facial EMG), and peripheral arousal (the latter two being measured in all 50 participants). Results confirmed the predicted relationship between high-information-content musical events, the violation of musical expectations (in corresponding ratings), and emotional reactions (psychologically and physiologically). Musical structures leading to expectation reactions were manifested in emotional reactions at different emotion component levels (increases in subjective arousal and autonomic nervous system activations). These results emphasize the role of musical structure in emotion induction, leading to a further understanding of the frequently experienced emotional effects of music.
Musical Aesthetic Sensitivity. Clemente, Ana; Pearce, Marcus T.; Nadal, Marcos. Psychology of Aesthetics, Creativity, and the Arts, Volume 16, Issue 1, February 2022. Journal article, peer reviewed, open access.
Empirical aesthetics has mainly focused on general and simple relations between stimulus features and aesthetic appreciation. Consequently, to explain why people differ so much in what they like and prefer continues to be a challenge for the field. One possible reason is that people differ in their aesthetic sensitivity, that is, the extent to which they weigh certain stimulus features. Studies have shown that people vary substantially in their aesthetic sensitivities to visual balance, contour, symmetry, and complexity and that this variation explains why people like different things. Our goal here was to extend this line of research to music and examine aesthetic sensitivity to musical balance, contour, symmetry, and complexity. Forty-eight nonmusicians rated their liking for 96 4-s Western tonal musical motifs, arranged in four subsets varying in balance, contour, symmetry, or complexity. We used linear mixed-effects models to estimate individual differences in the extent to which each musical attribute determined their liking. The results showed that participants differed remarkably in the extent to which their liking was explained by musical balance, contour, symmetry, and complexity. Furthermore, a retest after 2 weeks showed that this measure of aesthetic sensitivity is reliable and suggests that aesthetic sensitivity is a stable personal trait. Finally, cluster analyses revealed that participants divided into two groups with different aesthetic sensitivity profiles, which were also largely stable over time. These results shed light on aesthetic sensitivity to musical content and are discussed in relation to comparable existing research in empirical aesthetics.
Simultaneous consonance is a salient perceptual phenomenon corresponding to the perceived pleasantness of simultaneously sounding musical tones. Various competing theories of consonance have been proposed over the centuries, but recently a consensus has developed that simultaneous consonance is primarily driven by harmonicity perception. Here we question this view, substantiating our argument by critically reviewing historic consonance research from a broad variety of disciplines, reanalyzing consonance perception data from 4 previous behavioral studies representing more than 500 participants, and modeling three Western musical corpora representing more than 100,000 compositions. We conclude that simultaneous consonance is a composite phenomenon that derives in large part from three phenomena: interference, periodicity/harmonicity, and cultural familiarity. We formalize this conclusion with a computational model that predicts a musical chord's simultaneous consonance from these three features, and release this model in an open-source R package, incon, alongside 15 other computational models also evaluated in this paper. We hope that this package will facilitate further psychological and musicological research into simultaneous consonance.