Abstracts

MUSIC, LANGUAGE, AND EMOTION WORKSHOP II:

Friday, March 6, 2014

ISABELLE PERETZ (University of Montreal)
Musical invasion of the neural circuits for vocal emotions

Limbic brain areas (e.g., the amygdala and striatum) associated with basic biological functions such as sex and pain are recruited by musical emotions. These observations suggest that music has co-opted the emotional circuits that evolved for vocalizations.

PSYCHE LOUI (Wesleyan University)
Action and Perception in the Musical Brain

Music is a fundamentally human activity, celebrated worldwide and from a young age, but why we know and love our music has remained a mystery. Epic successes and failures in the human history of music are informative about the neural systems of perception, production, cognition, and emotion. I will describe behavioral and neuroimaging studies (MRI, DTI, fMRI, graph-theoretical analyses) that use music as a model to understand, and to capitalize on, the interaction of multiple systems in the human brain. Results suggest that much of what we know and love about music is learned from the statistics of sounds in the environment, and that structural and functional connectivity between perceptual and motor systems subserves musical experiences that overlap to different extents with linguistic and affective communication. I close with recent advances in the lab that draw on music cognition research to aid in the management of abnormal brain activity.
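
As a rough illustration of the statistical-learning claim (my sketch of the general idea, not the lab's analysis pipeline), the snippet below estimates first-order transition probabilities between pitches in a toy melody; the melody and function name are invented for the example.

```python
# Toy sketch of statistical learning over a pitch sequence.
# The melody and all names here are illustrative, not from the study.
from collections import Counter, defaultdict

def transition_probabilities(pitches):
    """Estimate P(next_pitch | current_pitch) from a sequence of pitches."""
    counts = defaultdict(Counter)
    for current, nxt in zip(pitches, pitches[1:]):
        counts[current][nxt] += 1
    return {
        current: {nxt: n / sum(nexts.values()) for nxt, n in nexts.items()}
        for current, nexts in counts.items()
    }

# A made-up melody in MIDI note numbers (C4 = 60).
melody = [60, 62, 64, 62, 60, 62, 64, 65, 64, 62, 60]
print(transition_probabilities(melody))
```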

DAVID TEMPERLEY (Eastman School of Music)
Arbitrariness in Language and Music

The linguist Charles Hockett proposed arbitrariness—the arbitrary association between word forms and meanings—as an essential feature of human language. In this paper I consider whether arbitrariness is also a feature of music. In contrast to earlier treatments, I bypass the question of meaning and simply consider the functions of musical elements—pitches, chords, keys, tempi, etc.—in relation to other levels of representation. The question then becomes: are the functions of musical elements related to their sonic forms in an arbitrary way (meaning that they must be stored in a lexicon or “lookup list”), or in a principled way, meaning that they can be derived using simple principles? While arbitrariness is (by this definition) a matter of degree, I conclude that form-function mappings in music are mostly not arbitrary, and that arbitrariness plays a much greater role in language than it does in music.
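
One way to make the lookup-list-versus-principle contrast concrete (my illustration, not Temperley's formalism): a word's meaning must simply be listed, whereas a chord's coarse harmonic function can be computed from its root and the key. The lexicon entries and the function table below are placeholders for the sake of the example.

```python
# Illustrative contrast between stored (arbitrary) and derived (principled) form-function mappings.

# Language: an arbitrary lexicon -- nothing about the form "dog" predicts its meaning.
lexicon = {"dog": "DOG", "chien": "DOG", "Hund": "DOG"}

# Music: a chord's function follows from a simple principle --
# the interval between its root and the tonic of the key.
FUNCTIONS = {0: "tonic", 5: "subdominant", 7: "dominant"}  # semitones above the tonic

def chord_function(root_pc, tonic_pc):
    """Derive a coarse harmonic function from root and key, without listing the chord itself."""
    return FUNCTIONS.get((root_pc - tonic_pc) % 12, "other")

print(chord_function(7, 0))  # G chord in C major -> "dominant"
print(chord_function(0, 5))  # C chord in F major -> "dominant"
```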

DAVID POEPPEL (New York University & Max-Planck-Institute, Frankfurt)
The temporal envelope of speech and music: what might a brain encode?

The information that the ear receives when listening to speech or music has particular amplitude modulation properties. I first quantify and compare the acoustic rhythm of speech and music, focusing on one aspect, the temporal modulation spectrum. A corpus analysis is applied to speech across three languages and to music across several genres, including classical music played by single instruments and ensemble music (symphonic music, rock, and jazz). The analysis reveals remarkably consistent rhythmic properties within speech and within music — but clear distinctions between the two categories. The data highlight potentially universal differences between these fundamental domains of human communication. Focusing on speech and language, I go on to show how such rhythmic encoding can form the basis for more abstract, structural processing. New results demonstrate that, while listening to connected speech, cortical activity at different time scales is entrained concurrently to track the time course of linguistic structures at different hierarchical levels. Critically, entrainment to hierarchical linguistic structures is dissociated from the neural encoding of acoustic cues and from processing the predictability of incoming words. The results demonstrate syntax-driven, internal construction of hierarchical linguistic structure via entrainment of hierarchical cortical dynamics.
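
A minimal sketch of one generic way a temporal modulation spectrum can be estimated (extract the amplitude envelope, then take the spectrum of that envelope); this is not the specific corpus method described in the talk, and the test signal, sampling rate, and parameters below are placeholders.

```python
# Generic sketch: estimate a temporal modulation spectrum for one audio signal.
import numpy as np
from scipy.signal import hilbert

def modulation_spectrum(signal, sample_rate, envelope_rate=200):
    """Return modulation frequencies (Hz) and envelope power for one signal."""
    envelope = np.abs(hilbert(signal))               # amplitude envelope
    block = int(sample_rate // envelope_rate)        # crude downsampling by block averaging
    n_blocks = len(envelope) // block
    env = envelope[:n_blocks * block].reshape(n_blocks, block).mean(axis=1)
    env = env - env.mean()
    power = np.abs(np.fft.rfft(env)) ** 2
    freqs = np.fft.rfftfreq(len(env), d=block / sample_rate)
    return freqs, power

# Placeholder input: 10 s of noise amplitude-modulated at ~5 Hz (a speech-like rate).
fs = 16000
t = np.arange(0, 10, 1 / fs)
x = np.random.randn(len(t)) * (1 + 0.5 * np.sin(2 * np.pi * 5 * t))
freqs, power = modulation_spectrum(x, fs)
print(freqs[np.argmax(power[1:]) + 1])  # peak modulation frequency, near 5 Hz
```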

FRED LERDAHL (Columbia University)
Poetry as Music

The sounds of poetry, like those of music, combine perceptually into hierarchically organized structures. It is therefore possible to treat poetic sounds as if they were music. Using Ray Jackendoff’s and my cognitively oriented music theory along with contemporary work in generative phonology, I explore this idea by developing a rule system that assigns to poetic lines word groupings, stress and metrical grids, syllable durations, intonational contours, and hierarchical patterns of syllabic repetition and contrast. These structures are illustrated through the derivation of an analysis of a short poem by Robert Frost, “Fire and Ice.” Three audio readings of the poem are compared to the analysis. In addition to providing a systematic method of poetic analysis, this study suggests what structural features poetry and music do and do not share.
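
As a toy illustration of the kind of stress/metrical grid such a rule system assigns (my simplified encoding with assumed prominence values, not Lerdahl's actual analysis of the poem), each syllable can be represented as a column of beats whose height marks its relative prominence.

```python
# Toy metrical grid for one line of verse (illustrative encoding, not Lerdahl's analysis).
# Grid level 1 marks every syllable; higher levels mark progressively stronger beats.
line = ["Some", "say", "the", "world", "will", "end", "in", "fire"]
grid_heights = [1, 2, 1, 2, 1, 2, 1, 3]  # assumed relative prominences for the example

def print_grid(syllables, heights):
    """Render a metrical grid: one column of x's per syllable, tallest = most prominent."""
    for level in range(max(heights), 0, -1):
        row = ["x" if h >= level else " " for h in heights]
        print("  ".join(f"{mark:<5}" for mark in row))
    print("  ".join(f"{s:<5}" for s in syllables))

print_grid(line, grid_heights)
```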

MARGARET SCHEDEL (Stony Brook University)
Tattoo of a Gesture

In 2013, percussionist Patti Cudd asked me to write her a piece that could travel well and used her twenty-inch bendir as a focal point. This constraint dictated the orchestration—a small bongo rounds out the membranophones, while elephant bells, singing bowls, and cymbals create a metallic orchestra. Inspired by the range of sounds Xenakis and Gordon were able to coax out of simple 2×4s, I included three slats of wood cut to fit the dimensions of her suitcase, with various treatments including moleskin, sandpaper, and drilled holes. A number of striking and muting implements allow me to generate a wide variety of sounds and textures from this small set of instruments, which are then processed and augmented by the electronics.

The piece has nine movements: I gave Patti ten phrases; she chose one for the title and ordered the remaining nine. Each of the movements uses the same processing techniques in the same order, although they can be compressed into 1.5 minutes or developed over 6.5 minutes. These fixed positions define the form, but the live percussion projects through these potentialities in very different ways. The first step of the processing acts almost like a sieve—only allowing certain sets of frequencies at specific times to be analyzed and manipulated by the computer. In this way, each movement shows a different angle on the piece; the whole is only suggested, heard behind a tattered veil.
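
The sieve stage can be pictured as a time-varying frequency mask; the sketch below is a generic illustration of that idea and makes no claim about the actual electronics or the IEM patch. All times, bands, and the helper function are invented for the example.

```python
# Generic sketch of a frequency "sieve": only listed bands, at listed times, pass to analysis.
# Times, bands, and this helper are illustrative, not the piece's actual electronics.
def sieve(time_s, freq_hz, windows):
    """windows: list of (t_start, t_end, f_low, f_high). Return True if (time, freq) passes."""
    return any(t0 <= time_s < t1 and f0 <= freq_hz < f1
               for (t0, t1, f0, f1) in windows)

movement_windows = [(0.0, 30.0, 200.0, 400.0),     # a low metallic band early on
                    (30.0, 90.0, 1000.0, 4000.0)]  # a brighter band later

print(sieve(10.0, 300.0, movement_windows))   # True: inside an open window
print(sieve(10.0, 2000.0, movement_windows))  # False: filtered out by the sieve
```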

My good friend David Wetzel realized the electronic portion using his Interactive Event Manager (IEM), a scriptable, modular environment for interactive computer music. I designed the electronics knowing the capabilities of his system. Many thanks to Christopher Howard, a doctoral percussion student at Stony Brook University, who helped with the development of the percussion mechanics and techniques.

FALL 2014 MONTHLY SPEAKER SERIES

Wednesday, December 10, 2014  5:00pm

JOHN DRURY (Department of Linguistics, Stony Brook University)
Music, Language, Math, and Visual Narrative: In Search of Shared/Distinct Neurocognitive Mechanisms

Any short list of what makes humans special in the biological world arguably must make reference to: (i) language, (ii) music, (iii) precise number and mathematics, and (iv) visual narrative (i.e., our ability to extract a story from sequences of images). Electrophysiological research has found superficially similar brain response profiles for analogous types of stimulus manipulations across these domains. Further, studies examining language and music together have offered data consistent with overlap of at least some of the underlying processing circuitry. Such cross-domain studies have made it clear that we have much to gain by examining systems like language and music together. However, a broader look at recent work examining language, music, and number/math reveals a number of conflicting findings and unresolved puzzles, with some data pointing to overlapping mechanisms and other data suggesting that distinct mechanisms are at work across these domains. In ongoing ERP studies we examine visual processing of comics, sentences, and number sequences with simultaneous auditory presentation of music (chord progressions) to investigate patterns of processing interference across these domains. In this talk I will present some preliminary data which already show considerable promise for efforts to identify and individuate brain response profiles with respect to whether they index shared (domain-general) versus distinct (domain-specific) underlying mechanisms.

Tuesday, November 4, 2014  5:30pm in SBS S207 (Linguistics Dept. Seminar Room)

LEE BLACKSTONE (Department of Sociology, SUNY Old Westbury)
Webs of Meaning:  The Sociology of Music and an Interpretation of Southern Italian Tarantism

The ancient music of the tarantism ritual, a ceremony to treat a supposed ‘spider bite,’ is a particular cultural formation of the southern Salento region of Italy. The music known as the pizzica tarantata allowed southern Italians to express, and endure, their conditions of hardship and alienation. In this context, music is a healing medium against social alienation. The trajectory of tarantism’s history, however, has been fraught, and religious opinions as well as the advance of modernity resulted in a ‘painful listening’ experience. Today, that past has been superseded by cultural activism and commercialism, moving the once-stigmatized music towards social acceptability. This example allows us to examine how tradition and local culture are shaped by history, and how music may be transformed and reclaimed for political and cultural resistance. My discussion will consist of the following: first, a consideration of music and its relevance to society; second, a discussion of the concept of ‘noise’ and how it bridges concerns about music and deviance; and finally, a discussion of tarantism as found in the Salento.

Tuesday, October 7, 2014  5:30pm in SBS S207 (Linguistics Dept. Seminar Room)

ANDREW CONKLIN (Department of Music, Stony Brook University)
A grid-based look at syncopation in popular music

Although researchers are generally cautious when attempting to draw fundamental connections between music and language, there is a general consensus that meter plays a role in both domains. If metrical grids exist in both language and music, any musical setting of text must involve two independent, interacting meters. In this talk, I outline a view of syncopation in popular music that focuses on the interaction between linguistic and musical metrical grids. I build on David Temperley’s (1999, 2001) research concerning syncopation in rock music by proposing a distinction between internal syncopation, which describes a conflict between musical rhythm and meter, and external syncopation, in which the accent patterns of the text are misaligned with the underlying musical metrical grid. I follow with a case study analysis of the Beach Boys’ “God Only Knows,” which explores how some of Temperley’s ideas might be applied in the context of a traditional musical analysis.
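
One rough way to operationalize external syncopation in this sense (my sketch, not Conklin's or Temperley's formal measure) is to count stressed syllables of the text that land on metrically weak positions of the musical grid. The stress pattern, grid weights, and scoring rule below are all invented for the illustration.

```python
# Sketch: external syncopation as misalignment between text stress and the musical grid.
# The stress pattern, grid, and scoring rule are illustrative, not the talk's formal measure.
def external_syncopation(stresses, grid_positions, grid_strength):
    """
    stresses: 1 for stressed syllables, 0 for unstressed.
    grid_positions: index of the grid position where each syllable falls.
    grid_strength: metrical weight of each grid position (higher = stronger beat).
    Returns the number of stressed syllables placed on weak positions.
    """
    weak = {i for i, s in enumerate(grid_strength) if s <= 1}
    return sum(1 for stress, pos in zip(stresses, grid_positions)
               if stress == 1 and pos in weak)

# One 4/4 bar in eighth notes: strengths for positions 1 & 2 & 3 & 4 &
grid_strength  = [3, 0, 1, 0, 2, 0, 1, 0]
stresses       = [1, 0, 1, 0, 1]   # made-up five-syllable phrase
grid_positions = [0, 1, 3, 4, 6]   # two of the three stresses fall off the strong beats
print(external_syncopation(stresses, grid_positions, grid_strength))  # 2
```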

MUSIC, LANGUAGE AND EMOTION WORKSHOP: April 12, 2013

MICHAEL SCHOBER & NETA SPIRO:
How much do jazz players share understanding of their performance?  A case study

When musicians collaborate, they predict, perceive, and react to what their partners do in complex ways. To what extent and in what arenas do they need to understand what they are doing in the same way? There must be at least some intersubjective agreement if musicians are to follow a shared rhythm and create a plausible joint performance, but must everything be agreed upon? The aim in this case study was to explore the extent to which a pair of experienced jazz musicians understand what they have done together in the same way: whether they spontaneously generate the same descriptions of their performances and intentions, and, when they do not, whether they agree with their partner’s characterization of what happened and what was intended. The high quality of the performances, combined with the disparities in agreement, suggests that, at least in this case study, fully shared understanding of what happened is not essential for successful improvisation. The fact that the performers endorsed an expert listener’s statements more than (some of) their partner’s suggests that the performers’ interpretations may not be privileged relative to an outsider’s, at least in some respects.

ARNIE COX: Tripartite Subjectivity in Music Listening

On the face of it, music listeners adopt a second-person position in relation to the first-person position of performers. However, recent empirical work related to perception indicates that music perception involves vicarious performance of one form or another, whether we refer to it as simulation, physical empathy, or mimetic participation. One implication is that listeners thus simultaneously adopt a quasi-first-person position as vicarious performers, which results in a bipartite subjectivity in music listening. But at the same time, listeners also reflect upon and conceptualize their experience in real time, to varying degrees, and in doing so adopt a quasi-third-person position in which one “steps back” from immediate immersion and takes an analytical stance. The composite result is then a tripartite subjectivity in music listening.

This three-part view bears fundamental implications for the nature of the listening experience, of which I will focus on two.  The first concerns the nature of musical tension, which we can understand not as an objective property of the musical sounds but as something that emerges in tripartite experience—as something contingent not only upon the sounds but also upon our mimetic participation (quasi-first-person) and low-level conceptualization (quasi-third-person).  The second implication concerns how mimetic participation relates to the “singability” and “danceability” of most music, and how this in turn bears upon the experience and evaluation of “modernist” music.  Aversive responses to such music can be understood to indicate the implicit value that many listeners place upon easy quasi-first-person engagement with music.  A view of the listening experience as tripartite thus offers a way of understanding some of the bases of aesthetic preference and value.

CAROL LYNNE KRUMHANSL: Music and Memory

The topic of my talk is how the brain remembers so much music, along with its strong personal associations. I will present two recent studies using popular music. The first study focused specifically on the detail of musical memories. Very short clips of top hits from the last five decades were recognized frequently; listeners knew the artist, title, decade, style, and emotional content. This implies that musical memories are so detailed that listeners can match a clip to a large library of remembered music. The second study, with a more systematic sampling of music, focused on autobiographical memories. As would be expected, more recent music evoked the most memories. However, there was also a bump for music of the listeners’ parents’ generation, and perhaps for their grandparents’ music as well. Previous research shows lifelong preferences for music from late adolescence and early adulthood, the “reminiscence bump.” This study suggests that the reminiscence bump may be transmitted across generations.

JOHN DRURY & NICOLE CALMA: Revisiting Shared Resources for Language and Music

To what extent do language and music involve shared underlying brain mechanisms? In this talk we present new data relevant to our understanding of the shared syntactic integration resource hypothesis (SSIRH; Patel 2003). The SSIRH claims that although musical and linguistic representations are stored separately in the brain, structural (“syntactic”) processing in these domains is supported by shared mechanisms. One kind of evidence in favor of the SSIRH comes from studies using event-related brain potentials (ERPs), which are extracted from scalp-measured brain activity (EEG) time-locked to the presentation of stimuli. In language, disruptions of syntactic processing (e.g., “He will KICKED the ball”) can elicit a two-part negative/positive complex, consisting of a negative shift with a left/anterior distribution on the scalp (“LAN”) followed by a large, more posterior, positive wave (“P600”). In ERP studies designed to target “musical syntax” (e.g., out-of-key notes, incongruent chord progressions, etc.), response patterns have been reported that are strikingly similar to those seen for linguistic syntactic violations. Importantly, studies looking at simultaneous processing of language and music have yielded results consistent with shared mechanisms underlying the anterior negative-going ERP effects. However, to our knowledge there is no comparable evidence for P600 effects. We argue that previous language/music P600 data are compatible with either shared or distinct underlying systems. In the present study we elicited P600 effects for language, music, and “double” language/music violations, demonstrating a pattern that indicates distinct underlying neural generators. We conclude that though the SSIRH may be correct, the way P600 effects relate to the model must be rethought.
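
For readers unfamiliar with the technique, ERPs are obtained by averaging many EEG epochs time-locked to stimulus onsets; the sketch below shows only this generic averaging step, with made-up array shapes and random data rather than the study's actual recording and analysis pipeline.

```python
# Generic ERP averaging sketch (illustrative shapes and random data, not the study's pipeline).
import numpy as np

def erp(eeg, onsets, sample_rate, pre_s=0.2, post_s=0.8):
    """Average channels x time epochs of `eeg` around each stimulus onset (in samples)."""
    pre, post = int(pre_s * sample_rate), int(post_s * sample_rate)
    epochs = [eeg[:, t - pre:t + post] for t in onsets
              if t - pre >= 0 and t + post <= eeg.shape[1]]
    epochs = np.stack(epochs)                              # trials x channels x time
    baseline = epochs[:, :, :pre].mean(axis=2, keepdims=True)
    return (epochs - baseline).mean(axis=0)                # channels x time average

# Made-up data: 32 channels, 60 s of EEG at 250 Hz, 50 random stimulus onsets.
fs = 250
eeg = np.random.randn(32, fs * 60)
onsets = np.sort(np.random.randint(fs, fs * 59, size=50))
print(erp(eeg, onsets, fs).shape)  # (32, 250): the averaged waveform per channel
```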

ROBERT SLEVC: Common structural processes in language and music

Language and music are both richly structured systems in which discrete elements are integrated into hierarchical relationships over time. Growing evidence suggests that these systems share some organizational principles and might also draw on similar cognitive resources. I will discuss some evidence from my lab and others that, while the representations involved in linguistic and musical structure may differ (e.g., parts of speech in language; chord functions in music), real-time processing of language and music does in fact draw on shared resources. Original conceptions of this relationship centered on a shared syntactic resource; however, more recent data suggest that shared processing is involved in other kinds of time-spanning structures as well (e.g., semantic ambiguity in language can also interact with musical manipulations). Thus the question becomes: exactly what sort of common cognitive mechanisms are involved in the processing of linguistic and musical structure? I will argue for the role of cognitive control as part of the basic (non-syntax-specific) cognitive mechanisms underlying both language and music perception. Cognitive control refers to the process(es) involved in detecting and resolving conflict that occurs, for example, when expectations are violated. Growing work suggests that cognitive control plays an important role in aspects of linguistic syntactic processing, and I will discuss some recent work suggesting that cognitive control mechanisms are also recruited to process musical structure. These data take a step toward a more specific account of exactly what shared integration processes underlie the processing of language and of music.

DAVID PESETSKY: Language and Music: same structures, different building blocks

This talk reports on joint work with Jonah Katz.

Is there a special kinship between music and language? Both are complex, law-governed cognitive systems. Both are universal across the human species, yet show some variation from culture to culture. Do the similarities run deeper than this? Although there is a rich tradition of speculation on this question, the current consensus among researchers is quite cautious. In this talk, we offer a linguist’s perspective on the issue — and argue against the cautious consensus. Though the formal properties of music and language do differ, we propose that these differences reflect what is obvious: that the fundamental building blocks of language and music are different (for example, words vs. pitches). In all other respects, however – in what they do with these building blocks – language and music are identical. We call this proposal the Identity Thesis for Language and Music.

In particular, we propose, developing and extending earlier proposals by Lerdahl & Jackendoff (1983), that music, like language, contains a syntactic component in which headed structures are built by the same recursive rule that is also central to linguistic syntax, the rule called Merge.  Time permitting, we will present some pilot experimental results that bear on this proposal.  We further argue that the species of Merge known to linguists as syntactic movement (= Internal Merge) is also found in music, and is a crucial element in the main key-defining device of Western tonal music: the perfect cadence.
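
As a minimal illustration of Merge as a recursive, headed structure-building operation (a toy sketch of the general idea, not Katz & Pesetsky's formal system), two objects can be combined into a constituent labeled by one of them, and the result can itself be merged again. The representation and the hard-coded head choice below are simplifications for the example.

```python
# Toy sketch of Merge: combine two objects into a headed constituent, recursively.
# The labeling rule (explicit head choice) is simplified for illustration.
def leaf(label):
    return {"label": label, "parts": None}

def merge(a, b, head):
    """Combine a and b into one constituent whose label projects from the chosen head."""
    label = a["label"] if head == "a" else b["label"]
    return {"label": label, "parts": (a, b)}

# Build "read the book": Merge(the, book) headed by "the", then Merge(read, [the book]).
dp = merge(leaf("the"), leaf("book"), head="a")
vp = merge(leaf("read"), dp, head="a")
print(vp["label"], "->", [p["label"] for p in vp["parts"]])  # read -> ['read', 'the']
```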
