This article was originally published in Issue 3.23 (Spring 2014)
Audio-Visual Translation: Seeing Voice and Hearing Space in SpokenWeb’s PoetryLab App
I’m in the process of re-reading passages of Slavoj Žižek’s Less Than Nothing, and I keep circling around the supplementary relationship he develops between voice and gaze—those two Lacanian partial objects par excellence. Žižek insists that this relationship is properly antagonistic, each object filling in the other’s lack or “blind spot.” In his words, “the voice does not simply persist at a different level with regard to what we see, it rather points towards a gap in the field of the visible, towards the dimension of what eludes our gaze. … [T]heir relationship is mediated by an impossibility: ultimately we hear things because we cannot see everything.” In my work with audio recordings, that question of the relationship between the audio and the visual comes up frequently, and in various formulations: What do we look at when we listen? What gaps or silences are inherent in different auditory media? How might visual content be used to mitigate the silences of audio artifacts, and vice versa? If, as Žižek posits, registration brings an event into being through the act of retroactive self-positing, then what types of events are respectively enabled by audio and visual documentation?
Concordia’s SpokenWeb team is tackling these questions by translating speech into image, space into sound, with its new PoetryLab mobile app: a suite of ludic close listening games. Using code from their recently shipped Jarbes game, project director Jason Camlot, designer Christine Mitchell, and programmer Ian Arawjo are bringing the archive to the streets, taking advantage of the situational and haptic dimensions that mobile technology affords. PoetryLab draws its source material from recordings of the Sir George Williams University (SGWU) Poetry Reading Series, which ran from 1966 to 1974 in Montreal. The series paired local poets with touring members of the American avant-garde and experimental circles, creating generative sites of cultural exchange. The game will serve as an introduction to the archive by offering three kinds of interactions with its artifacts: 1) Semantic puzzles, in which the user must order strings of audio into complete phrases to match lines in a recorded poem; 2) Prosodic puzzles, in which distorted clips must be matched to their comprehensible counterparts; and 3) Sound Visualization puzzles, in which the user must match visualizations—extrapolations of the waveform—to their original sound clips. Here, the visualizations (pitch curves, spectrograms, amplitude swells, etc.) help supply the missing information in an auditory riddle. The app may also include an audio tour of the Concordia campus (formerly SGWU) to engage locative listening practices. This feature would map archival artifacts onto the spaces in which they were produced, allowing the user to simulate the visual and spatial dimensions of the event that are latent or absent in the recordings. Importantly, PoetryLab embraces the frustration of the partial object by transforming it into ludic potential; while the artifacts themselves will always be incomplete, finding new ways to circle around their silences generates new, playful encounters with the event.
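By way of illustration only (this is not PoetryLab’s code), the kind of audio-to-visual translation the Sound Visualization puzzles trade on can be sketched in a few lines of Python using the librosa library; the clip name reading_clip.wav is a hypothetical stand-in for an archival recording.

```python
# Minimal, illustrative sketch (not PoetryLab's implementation): deriving two of the
# visualizations described above, a spectrogram and an amplitude envelope, from a
# short audio clip. "reading_clip.wav" is a hypothetical file name.
import numpy as np
import librosa
import librosa.display
import matplotlib.pyplot as plt

# Load the clip at its native sampling rate
y, sr = librosa.load("reading_clip.wav", sr=None)

# Spectrogram: magnitude of the short-time Fourier transform, converted to decibels
S_db = librosa.amplitude_to_db(np.abs(librosa.stft(y)), ref=np.max)

# Amplitude envelope: root-mean-square energy per analysis frame
rms = librosa.feature.rms(y=y)[0]
rms_times = librosa.times_like(rms, sr=sr)

fig, (ax_spec, ax_rms) = plt.subplots(2, 1, figsize=(8, 6), sharex=True)
librosa.display.specshow(S_db, sr=sr, x_axis="time", y_axis="log", ax=ax_spec)
ax_spec.set(title="Spectrogram")
ax_rms.plot(rms_times, rms)
ax_rms.set(title="Amplitude envelope (RMS)", xlabel="Time (s)", ylabel="RMS")
fig.tight_layout()
plt.show()
```

A pitch curve could be produced in much the same way (for instance with librosa’s pyin estimator); the two panels here are simply meant to show how visual information can be extrapolated from a waveform.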
Slavoj Žižek, Less Than Nothing: Hegel and the Shadow of Dialectical Materialism (London: Verso, 2013), 670.
PoetryLab will launch in the fall of 2014. For more information, check out the app’s Tumblr and Twitter feed @1966to1974, and the SpokenWeb site.