Experiments with VoicePad application, percussion, and voice
Part I: Bonnie + Whale Fall
The smart speaker reads an original text, Whale Fall, by writer David Crouse. Bonnie experiments with speaking in unison with the speaker, reading against it, and creating a percussive soundscape based on long tones.
Part II: Bonnie + Kurt Schwitters’ UrSonate
The smart speaker reads a passage from Schwitters’ landmark sound poem UrSonate (1922–1932) while Bonnie reads along.
In this experiment, Bonnie probes the intersection of music and language, exploring how percussion instruments can imitate and stand in for the human voice, functioning as an extension of the body. She utilizes and highlights the real-time misunderstandings generated by voice-recognition AI technology. Here, she asks the smart speaker to cycle through randomized applications (news, weather, inspirational quotes, Jurassic Bark, reader, Shopping List, My Questions, Metronome Lite, etc.) while she cycles through randomized improvisational musical directives (imitate, accompany, cover, long tones, various dynamic shadings, pointillistic activity, etc.).
In other experiments not documented above, she puts these real-time misunderstandings to further use. Through both text-to-speech and speech-to-text applications, she reveals the smart speaker’s unnerving proclivity to advertise. When the Amazon Echo was asked to record and read back excerpts from Kurt Schwitters’ landmark sound poem UrSonate, for example, rather than the poet’s nonsense syllables we heard the AI’s closest approximation, yielding words like “Bezos” and “Whole Foods.” Speaking along with the smart speaker (and through her instruments), Bonnie emphasizes the uncanny valley that separates humans and machines, working to hack and manipulate the speaker as a generator of abstract sound poetry and music.