In the Alexa Developer Console, Audrey experimented with ‘hacking’ (or modifying) Alexa’s voice. Through simple commands, it is possible to make the voice whisper, slow down, speed up, or change pitch. It is also possible to change the voice itself, choosing among Ivy, Joanna, Joey, Justin, Kendra, Kimberly, Matthew, and Salli. The commands are explained here.
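As a rough sketch of what these commands look like, Alexa exposes this kind of voice control through SSML (Speech Synthesis Markup Language) tags in a skill’s responses; the exact markup Audrey used may differ, but the documented tags include:

```xml
<speak>
    <!-- Whispered speech -->
    <amazon:effect name="whispered">I can whisper this part.</amazon:effect>
    <!-- Slower rate and lower pitch -->
    <prosody rate="slow" pitch="low">Or slow down and drop my pitch.</prosody>
    <!-- Switch to a different Amazon Polly voice -->
    <voice name="Joanna">Or speak as Joanna instead of the default voice.</voice>
</speak>
```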
Audrey used sketching to visually map the various ideas we have had and to open new proposals for what a future performance could hold. She also began collecting points of departure for our design and artistic process. These points include:
- A lot of AI responses are hard coded
- People ask voice assistants many things that are not planned for (not hard coded)
- Voice assistants are still not very good at understanding human language, generating many bloopers
- Voice assistants lack context and memory
- Data is part of an ecosystem of algorithms, data collection, voice detection, voice to text technology, AWS….
- Corporations benefit from clean and ‘true’ data
- Voice assistants can’t pretend to be human; instead, how can they honor the reality that they are software?
Experiments with VoicePad application, percussion, and voice
Part I: Bonnie + Whale Fall
The smart speaker reads an original text, Whale Fall, by the writer David Crouse. Bonnie experiments with speaking in unison, reading against the speaker, and creating a percussive soundscape based on long tones.
Part II: Bonnie + Kurt Schwitters’ Ursonate
The smart speaker reads a passage from Schwitters’ landmark sound poem, Ursonate (1922-1932), while Bonnie reads the poem alongside it.
In this experiment, Bonnie probes the intersection of music and language, exploring how percussion instruments can imitate and stand in for the human voice and function as an extension of the body. She utilizes and highlights the real-time misunderstandings generated by voice-recognition AI technology. Here, she asks the smart speaker to cycle through randomized applications (news, weather, inspirational quotes, Jurassic Bark, reader, Shopping List, My Questions, Metronome Lite, etc.) while she cycles through randomized improvisational musical directives (imitate, accompany, cover, long tones, various dynamic shadings, pointillistic activity, etc.).
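The randomized pairing of a speaker application with a musical directive could be sketched as follows; the pools below are drawn from the lists above, and `next_cue` is a hypothetical name for the cue-drawing step, not part of any actual performance software:

```python
import random

# Pools based on the performance setup described above (hypothetical subset)
applications = ["news", "weather", "inspirational quotes", "Jurassic Bark",
                "reader", "Shopping List", "My Questions", "Metronome Lite"]
directives = ["imitate", "accompany", "cover", "long tones",
              "dynamic shading", "pointillistic activity"]

def next_cue(rng=random):
    """Draw one randomized application for the smart speaker and one
    improvisational directive for the performer."""
    return rng.choice(applications), rng.choice(directives)

app, directive = next_cue()
print(f"Speaker: {app} / Performer: {directive}")
```

Each call yields an independent pairing, so the performer cannot anticipate which application will meet which musical strategy.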
In other experiments not documented above, she works through both text-to-speech and speech-to-text applications, revealing the smart speaker’s unnerving proclivity to advertise. When the Amazon Echo speaker was asked to record and read back excerpts from Kurt Schwitters’ mid-1920s landmark sound poem Ursonate, for example, rather than the poet’s nonsense syllables we heard the AI’s closest approximation, yielding words like “Bezos” and “Whole Foods.” Speaking along with the smart speaker (and through her instruments), Bonnie emphasizes the uncanny valley that separates humans and machines, working to hack and manipulate the speaker as a generator of abstract sound poetry and music.
We started this project in October 2019. We meet weekly, combining discussion with making and demos. On this blog we will share elements of our process and work-in-progress images, videos, and thoughts.