Thesis

Week 2: Daily Sketches

Day 1: A song that represents the mood or vibe of my thesis world:
Instinct tells me that birdsong is too obvious and overdone, but unlike other avian-inspired classical music and hours-long forest and jungle soundscapes, this recording intrigues me with its acoustic dialogue.

Day 2: What are the worst possible ways to address my thesis statement or question?

Everyone wears voice changer masks

A precursor to any kind of animal-watching event (btw, this woman is amazing!)

A designated area in which a person enters, submits a form of communication for translation, and because feedback is obscure, confusing, or irrelevant, they leave immediately

Day 3: What are five images that represent the world or mood of your thesis?

Sources: Image 1, Image 2, Image 3, Image 4, Image 5

Some sounds to add here, as well:
Wind Chimes With Light Wind - Ambient Soundscape
I heard there was a secret chord - 360 video

I also asked people to share some of their favorite sounds with me today. I heard: birds chirping, trains revving, skateboards rolling, children laughing, bubbles popping, cats meowing, underwater gurgling… and cooking shows.

Keywords/ideas from today’s sketch: outside, nature, generative design, places for people to come together, unexpected but welcome, play

Day 4: What are five questions that I’ll investigate with my thesis project?

  1. Can you create a shared new language on the fly?

  2. What is lost in translation? What is gained?

  3. From where does meaning arise—in the understanding or in the effort of trying?

  4. What is the least number of steps necessary to learn to express yourself with a new tool?

  5. How do you create an environment that invites curiosity and welcomes sustained, playful engagement?

Day 5: What are three possible venues for this work to be shown? Why do I want to be part of these institutions or venues?

  1. Online - so that anyone, regardless of their physical location, can participate; also it makes implementation in the locations below much easier.

  2. Public squares or parks - because this could be the best unexpected happening on your routine commute home in the spring. (Hey now, I can request a special event permit for a city park!)

  3. Traditional gallery spaces - because it's even more fun to be raucous in places usually reserved for quiet contemplation.

Day 6: Three experts or types of people (not fictional nor deceased) to speak to about my thesis.
Hmmm… it feels like I’ve hit an impasse in my thinking about this project. I can’t put together a list of possible experts without a more concrete direction. Will return here when that resolves. 🤞

Week 2: Experiment 1 Playtesting


Good news! I was on the right track with my code last week; it turns out that I didn’t need additional Tone.js functionality to achieve my goal of playing individual synth tones without looping. Luisa explained to me that Tone.js is well suited for, among many things, looping sequences, and she suggested that I just use JavaScript’s setTimeout() method instead. So simple, I love it. Thank you, Luisa!
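
As a note for future me, here’s a minimal sketch of that approach (the playTones() name and the frequency range are placeholders of mine, and it assumes Tone.js is already loaded on the page):

```javascript
// One reusable synth routed to the speakers; newer Tone.js versions use
// toDestination(), older ones used toMaster().
const synth = new Tone.Synth().toDestination();

// Play one short tone per word, staggered with plain setTimeout()
// instead of Tone.Loop or Tone.Sequence; nothing loops.
function playTones(wordCount) {
  for (let i = 0; i < wordCount; i++) {
    const freq = 200 + Math.random() * 600; // random frequency in Hz
    const dur = 0.1 + Math.random() * 0.4;  // random duration in seconds
    setTimeout(() => synth.triggerAttackRelease(freq, dur), i * 500);
  }
}
```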

I connected all the pieces and built out two chat rooms to playtest on the floor. Both rooms use the structure of my Chat Woids project from last semester’s Computational Typography class, and offer a very plain interface:

[Screenshot: the plain chat room interface]

Playtest #01
The first chat room counts the number of words (strings of characters separated by spaces) and for each word plays a tone at a random frequency for a random duration. Due to the structure of my socket server, the random tone that the typer hears is different from the random tone that another connected user hears. This was an afterthought, but I decided to go with it anyway in the interest of time. After connected participants press enter, their words disappear and the tones start playing. Their screens remain blank. While it’s still deployed, try it here!
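
Here’s a rough sketch of one way this structure produces that behavior (the 'chat message' event name is a placeholder, and playTones() is the helper from the sketch above): the server relays only the text, and each client rolls its own random frequencies when the message arrives.

```javascript
// Client side, assuming a socket.io connection:
const socket = io();

socket.on('chat message', (text) => {
  // Count "words" as runs of characters separated by spaces.
  const wordCount = text.trim().split(/\s+/).filter(Boolean).length;
  // Randomness happens locally, so every listener hears different tones.
  playTones(wordCount);
});
```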

Playtest #02
This next iteration sonifies words in the same way, except that this time typers’ words fly across the screen in a flocking formation (visuals-only demo here). While it’s still deployed, try it here!

Observations & Notes
I tested both sketches with two pairs of people on the floor. I brought two people close to one another and directed each of them to my server on their laptops. Then I told them to begin. That was it. No other instructions. Afterwards I asked, “What did you notice?” and “What did you think of?” Here are some rough notes from our follow-up conversations:

In both cases, the expectation that this was a communication device was clear.

Participants reported an initial curiosity about and focus on how words were translated (the underlying structure). Different theories were tested. Was it sentiment analysis? Were there similarities in the sounds that might lend themselves to some shared word use? Was it related to the number or kind of letters? Was it related to the number of spaces? Or the number of words?

One person noticed that the resulting sounds were different on the two computers and was thrown off by it.

In both instances there was an immediate desire to communicate with their partner; they wanted their partner to understand them. Sometimes that meant pausing and allowing time for the other person to respond.

One person reported that when they realized their words were not appearing, they felt tempted to share and reveal more than they normally would, but then held back in case there was going to be a “reveal” at some point and also out of concern that my server was recording their input.

This freedom of anonymity paired with a worry of surveillance was mentioned in both groups.

It didn’t take long for participants to realize the futility of decoding their word sounds—less than a minute, maybe? Some kept typing at a normal text-chatting pace, while others focused on making as many sounds as quickly as possible.

With the second iteration, the sound was secondary—again, for both groups. There was confusion or amusement at the upside-down, flying words. One group reported preferring Playtest #01 to #02; the other group decided that their preference would depend on the context (and our conversation shifted before we could elaborate on this).

Both groups chatted in tones for several minutes without interruption—until I eventually stepped in, out of respect for their time and our surrounding neighbors.

Takeaways
Next time let folks “chat” until they get bored. This might mean creating a designated area out of respectful earshot of others.

I realized that I (maybe a collective we) often take for granted that we’ll be (mostly) understood by another in conversation. And also that the desire to connect with others is a basic human instinct. So when presented with a communication device that obstructs this instinct, there’s an immediate push to figure out why and how to overcome the obstacle. There’s a tension in this disruption to explore here.

Also, a design question: how to address concerns that my server is recording participants’ inputs?


Week 1: Experiment 1

Or an attempt at one. After my research this week, I decided to make a simple app to test the experience of texting words into sounds. Nothing fancy. I just wanted a sketch that emits a different sound for each word submitted and, if time allowed, to integrate that into a simple chatroom using web sockets.

I quickly realized that I knew next to nothing about making and working with sound in the browser. I taught myself about synths, envelopes, frequencies, octaves, amplitudes, and more. Remembering that the music peeps on the floor like Tone.js, I found some starter sketches on The Code of Music to help me with that library—thank you, Luisa!

Though I very much enjoy technical challenges, working with musical terminology and coding specific to this area took longer than I expected.

Currently my sketch can:

  • Receive words through an input field, count those words, and print the quantity in the console. Eventually this count will be passed into a function that emits the same number of tones.

  • Until then, a mouse press triggers three tones at random frequencies, each with a different duration; they play in order and do not loop.

  • …but only on the first press; after that, I hear nothing. However, I can see evidence in the console that the function is executing properly.

  • My best guess right now is that I need to use objects and that with a class I can create a new AudioContext on every mouse press (or eventually whenever new words are submitted). I need to spend some time learning more about the Web Audio API and talking with music people. (A sketch of one possible fix follows this list.)
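
In the meantime, here’s a minimal sketch of one possible explanation (all names here are placeholders): Web Audio oscillator nodes are single-use, meaning start() can only be called once per node, so the fix may be a fresh OscillatorNode per tone on a single shared AudioContext rather than a new context on every press.

```javascript
// One shared AudioContext for the whole page.
const ctx = new (window.AudioContext || window.webkitAudioContext)();

function playTone(freq, durationSec, delaySec) {
  const osc = ctx.createOscillator(); // a fresh node each time; oscillators can't restart
  const gain = ctx.createGain();
  gain.gain.value = 0.2;              // keep the volume gentle
  osc.frequency.value = freq;
  osc.connect(gain);
  gain.connect(ctx.destination);
  const t = ctx.currentTime + delaySec;
  osc.start(t);
  osc.stop(t + durationSec);
}

document.addEventListener('mousedown', () => {
  if (ctx.state === 'suspended') ctx.resume(); // browsers suspend audio until a user gesture
  [0, 0.5, 1].forEach((delay) =>
    playTone(200 + Math.random() * 600, 0.3, delay) // three non-looping tones, in order
  );
});
```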

Most important, here are the questions that this process raised for me:

  • Um, do I really want to work with material and methods with which I’ve had very little experience so far (natural language processing and sound)?

  • How will participants know that the sounds are from their texts? They need immediate feedback from their actions to care. I found a video interview with Werthein in which he describes the development of the Samba Surdo project. He noted that people with sight wanted to see where the sound was originating.

  • It’s one thing to generate random sounds from words, but if it’s always random, then there is no meaning in the sonic translation. What if the randomly generated sound for each new word were saved, so that a participant constructs a new audible language—an abstract sound lexicon—exploring and learning as they go? (See the sketch after this list.)

  • Does each new participant have their own “animal” sounds? I suppose I can represent this by placing users in different frequency ranges for now.

  • What if participants build a new audible language together?

  • What if the language is already coded for them, and participants need to figure out their words’ sounds?

  • Right now, my program recognizes a “word” as any run of characters surrounded by spaces. What’s to keep folks from typing in gibberish? What if they use emojis?

  • Speaking to the above research, are sounds somehow related to the meaning of words? If so, how do I make that clear and understandable to participants? And again, what about gibberish submissions?

  • In general, what’s the story arc of the experience?

  • How might participants hold conversations if they are unsure how others’ texts are translated? How might I offer expressive possibilities with sound to convey intent and/or meaning?
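
To make the lexicon idea above concrete, here’s a quick sketch (a Map is just one way to do it): each new word gets a random frequency on first sighting and keeps it afterward, so repeated words start to sound consistent. Splitting on whitespace also means gibberish and emojis each count as a “word.”

```javascript
// Word → frequency lexicon: stable sounds for repeated words.
const lexicon = new Map();

function frequencyFor(word) {
  if (!lexicon.has(word)) {
    lexicon.set(word, 200 + Math.random() * 600); // first sighting: random Hz
  }
  return lexicon.get(word); // later sightings: the same tone every time
}

// Tokenize the same way the chat does: anything between spaces is a "word",
// so gibberish ("asdfj") and emoji ("🐦") get frequencies too.
'birds chirping 🐦 asdfj'.split(/\s+/).forEach((word) => {
  console.log(word, frequencyFor(word).toFixed(1), 'Hz');
});
```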

Next Steps
Finish this chatroom prototype! Visit Luisa in office hours for assistance and her advice on moving forward on a sound-related project (scheduled).