Week 2: Daily Sketches

Day 1: A song that represents the mood or vibe of my thesis world:
Instinct tells me that birdsong is too obvious and overdone, but compared to avian-inspired classical music or hours of forest and jungle soundscapes, I am intrigued by the acoustic dialogue in this recording.

Day 2: What are the worst possible ways to address my thesis statement or question?

Everyone wears voice changer masks

A precursor for any kind of animal-watching event (btw, this woman is amazing!)

A designated area in which a person enters, submits a form of communication for translation, and because feedback is obscure, confusing, or irrelevant, they leave immediately

Day 3: What are five images that represent the world or mood of your thesis?

Sources: Image 1, Image 2, Image 3, Image 4, Image 5

Some sounds to add here, as well:
Wind Chimes With Light Wind - Ambient Soundscape
I heard there was a secret chord - 360 video

I also asked people to share some of their favorite sounds with me today. I heard: birds chirping, trains revving, skateboards rolling, children laughing, bubbles popping, cats meowing, underwater gurgling… and cooking shows.

Keywords/ideas from today’s sketch: outside, nature, generative design, places for people to come together, unexpected but welcome, play

Day 4: What are five questions that I’ll investigate with my thesis project?

  1. Can you create a shared new language on the fly?

  2. What is lost in translation? What is gained?

  3. From where does meaning arise—in the understanding or in the effort of trying?

  4. What is the least number of steps necessary to learn to express yourself with a new tool?

  5. How do you create an environment that invites curiosity and welcomes sustained, playful engagement?

Day 5: What are three possible venues for this work to be shown? Why do I want to be part of these institutions or venues?

  1. Online - so that anyone, regardless of their physical location, can participate; also it makes implementation in the locations below much easier.

  2. Public squares or parks - because this could be the best unexpected happening on your routine commute home in the spring. (Hey now, I can request a special event permit for a city park!)

  3. Traditional gallery spaces - because it's even more fun to be raucous in places usually reserved for quiet contemplation.

Day 6: Three experts or types of people (neither fictional nor deceased) to speak to about my thesis.
Hmmm…it feels like I’ve hit an impasse in my thinking about this project. I can’t provide a list of possible experts without a concrete direction. I’ll return here when that resolves. 🤞

Week 2: Experiment 1 Playtesting


Good news! I was on the right track with my code last week; it turns out that I didn’t need additional Tone.js functionality to achieve my goal of playing individual synth tones without looping. Luisa explained that, among many other things, Tone.js is well suited to looping sequences, and she suggested that I just use JavaScript’s setTimeout() method instead. So simple, I love it. Thank you, Luisa!
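Here’s a minimal sketch of that approach (assuming Tone.js is already loaded on the page; the helper function and example values are just for illustration, not my actual playtest code):

```javascript
// Play one-off synth notes with plain setTimeout() instead of Tone.Loop
// or Tone.Sequence. (In newer versions of Tone.js, .toMaster() is
// called .toDestination().)
const synth = new Tone.Synth().toMaster();

function playToneLater(frequencyHz, durationSeconds, delayMs) {
  setTimeout(() => {
    // triggerAttackRelease plays a single note and stops on its own
    synth.triggerAttackRelease(frequencyHz, durationSeconds);
  }, delayMs);
}

// e.g. a 440 Hz tone for half a second, one second from now
playToneLater(440, 0.5, 1000);
```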

I connected all the pieces and built out two chat rooms to playtest on the floor. Both rooms use the structure of my Chat Woids project from last semester’s Computational Typography class, and offer a very plain interface:

[Screenshot: the chat room’s plain interface]

Playtest #01
The first chat room counts the number of words (or characters separated by spaces) and, for each word, plays a tone at a random frequency for a random duration. Due to the structure of my socket server, the random tone that the typer hears is different from the random tone that another connected user hears. This was an afterthought, but I decided to go with it anyway in the interest of time. After connected participants press enter, their words disappear and the tones start playing. Their screens remain blank. While it’s still deployed, try it here!
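In effect, the socket server just relays the raw text, and each connected browser generates its own random frequency and duration when a message arrives, which is why the typer and a listener hear different sounds. Roughly, the client-side handler looks like this (the event name, ranges, and timing are placeholders, and it assumes the socket connection and the synth from the sketch above):

```javascript
// Each browser rolls its own random tones for the same broadcast words,
// so no two listeners hear the same thing.
socket.on('chat message', (text) => {
  const words = text.trim().split(/\s+/); // count words by splitting on spaces
  words.forEach((word, i) => {
    const freq = 200 + Math.random() * 800; // random frequency in Hz
    const dur = 0.2 + Math.random() * 0.8;  // random duration in seconds
    setTimeout(() => synth.triggerAttackRelease(freq, dur), i * 300);
  });
});
```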

Playtest #02
This next iteration sonifies words in the same way, except that this time typers’ words fly across the screen in a flocking formation (visuals-only demo here). While it’s still deployed, try it here!

Observations & Notes
I tested both sketches with two pairs of people on the floor. I brought two people close to one another and directed each of them to my server on their laptops. Then I told them to begin. That was it. No other instructions. Afterwards I asked, “What did you notice?” and “What did you think of?” Here are some rough notes from our follow-up conversations:

In both cases the expectation of a communication device was clear.

People reported an initial curiosity about, and focus on, how words were translated (the underlying structure). Different theories were tested. Was it sentiment analysis? Were there similarities in the sounds that might lend themselves to some shared word use? Was it related to the number or kind of letters? Was it related to the number of spaces? Or the number of words?

One person noticed that the resulting sounds were different on the different computers and was thrown off by it.

In both instances there was an immediate desire to communicate with their partner; they wanted their partner to understand them. Sometimes that meant pausing and allowing time for the other person to respond.

One person reported that when they realized their words were not appearing, they felt tempted to share and reveal more than they normally would, but then held back in case there was going to be a “reveal” at some point and also out of concern that my server was recording their input.

This freedom of anonymity paired with a worry of surveillance was mentioned in both groups.

It didn’t take long for participants to realize the futility of decoding their word sounds (less than a minute, maybe?). Some kept typing at a normal text-chatting pace, while others focused on making as many sounds as quickly as they could.

With the second iteration, the sound was secondary, again for both groups. There was confusion about, or amusement at, the upside-down, flying words. One group reported preferring Playtest #01 to #02; the other group decided that their preference would depend on the context (and our conversation shifted before we could elaborate on this).

Both groups chatted in tones for several minutes without interruption, until I eventually cut things off out of respect for their time and our surrounding neighbors.

Takeaways
Next time let folks “chat” until they get bored. This might mean creating a designated area out of respectful earshot of others.

I realized that I (maybe a collective we) often take for granted that we’ll be (mostly) understood by another in conversation. And also that the desire to connect with others is a basic human instinct. So when presented with a communication device that obstructs this instinct, there’s an immediate push to figure out why and how to overcome the obstacle. There’s a tension in this disruption that’s worth exploring.

Also, a design question: how to address concerns that my server is recording participants’ inputs?


Week 2: Sending Sensor Data Using the MQTT Protocol

I’m excited to follow up Understanding Networks with Device to Database and learn more about the world of IoT by building my own connected devices to capture, send, save, process, and represent data. We’ve been learning about MQTT, a lightweight messaging protocol (which can run over TLS for security) designed for low-power devices with little RAM, like my Arduino microcontrollers. With MQTT, devices do not connect to one another directly. Instead, data is relayed through a central server known as a broker. Devices send, or publish, data to a topic on the broker and can also subscribe to topics to retrieve information. Connected devices or web apps can then use the data on the broker in a variety of ways, such as (very simply) representing it graphically or using it to remotely trigger changes in physical outputs (such as toggling lights off and on).

As long as devices can publish and subscribe, they do not have to be compatible or in sync with one another, nor do they need to know other devices’ network addresses or ports. All of this flexibility makes scaling IoT systems relatively easy.
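The device code for this class runs on the MKR1000, but the publish/subscribe pattern looks roughly the same from any client. Here’s a minimal sketch of the idea using the MQTT.js client in Node (the broker URL and topic name below are placeholders, not the class server):

```javascript
// Publish a reading to a topic and listen for anything published there.
const mqtt = require('mqtt');
const client = mqtt.connect('mqtt://broker.example.com');

client.on('connect', () => {
  client.subscribe('conndev/color');             // listen to a topic
  client.publish('conndev/color', '126,87,200'); // publish a reading to it
});

client.on('message', (topic, message) => {
  // message arrives as a Buffer
  console.log(`${topic}: ${message.toString()}`);
});
```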

This week we learned how to send data from a temperature and humidity sensor (DHT22) connected to an MKR1000 to a broker that Don set up for us on a server created for the class. Building off our exercises in class, I added a TCS34725 color sensor to my setup. I used this sensor as part of my physical computing midterm project, a Color Sound Pen, which I wrote about here and here. With this sensor’s RGB and clear light sensing elements, I can pick up red, green, and blue as individual values or combine those into one hexadecimal color, and I can also note color temperature or lux (“the perceived brightness of visible light,” as noted in the paper linked in my blog post). The sensor performs best when directly touching the object it’s sensing, but I was curious about the ambient values it measures when sitting uncovered in a room. If I can send those values to a server and represent them digitally, is it possible to create a durational color portrait of a remote space?

Materials
1 Arduino MKR1000
1 RGB Color Sensor with IR filter and White LED - TCS34725
1 Digital Temperature and Humidity Sensor - DHT22/AM2302
1 LED
1 220 Ohm Resistor
1 Breadboard
22 & 24-Gauge Solid-Core Wires

Part 1: Adding the TCS34725 Color Sensor
Thank goodness for my blog! It was quick to review the sensor and remember that Adafruit provides two sketches: one that senses color temperature and lux (tcs34725.ino), and one that reports RGB color information converted for human perception (colorview.ino). Right now, I’m interested in the RGB color values, so I modified and incorporated that code into the TemperatureHumidity sketch.

Here’s a screenshot of my Arduino serial monitor showing the new sensor data:

[Screenshot: Arduino serial monitor showing the new color sensor readings]

Part 2: Publishing New Color Data to an MQTT Topic
Using the subscribe example from class, I logged in to see the readout of the new sensor data from the broker. Here’s a screenshot:

[Screenshot: readout of the new sensor data published to the broker]

A different way to visualize the color sensor data would be to display a swatch of color on the screen and perhaps chart the individual corresponding RGB values, which I’ll do if I have time!
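A rough sketch of how that swatch might work in the browser, assuming MQTT.js is loaded on the page, the broker exposes a WebSocket port, and the device publishes plain “r,g,b” text (the URL, topic, and payload format are all placeholders):

```javascript
// Subscribe to the color topic and paint a <div id="swatch"> with the
// latest reading, converted to a hex color.
const client = mqtt.connect('wss://broker.example.com:8081');

client.on('connect', () => client.subscribe('conndev/color'));

client.on('message', (topic, payload) => {
  const [r, g, b] = payload.toString().split(',').map(Number);
  const hex = '#' + [r, g, b]
    .map((v) => Math.round(v).toString(16).padStart(2, '0'))
    .join('');
  document.getElementById('swatch').style.backgroundColor = hex;
});
```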

Code on GitHub