Collective Play

Week 9: Human-Only Playtesting (Part 1)

I'm inclined to incorporate imagery or video of human faces and/or figures into a final project for Collective Play, although at this point I have no idea as to how or for what outcome. However, this kind of imagery is meaningful (especially if it's of yourself), recognizable, and full of expressive potential. So for this week's preparatory exercise to playtest a human-only interaction, I attempted to move in this general direction and prepared an activity requiring participants to intentionally observe one another and move their bodies the entire time.

Inspiration for my game came from Augusto Boal's Games for Actors and Non-Actors, the first three parts of his mirrors sequence in particular. I ran the event on four separate occasions with different pairs of peers. Each time, I asked partners to start by facing one another and explained that one person would be the mirror image of the other, imitating with as much accuracy as possible any facial expressions or movements of their partner, all without talking. Following Boal, I suggested that anyone looking in on the activity should not be able to tell who was leading or following. The goal was not to trip each other up, but to see if they could move in sync. That was part 1. In part 2, partners swapped roles such that the mirrors were now leading. Finally, in part 3, I instructed participants to perform both roles simultaneously: they were free to move in any which way, but they were also to follow their partner. (This last round reminded me of the speaking-one-line-at-the-same-time activity in class a few weeks ago.) After each session, I asked my peers to jot down their feelings and what they noticed during the different stages.

With this event, I was curious to note how long it took for folks to express boredom, whether they maintained eye contact the entire time, and, by leaving it open-ended, what choreography they discovered together, especially during the third round when the leader/follower roles were ambiguous. I hoped to learn more about the emotional dynamics at play from the feedback I collected at the end.

Here's what I found: A desire to move on to the next stage, which I interpreted as an expression of boredom, was always expressed by the leader somewhere between the 45-second and 2-minute mark. Overall, my unscientific takeaway from observing and reading the comments was that it was easier for the mirrors to follow along than to deal with the pressure of continually coming up with new moves (some expressed anxiety about this). But having each person take turns in the roles was useful practice for the final syncing stage, which was perhaps the most challenging (confusing to know who to follow) but the most rewarding, even if they repeated movements from the prior rounds (they could fall back on a previously-created and shared vocabulary). Though I expected partners to continually face each other and stay planted in their same positions throughout, in two of the sessions I was surprised to see bodies turn and start moving in all directions through the space. Because of this, eye contact broke (I expected maintaining it to be a must for syncing success), and I noticed folks intentionally trying to make it hard for the other person to follow (which was of course hilarious for all of us). Despite any confusion expressed during or after the game, there were generally smiles, laughter, and a good time shared by all. A HUGE thank you to all who playtested!

A few keywords:
Leaders - happy, excited, uncertain, nervous, manipulative, bored
Followers - engaged, fun, relaxed, confused (sometimes about flipping the movements)
Simultaneous - uncertain, confused, rewarded

For next time, perhaps I'll give players a specific goal or several tasks to accomplish. I'd also like to play around with the timing and pace. What impact might giving a time limit to achieve a particular outcome have on the gameplay?

Week 6: Taking Turns in Creature Consequences

After focusing on paired activities last week, we turned to queuing for this week's assignment. Maria and I teamed up with James and Kai, and since all of us created partner-based painting/drawing apps last week, we found inspiration in the drawing adaptation of the surrealists' game, Exquisite Corpse. In this game, also known as Picture Consequences, players take turns drawing a portion of a person or a fantastic creature on the same piece of paper, without seeing each other's sections until the full drawing is revealed at the very end. We looked to these paper-based examples online here and here, as well as Xavier Barrade's awesome online version, as additional models.

First, the four of us played in a traditional analog way (with a regular 8.5" by 11" piece of paper), folding the page into four sections--one for the head, then the torso, next the waist to knees, and finally, the knees to feet. I'm kicking myself for not documenting this version because seeing our composite creature at the very end was well worth the wait! We had fun, and it motivated us to pursue a digital rendering of the game which we called Creature Consequences.

Using a classroom's whiteboard walls, we sketched out what the screen might look like and outlined the code behavior for each person's turn. We envisioned a screen divided into four even rectangles, each the width of the window, from top to bottom. On each player's turn, their rectangle would temporarily disappear to expose the canvas beneath, onto which they could draw their assigned piece of the figure. Their marks would be constrained to that portion of the canvas only. Pressing the Return key ends their turn and passes the next section of the drawing to the next player in the queue. The canvas is only activated for players with active turns; other participants may not leave marks on the screen while they wait. When the last player completes the feet area, pressing Return hides all rectangles to display the full image to everyone on their own screens.
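The turn-and-reveal behavior we whiteboarded can be sketched as a small bit of state logic. This is an illustrative reconstruction, not our actual code; the function and section names are made up.

```javascript
// Four stacked sections, one per player in the queue.
const SECTIONS = ["head", "torso", "waist-to-knees", "knees-to-feet"];

function makeGame() {
  return { turn: 0, done: false };
}

// Which cover rectangles should be visible in the current state?
// Only the active player's cover is hidden; the final Return reveals all.
function coverVisibility(game) {
  if (game.done) return SECTIONS.map(() => false); // full reveal
  return SECTIONS.map((_, i) => i !== game.turn);  // hide active section only
}

// Called when the active player presses Return.
function advanceTurn(game) {
  if (game.turn === SECTIONS.length - 1) {
    game.done = true; // last player finished the feet
  } else {
    game.turn += 1;   // pass to the next player in the queue
  }
  return game;
}
```

In the real app this state would live on the server, with each Return keypress emitted to it and the resulting visibility broadcast back to every client.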

We adapted and built on top of Mimi's human auto-complete example for the server-side queuing, and methodically worked through each item in our list (mentioned above), playtesting as we went to clarify functionality and root out any bugs. To start, we figured out how to draw rectangles as HTML elements over our P5 canvas. Then, we figured out how to link their visibility to the queue position of each player. Next, we implemented the drawing feature, being sure to send that information to the server to broadcast to all input screens (normalizing the location of the marks by dividing by device width on the emit, and multiplying the incoming data by the receiving device's width). After solving how to constrain players to their specific sections of the screen, we played another round and, after creating a completely misaligned character, decided to extend that range into the next player's portion so each person would know exactly where to continue the drawing.
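The normalization step above can be sketched as a pair of helpers: divide by the sender's canvas size before the emit, multiply by the receiver's size on arrival, so marks land in the same relative spot on differently sized screens. (Function names are illustrative, and I'm assuming here that y is normalized by height the same way x is by width.)

```javascript
// Called before emitting a mark to the server.
function normalizePoint(x, y, w, h) {
  return { x: x / w, y: y / h }; // values in 0..1, device-independent
}

// Called on each client when a broadcast mark arrives.
function denormalizePoint(pt, w, h) {
  return { x: pt.x * w, y: pt.y * h }; // back to local pixel coordinates
}
```

For example, a mark at (400, 300) on an 800x600 window arrives at (600, 450) on a 1200x900 one: the same relative position, so the drawings stay aligned across devices.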

Here are a few screenshots from our initial meeting, as well as some hastily-drawn characters from our play testing:

In our opening conversation as a group, we reviewed our experiences of waiting during our in-class games. Either we were engaged in how the game was playing out (or anxious that our turn was approaching in Zip! Zap! Zoop!) or completely tuned out. Kai suggested that in our online game we include the option for waiting participants to interfere with the drawing player's "pen," such as changing the hue, stroke, and opacity (alpha) values. As a bit of a last-minute addition, all the slider values are emitted and broadcast from the server in one bundle. So all three waiting players are not only blindly submitting values for the visual output, but they are also entangled with one another for control of their own slider position. Needless to say, at this stage in the project's development, the sliders are a bit jumpy.
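A rough sketch of the "one bundle" idea, and of why the sliders get jumpy: every slider change sends the full pen settings, and each client simply overwrites its local pen with whatever bundle arrived last, so three waiting players end up fighting over the same state. This is an illustrative reconstruction, not our actual implementation.

```javascript
// The drawing player's pen settings, shared across all clients.
function makePen() {
  return { hue: 0, strokeWeight: 2, alpha: 255 };
}

// Applied on every client when a bundle is broadcast from the server.
// Last write wins: a second player's bundle clobbers the first's.
function applyBundle(pen, bundle) {
  return Object.assign({}, pen, bundle);
}
```

A smoother version might emit only the one value that changed, or attribute each slider to a single player, but the all-in-one bundle was the quickest thing to wire up at the end.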

Play and remix on Glitch



Week 5: Painting with a Partner


Riffing off of the Ouija class example and our group's previous assignment, Maria and I built a collaborative painting app for our paired activity. We were inspired to create an opportunity for a fluid flow of collective contributions, and using our tool, two partners work together to control one brush and one palette. The brush itself is adapted from my pixel painting app. There's no set goal except to play in the screen sandbox with digital paint. (It doesn’t always have to be competitive to be fun, right?) 

We thought about expression for the input and also for the output. Ideally, we hoped to increase the range for both compared to our experience with the Ouija game. First the output. Like that example, the position of our "brush" renders on the output screen according to the average position of both participants on their home input windows. But unlike the example, we removed the stakes of clearing the screen if partners move too far away from one another. Instead, we incorporated distance to one another as a creative decision: the closer the partners the larger the size of the brush, the farther away the smaller it gets. 

In this scenario thus far, both players contribute the same type of input (their position and distance to one another) for a combined output. We then decided to differentiate the inputs through the incorporation of paint color: one painter controls the hue, while the other controls the level of brightness, greatly expanding the palette options. Using desktops/laptops, clicking and dragging the mouse across the screen from left to right changes these values. Paint does not start to flow until both players have clicked their mice/trackpads.
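The split color controls amount to two simple mappings of horizontal drag position, one per partner. The ranges below follow the usual HSB conventions (hue 0-360, brightness 0-100); the function names are illustrative rather than from our actual code.

```javascript
// Partner 1: drag position across the window width maps to hue.
function dragToHue(mouseX, width) {
  return (mouseX / width) * 360;
}

// Partner 2: the same gesture maps to brightness instead.
function dragToBrightness(mouseX, width) {
  return (mouseX / width) * 100;
}
```

Because each partner steers a different color dimension, neither can pick a paint color alone, which keeps the palette a genuinely joint decision.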

Finally, we incorporated accelerometer data from our mobile devices to paint our combined brush strokes in the air. Swiping back and forth across the screen (along the X axis) adjusts hue and brightness.

If just using laptops/desktops, you can play here and/or remix on Glitch. 

Navigate here for the mobile version.

Code also on GitHub.