Week 9: Research & Playtests 6-7

Working with Word Lists & Office Hours with Allison Parrish (Monday, April 1)
I’ve been wondering how to create dictionaries of “Goldilocks words”—words that aren’t too easy to slip into conversation on the sly (e.g. an, the, in, but) but aren’t too obscure either. One idea for determining word difficulty is to use a word’s frequency within a particular corpus.

I got this idea from noodling around with wordsapi.com (a dataset of 350,000 words, of which 18% include a zipf or frequency score), and it was later confirmed in my conversation with Clay Shirky. I started pulling random words for different parts of speech along with their zipf scores, decimal numbers (to the hundredths) between 1 and 7. My notes are incomplete here, but it seems that I didn’t trust the data for some reason, and I kept getting duplicates. In retrospect, I could have kept track of repeats, but in any event, I researched word frequencies and discovered the work of Mark Davies, a linguistics professor at Brigham Young University. His projects include Word Frequency Data, from which I retrieved a clean and robust word sampling with frequency and parts-of-speech data from the Corpus of Contemporary American English (COCA). This contains over 560 million words and is the “largest freely-available corpus of English, and the only large and balanced corpus of American English.” I trusted this data, but it listed raw word frequencies at a scale I wasn’t sure how to handle at 3am (from 21 to 6,332,195), so I queried the words against Words API to get friendlier numbers. (Allison later taught me how to calculate the zipf score myself: it’s called math.)
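For the record, the math Allison walked me through: a zipf score is just the base-10 log of a word’s frequency per billion words of corpus. A minimal sketch (the function name and the use of COCA’s ~560 million words as the corpus size are my own assumptions):

```javascript
// Zipf score = log10 of a word's frequency per billion corpus words.
// corpusSize here assumes COCA's ~560 million words.
function zipf(frequency, corpusSize) {
  const perBillion = (frequency / corpusSize) * 1e9;
  return Math.log10(perBillion);
}

// The rarest and most frequent raw counts from my COCA sample:
console.log(zipf(21, 560e6).toFixed(2));      // 1.57
console.log(zipf(6332195, 560e6).toFixed(2)); // 7.05
```

Reassuringly, plugging in my sample’s extremes lands right in the 1–7 range that Words API reports.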

In the end, I created three lists of words from different ranges of frequencies, each with an equal allotment of nouns, adjectives, verbs, and adverbs. Each word was labeled with one part of speech, which of course is problematic considering that words can act as several parts of speech depending on how they are used. I created physical word cards from these lists, with the thought that one day I’d test a point system. For example, the easiest words would be worth two points, some worth three, and the hardest ones worth four. What kind of mechanics are needed when you’re aiming to reach a total point score instead of a total word score?
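The list-building itself is simple enough to sketch. Here’s roughly how I’d do it in code—the band cutoffs, field names (`word`, `pos`, `zipf`), and function name are all hypothetical, not what I actually used for the cards:

```javascript
// Slice a word sample into three difficulty lists by zipf band,
// taking an equal allotment from each part of speech.
// The cutoffs (5 and 3.5) are placeholder guesses.
function makeLists(words, perPartOfSpeech) {
  const bands = {
    easy: w => w.zipf >= 5,
    medium: w => w.zipf >= 3.5 && w.zipf < 5,
    hard: w => w.zipf < 3.5,
  };
  const lists = {};
  for (const [name, inBand] of Object.entries(bands)) {
    lists[name] = [];
    for (const pos of ['noun', 'adjective', 'verb', 'adverb']) {
      lists[name].push(
        ...words.filter(w => w.pos === pos && inBand(w)).slice(0, perPartOfSpeech)
      );
    }
  }
  return lists;
}

const sample = [
  { word: 'cat', pos: 'noun', zipf: 5.5 },
  { word: 'run', pos: 'verb', zipf: 6.0 },
  { word: 'quixotic', pos: 'adjective', zipf: 2.0 },
];
console.log(makeLists(sample, 1).hard); // quixotic lands in the hard list
```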

Allison and I spoke about technical ways to create word lists from texts, and some of her tutorials include:

She also pointed me to an open-source word list here and books of parlor games from the 1800s! Many of the word games in those books reminded me of mid-20th-century games that I found on boardgamegeek.com during some of my early research, many of which involve some form of deciphering hidden words from rhyming clues or charades play.

I also ran two very different playtests this week! Here are the highlights:

Playtest 6 • Tuesday, April 2 (ITP Quick & Dirty Show)
The Quick & Dirty Show provided an opportunity to test out a few things:

  1. Parts of speech: are there some that are easier to detect / sneak than others?

  2. A new introduction script with more precise language and organization

  3. A new mechanic when guessing someone else’s word: if you’re right you take the card-in-hand of the other person and add it to your pile; if you’re wrong, they take your current card-in-hand into their stack of points.

  4. Slogans and login design: which resonates more?

In total, seven different groups of people played for nearly 2.5 hours straight. They included a mix of current ITP students (1st-years, 2nd-years, and residents), friends of current students, and prospective students. I personally knew about half of the people who played. Of course, the context of the event is to test work, so folks who sat at the table did so ready to play. It was a blast! The introduction and the word-guessing mechanics felt right—far fewer questions overall compared to past playtest sessions. International students suggested it would be a fun way to practice English (this is a recurring theme). My unscientific assessment was that adjectives and adverbs are too easy. “Sincere Competitive Chitchat” seems to be the winner.

Playtest 7 • Tuesday, April 3 (ITP Feedback Collective)
Early on in this process, Greg Trefry suggested that I play the game in a variety of groups, even with folks who don’t really want to play. My sense is that I checked this box by forcing the game into the context of the only formal crit group at ITP. The atmosphere was completely different from the festiveness of the night before. In comparison it was eerily quiet, and only three people played across wide classroom tables while others looked on—which added an off-putting performance vibe into the mix. Attendees included one professor, one resident, four 1st-year students, and myself. It was useful, however, because it helped me see a recurring theme when people who do not know each other well, and who are not seeking to play the game, are wrangled into it: the conversations invariably lag, and there’s feedback to include a timer to pressure people into speaking. I also tested something new and presented a choice of themed words: the most-searched Shakespeare keywords (source), keywords from A Brief History of Time (the Foreword to Chapter 6) (source), and keywords from the two most recent State of the Union Addresses (source). Players chose Stephen Hawking. This throws an extra layer of meta into the game: 1) keeping up with the conversation, 2) planning how to insert your word, and now 3) considering the context of the words’ theme to help you catch them.

Going Forward, some notes for the next three weeks:

  • Pick a target audience: This game has a different feel and might need different mechanics for different contexts (an ice-breaker for people who just met, a parlor game for friends and family, or language-fluency practice with vocab words)—an inkling that I’ve had since the midterm presentation. I probably need to focus on just one group for the remainder of this semester: let’s do friends!

  • Schedule events: I need to plan rendezvous with my friends.

  • Documentation: …and start filming said rendezvous. Since the final thesis assessment is a presentation, it will be imperative that I show the game in action in order to explain it well to the audience. I’ve ordered a shotgun mic for my smartphone and additional filming accessories for a lightweight yet quality video recording rig.

  • A digital version? Ideally, I’d like to code a digital version to test out a variety of dictionaries (it takes so much time to make word cards), so I’ve started coding a possibility. I feel good about the mechanics of the current paper prototype, but I’m not sure exactly how to translate them into a web app. This will be the focus of the upcoming week.

Explaining the game to Seb at the Quick & Dirty Show


Week 8: Research & Playtests 4-5


This week I used my new Playtest Schedule to prep and evaluate feedback and observations for two different playtesting sessions. Now that I have this planning document, I won’t use my blog as a dumping ground for every little thing. Instead, I’ll post weekly progress updates and highlights.

Ahead of my playtests, I chatted with Clay Shirky about the game. A few of my notes from our conversation:

  • It’s a good sign if players are making up their own rules during gameplay.

  • Aim to present a viable demo for the final thesis presentation.

  • Check out the improv storytelling game, The Extraordinary Adventures of Baron Munchausen, which according to boardgamegeek.com, “requires players to sit around telling fantastic (but completely true!) stories” with opportunities for other players to interject and botch the tales. There are specific ways to interrupt and choices for the storytellers to make, but in the end, the tellers of the best stories win. Here’s a video of gameplay.

  • Seek out players of card games that involve bluffing, like poker; he suggested a poker player to contact. (This was affirming, because it was on my original to-do list!)

  • Consider building my own custom dictionaries instead of making in-game API calls.

    • If I create thematic lists—e.g. from a movie or television show—consider grabbing language used on fan sites.

    • Consider crafting lists by reading level.

  • Consider assigning points to words, e.g. using the zipf frequency score as the number of points for that word. (More on zipf from the words API that I’ve started using, conveniently called wordsapi.com.)

  • Regarding the rules: if I make a wrong guess, is it better that I lose points or help another player? (What about both?)
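Riffing on Clay’s zipf-as-points idea, one possible scoring rule—inverted so that rarer (lower-zipf) words earn more, matching my two/three/four-point card idea. The band cutoffs and values here are my own guesses, not anything we settled on:

```javascript
// Map a word's zipf score to card points: common words score low,
// obscure words score high. Cutoffs (5 and 3.5) are placeholders.
function pointsFor(zipfScore) {
  if (zipfScore >= 5) return 2;   // easy, common word
  if (zipfScore >= 3.5) return 3; // medium
  return 4;                       // hard, obscure word
}

console.log(pointsFor(6.1)); // 2
console.log(pointsFor(2.0)); // 4
```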

Playtest 4 • Wednesday, Mar 27
First time playtesting in earnest with ITP-mates, all students and one professor, on the floor. Three rounds with the paper prototype deck and all with players who identify as men. Second rounds were happily played by each group, and I got better at introducing the rules as we went which not surprisingly led to less confusion during the games. Solidified my understanding of how to gain or lose points when calling Gotcha!, but in general, I need to refine my explanation of this to new players. One peer introduced me to the Kickstarter campaign for Throw Throw Burrito (yes! I’m a happy backer now), which reinforced the goal of showing over telling players (and audiences) how to play the game. Notable feedback from each group:

  • Group 1: Interrogation version: what if each player has a chance to spin a tale using as many “secret” words as possible until they get caught by the other players? (This is similar to one of the games I researched a while back, although the name escapes me right now.)

  • Group 2: Conversations ran flat, but they decided as a group to choose new conversation starters. Requested a timer and also the option to skip words.

  • Group 3: Also chose new topics. Proposed a phone in the center, facing up, displaying the question and a timer, with the ability to select the next topic.

Playtest 5 • Thursday, Mar 28
High school students, sometimes mixed with adults, at the Game Center with the same paper deck. Everyone enjoyed themselves, played second rounds of their own volition, made up rules to suit their gameplay, and provided useful feedback, including playing into previously unconsidered situations. Still need to refine the rules for Gotcha! calls…and also make people say Gotcha! during the game (both groups suggested this).

Going Forward, some notes for next time:

  • I realized that I need to be more precise with my language when introducing the game—not only in my word choice but also in the organization.

  • The rules around guessing and the resulting consequences seem confusing: next time, guessers will take a word from another player if they are right or give their word to them if they are wrong.

  • I’m going to shuffle the question cards instead of breaking them out into distinct piles. What happens if a group is faced with less choice when drawing a topic?

  • Based on this week’s observations, a new rule: only one Gotcha! call per person per speaking turn. If two or more people call Gotcha! on the same person at the same time, the accused gets to decide who to answer.

  • Also, make people call out Gotcha! when making a guess on a word.

Week 8: Node-RED for IoT

Use Node-RED to work with IoT in the browser with very little code! For this week’s exercise, I created an HTTP endpoint that displays the red, green, and blue readings from my sensor.

With Node-RED’s drag-and-drop nodes, I subscribed to my red, green, and blue MQTT topics, converted those readings to integers, joined them into an array, and printed that to a log file. Then, I implemented a GET request to display the array on an HTML page.

Ideally I’d figure out how to incorporate that data into CSS code to update the background color of the page. For now, I’ve posted the current setup below:
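A rough sketch of that next step. In Node-RED this logic would live in a function node with the joined array arriving on `msg.payload`; here it’s written as a standalone function so it can run on its own (the function name is mine):

```javascript
// Turn the joined [r, g, b] sensor array into a CSS rule that the
// HTML page could use to set its background color.
function toCss(rgb) {
  const [r, g, b] = rgb;
  return `body { background-color: rgb(${r}, ${g}, ${b}); }`;
}

console.log(toCss([120, 64, 255]));
// body { background-color: rgb(120, 64, 255); }
```

For now, I’ve posted the current setup below: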
