Face Finder, Epilogue

Face Finder is designed to teach students about their own cognitive biases around ethnicity and gender. Players work to solve a murder mystery by guessing the identity of five characters (Killer, Accomplice, Witness, Bystander, and Victim). Players update their guesses during each round of play and cast a final guess after the 40th round. Guesses are based on clue cards, face cards, and character cards. Only character cards are required to solve the crime, but we predicted that players would choose face cards that demonstrate an in-group bias for benevolent characters (i.e., Victim and Bystander) and an out-group bias for malevolent characters (i.e., Killer and Accomplice).

Nine high school students participated in the study, and each reported an ethnicity matching one of the five ethnic categories in the game (Caucasian, Asian/Pacific Islander, Indian/Middle Eastern, Hispanic/Latino, and Black). Players were informed that the categories were loosely defined and were asked to choose the one they identified with most. The rules of the game were described in a previous post. Before describing the data, I should mention that errors were made during data collection. Rather than guessing all five characters on each round, players guessed only one character per round. Consequently, feedback for each guess was too specific, and players identified the characters more quickly than we envisioned. To compensate, we used data from the second round of play, when players were more likely to select any ethnicity they wanted for the characters.

Data were analyzed relative to a chance baseline. Because there were five ethnic categories, a player had a 20% chance of selecting their own ethnic category for the Victim or Bystander and an 80% chance of identifying the Killer or Accomplice as out-group. Each guess was categorized as supporting our prediction or not. The number of guesses that agreed with our prediction (24 out of 36) was counted and compared to the number expected by chance. A non-significant trend was observed within each category (Figure 1), with subjects making more in-group choices for Bystander and Victim and more out-group choices for Killer and Accomplice (chi-squared, p > 0.10). When guesses were summed across categories, however, the trend was significant and greater than expected by chance (Figure 2) (chi-squared, p < 0.05). We observed ethnic biases in judgments of character in our game, and these data were further supported by qualitative reports from the subjects themselves. Players reported making judgments based on the ethnicity of the face cards even though ethnicity was irrelevant to the task.
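For readers who want to follow the arithmetic, here is a minimal sketch of the pooled goodness-of-fit test. It assumes one reading of the numbers above: 9 players × 4 character guesses = 36 guesses, each supporting the prediction by chance with probability 0.2 (in-group Victim/Bystander) or 0.8 (out-group Killer/Accomplice), which averages to 0.5 across the pool. The expected counts derived here are an illustrative reconstruction, not taken from the original analysis.

```python
# Chi-squared goodness-of-fit on the pooled guesses (illustrative sketch).
# Assumption: 36 guesses, each supporting the prediction by chance with
# p = 0.2 (Victim/Bystander in-group) or p = 0.8 (Killer/Accomplice
# out-group), giving an average chance rate of 0.5 per guess.

observed = [24, 36 - 24]                          # supporting vs. not
p_chance = (0.2 + 0.2 + 0.8 + 0.8) / 4            # 0.5 average per guess
expected = [36 * p_chance, 36 * (1 - p_chance)]   # [18.0, 18.0]

# Pearson chi-squared statistic: sum of (O - E)^2 / E over the cells.
chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# With 1 degree of freedom, the 5% critical value is 3.841.
print(chi_sq)            # 4.0
print(chi_sq > 3.841)    # True: more bias than expected by chance
```

Under these assumptions the statistic works out to 4.0, just past the 5% critical value, consistent with the p < 0.05 result reported above.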

Despite the errors we made in collecting data, our game proved to be a valid tool for exposing ethnic bias, and it may serve to educate students about cognitive biases in general. Clearly, a replication with more data is required. While the current iteration of this game focused on ethnicity, we have yet to analyze the data pertaining to gender. The game could also be adapted in the future to expose other biases. The student designer recently revealed an interest in exposing biases associated with sexual orientation and gender identity. I’m excited to see if she will develop this variation of the game on her own.

Figure 1

 

Figure 2

 

Multitasker, Epilogue

Multitasker is probably the most fun game we generated over the summer, which is not a surprise considering we hijacked the mechanics from popular board games. The game was created to teach students about the bottleneck that occurs in decision-making. Even though we are capable of processing multiple stimuli at the same time and making multiple simultaneous responses, we can only make one decision at a time. Psychologists call the delay between decisions the psychological refractory period. We predicted that students’ attitudes toward multitasking would change after playing a game that pushed their decision-making abilities to the limit.

Ten high school students participated in our experiment. They played a game in which they could perform up to four tasks simultaneously. The tasks included (1) molding figures with clay, (2) drawing pictures, (3) guessing a mystery word, and (4) performing various physical activities like hopping on one foot. Attitudes were assessed using pre- and post-game surveys about multitasking. Subjects indicated their attitudes on Likert scales ranging from 1 (highly unlikely) to 5 (highly likely). Each survey included ten questions (e.g., “I feel that it is possible to accomplish many things at once”) paired with their opposites (e.g., “I do NOT feel that it is possible to accomplish many things at once”). The twenty counterbalanced questions were used to verify the reliability of the survey. After data collection, the sign of the responses was adjusted so that all responses had the same polarity (i.e., higher ratings reflected more positive attitudes toward multitasking). Data were combined across all subjects for each question (Figure 1).* For most questions, ratings on the post-game survey decreased relative to the pre-game survey (paired t-test, p = 0.004). Data were also combined across subjects and questions (Figure 2). Mean ratings on the post-game survey were lower overall than on the pre-game survey (t-test, p < 0.0001). Error bars reflect 95% confidence intervals.
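The sign-adjustment step can be sketched in a few lines. On a 1–5 Likert scale, reverse-scoring a negatively worded item maps each rating to 6 minus the rating, so that higher always means a more positive attitude toward multitasking. The ratings below are hypothetical examples, not our actual survey data.

```python
# Reverse-score negatively worded Likert items (1-5 scale) so that higher
# ratings always mean a more positive attitude toward multitasking.
# All ratings here are hypothetical, for illustration only.

def reverse_score(rating, scale_max=5):
    """Mirror a 1..scale_max rating: 1<->5, 2<->4, 3 stays 3."""
    return scale_max + 1 - rating

positive_item = [4, 5, 3]   # "I feel that it is possible to accomplish ..."
negative_item = [2, 1, 3]   # "I do NOT feel that it is possible to ..."

adjusted = [reverse_score(r) for r in negative_item]
print(adjusted)             # [4, 5, 3]

# After adjustment, a reliable counterbalanced pair should agree --
# here the two items match exactly.
print(adjusted == positive_item)   # True
```

With both halves of each counterbalanced pair on the same polarity, the twenty questions can be pooled for the paired comparison of pre- and post-game ratings.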

Our data reinforce the notion that people believe they can multitask, but those attitudes can be changed through experience. Teenagers are particularly vulnerable to the negative consequences of divided attention because they are frequent users of mobile devices, and they typically lack the experience to keep from using those devices in risky situations. The game was the most fun to play because people were physically active. In an academic environment where students are sedentary, the more physical activity your game demands, the better.

*Data for one question (7) and its mirror opposite (17) were removed because they were ambiguous. The pattern of statistical results was similar when they were included (all p < 0.05).

Figure 1

 

Figure 2

 

Decision Maker, Epilogue

Decision Maker turned out to be a simple but fun strategy game for 2-4 players. We didn’t make it past the paper prototype because we only had six weeks. However, it wouldn’t take much work to publish it as a travel game or a digital game. The student designer was faced with difficult lessons to teach: Prospect Theory and decision-making. A scaffolded reading list was essential for getting her up to speed on Prospect Theory. She demonstrated a clear understanding of the topic, developed a good experimental paradigm, created a game mechanic that complemented the lesson, and collected all her data on time. Like many of our student projects, there were errors in data collection that we were lucky to recover from. Nevertheless, the student really proved herself by completing the data analysis, figures, and poster with minimal assistance.

In the game, players made judgments about prospects that were composed of a probability and a value (e.g., a 25% chance of winning $400). Players indicated the sure bet (e.g., $110) they would accept in lieu of the prospect. Normally, people behave irrationally when challenged with extreme probabilities or values. They tend to overestimate the utility of prospects with small probabilities and high values. Our game allowed players to practice decision-making under these unusual circumstances. We predicted that decision-making for extreme prospects would improve with practice.

We averaged the sure bets placed by the subjects and expressed them as a proportion of the prospect’s expected utility. If subjects were behaving ideally, the proportion would be close to 1; if they overestimated the utility of the prospect, the proportion would be greater than 1. We compared data between two sessions of game play. There was little difference between the sessions when players were judging probabilities within a “normal” range (6 to 99%). When players first started placing bets on extreme prospects (0.02 to 0.99%), they consistently overestimated the utility of the prospect (Figure 1). With practice, however, they were less likely to overestimate the utility of extreme prospects! Practice had a clear effect on their ability to make accurate decisions (Figure 2). Keep in mind that when subjects were playing the game, they were required to spin a spinner and watch the prospect play out in real time. For extreme probabilities, they might have to get the spinner to land between 0 and 1% several times in a row for the bet to pay off. The physical act of spinning the spinner made the prospect more visceral, which helped students appreciate how unlikely it is to win extreme prospects.
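The proportion measure can be sketched as follows. The first prospect and sure bet mirror the examples earlier in this post; the extreme prospect and its sure bet are hypothetical values chosen to illustrate overestimation, not actual player data.

```python
# Express a player's sure bet as a proportion of the prospect's expected
# value. All dollar amounts below are illustrative, not subject data.

def expected_value(probability, value):
    """Expected (monetary) value of a prospect: probability * payoff."""
    return probability * value

def utility_proportion(sure_bet, probability, value):
    """Ratio > 1 means the player overestimated the prospect's utility."""
    return sure_bet / expected_value(probability, value)

# A 25% chance of winning $400 has an expected value of $100.
# Accepting a $110 sure bet instead slightly overestimates its utility.
ratio = utility_proportion(110, 0.25, 400)
print(ratio)      # 1.1

# A hypothetical extreme prospect: a 0.5% chance of $400 (expected
# value $2). Accepting a $5 sure bet overestimates the utility 2.5x.
extreme = utility_proportion(5, 0.005, 400)
print(extreme)    # 2.5
```

Averaging these ratios within each session, as described above, shows whether practice pulls players’ bets on extreme prospects back toward the ideal proportion of 1.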

I consider this game a huge success because the student was sufficiently challenged by the material, and she overcame those challenges to produce a fun educational game with a quantifiable effect on learning outcomes.

Figure 1

 

Figure 2

 

Learning by design