Face Finder and Teen Angst, Part 2

While the initial creation of these games was difficult, we were lucky to avoid significant revisions in our second round of prototyping. As we recovered from the initial struggles of conceptual development, I learned a valuable lesson about teamwork. But first, let me describe what went wrong.

Face Finder had a complicated design that was difficult for the student designer to keep track of. Consequently, errors were made during data collection. According to our original design, players were supposed to use clue cards, face cards, and character cards to guess the identity of five characters in a murder mystery. Guessing was supposed to take place during each round of play, and players could update their current guess using cards that already lay on the table. However, the student experimenter only allowed players to guess the identity of one of the five characters at a time. As a result, players received very specific feedback that enabled them to correctly guess the hidden characters after only a few rounds of play. The specificity of the feedback did not leave much room for an ethnic bias to emerge. Fortunately, we found a solution to this problem, which I will describe in the final post about this game.

For Teen Angst, the initial delays associated with finding a topic and designing the game left very little time to actually make the game. The student created content that was appropriate, but it took her a long time to format her questions and put them into PowerPoint slides. She pulled at least one all-nighter to my knowledge. Game design always takes twice as long as you think, even if you take this rule into account. Students will always fail to appreciate this statement, even if it’s scrawled on the chalkboard in blood. Because the student was rushing to finish her game, critical errors were made during data collection. We had no time for playtesting, and data collection was conducted using the first playable version of the game. There were serious omissions on the data sheet used to collect player responses. These omissions made it difficult to determine how player responses corresponded to the questions in the game. Again, we got lucky and found a solution to this problem that allowed us to analyze the data. You might think all these problems could have been avoided if the student had worked harder or if I had pushed her more. However, she was working to the best of her ability, and I was constantly urging her forward. So, what went wrong?

In this case, it was a mistake to resist the natural flow of energy in the room. These two students were friends, and they were constantly interacting with each other during the development process. Rather than separating them in an authoritarian attempt to control the situation, it would have been better to let the students work together on one project. I predict that they would have had more fun, the project wouldn’t have been delayed, and fewer errors would have been made.

In the past, I’ve had success when everyone in the lab works on the same project. The quality of the final product was better, but some of the students didn’t participate as much as they could have. Furthermore, there are always students who are happy to reside in the background and let others take the lead. This time, I assigned students to individual projects to help them reach their maximum potential. I hoped that each student would contribute to every project in the lab during the group brainstorming sessions, and I hoped each student would learn independence by managing their own project. Unfortunately, the students ended up focusing too much on their own projects and collaboration was limited. As a result, errors were made because there was no quality control built into the design process. In a typical lab, there is a chain of command that ensures quality control. Principal Investigators manage postdoctoral fellows, postdoctoral fellows manage graduate students, and graduate students manage undergrads. However, when a PI is managing a group of undergrads, gaps in the normal chain of command result in too many details for the PI to keep track of. I now believe that pairing students up on a project is a good idea to minimize errors, build camaraderie, and help students achieve milestones during development.

Multitasker, Part 2

Multitasker is a board game designed to teach students about the bottleneck that occurs in decision making by allowing them to experience the process firsthand. Players are challenged with up to four physical tasks that must be accomplished simultaneously within a time limit. Initial playtesting showed that the game was fun, and students were getting acquainted with the idea that multitasking is more challenging than they had originally appreciated.

Of all the games in development during this summer research project, Multitasker has posed the fewest design problems. The core game mechanic is closely wedded to the lesson being learned, which is critical in educational game design. There were only a few adjustments that needed to be made during the second iteration of prototyping, and we are on our way toward collecting pilot data. The student designer developed a new board that served as the centerpiece for the game. Apart from providing players with feedback on their progress in the game, the board was a critical addition because it was the means of controlling flow. Previously, we were using the roll of a four-sided die to determine how many tasks a player would perform simultaneously. The problem was that a player could roll a 4 on their first round of play and be immediately challenged with four tasks. This method was a rather poor way of controlling flow. Now, the player rolls a die to move between four stations on the board. Players move forward or backward on the board depending on whether they succeed or fail on a given task, respectively. Essentially, the game incorporates a 1-up/1-down psychophysical staircase. An additional task is added when a player makes it to the next station. This way, players have a chance to practice several tasks individually before additional tasks are added. Each player starts from their own station, and the goal of the game is to make it through all four stations. The number of steps between stations is graded so that players have a lot of time to practice individual tasks, but they only have a few trials where they have to do all four tasks at once. Apart from the board, we added an audible egg timer to ensure that players could hear when time was running out. And the student designer found travel-sized MagnaDoodle boards that made drawing with one hand possible.
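The board's 1-up/1-down logic can be sketched in a few lines of code. This is a rough illustration only: the station positions, step sizes, and success checks below are my own assumptions, not the values on the actual board.

```python
# Minimal sketch of a 1-up/1-down staircase controlling task load.
# Station positions and step counts are illustrative assumptions.

def update_position(position, steps, succeeded):
    """Move forward on success, backward on failure (1-up/1-down)."""
    return position + steps if succeeded else max(0, position - steps)

def tasks_at(position, station_bounds):
    """Simultaneous tasks = 1 + number of stations already passed."""
    return 1 + sum(1 for bound in station_bounds if position >= bound)

# Graded spacing: lots of practice early, few four-task trials late.
STATIONS = [10, 18, 24]  # hypothetical positions where a task is added

pos = 0
pos = update_position(pos, steps=3, succeeded=True)   # advance to 3
print(tasks_at(pos, STATIONS))                        # still one task
pos = update_position(pos, steps=4, succeeded=False)  # back to the start
```

The widening-then-narrowing gaps between stations are what give players many single-task trials but only a handful of four-task trials.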

Finally, the student designer had to develop appropriate methods for assessing the efficacy of her game. Before-and-after surveys of attitudes toward multitasking were created. We believe these surveys are a valid method of assessment because we are mostly concerned with the students’ attitudes about multitasking rather than their ability to multitask, even though in-game assessment of performance is a major advantage of games. Eventually, experimental and control groups will be compared on the survey. We predict that students who play the game will have a more accurate view of multitasking compared to students who learn about multitasking via written text. Survey questions prompted subjects about the likelihood of completing certain tasks simultaneously. The reliability of the surveys will be evaluated using the split-half method, and player attitudes were recorded using Likert scales.

The primary lesson to be learned from our experience with Multitasker is to think small. If you manage to find a simple lesson to teach, it is far easier to find an appropriate game mechanic to teach that lesson.

Decision Maker, Part 2

There is a huge sigh of relief that occurs after a team settles on a design and makes their way toward playtesting. While playtesting might reveal this to be a false sense of security, it’s nice to revel in the moment while it lasts. With a few adjustments, Decision Maker is now in development. Keep in mind that it is possible to build and prototype a board game in a fraction of the time it would take to build a digital game, and thus I’ll be talking about construction and prototyping here.

Decision Maker is designed to train students on making better decisions under uncertain conditions. Players are offered a prospect (e.g., 25% chance of winning $400) on a playing card, and they must decide what sure bet (e.g., $105) they would take in lieu of the prospect. The winner of each round of play is the player who places a sure bet above the utility of the prospect (i.e., $100) but below the sure bets of their competitors. The winner of the round gets to keep the utility (i.e., $100) indicated on each card. However, even if they make a correct decision and beat their competitors, they only get to keep the value of the utility if the gamble actually pays off, as determined by a spinner. If no one wins the round, the card is recycled back into the deck. The overall winner of the game is the player with the most money after the deck of prospects is exhausted. In Decision Maker, a player is constantly trying to maximize their winnings without being too greedy.
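The scoring rule for a single round can be sketched as follows. The utility of a prospect here is its expected value (probability times payoff), and the winner is the player whose sure bet clears that value by the smallest margin. Player names and bets are made-up illustration data:

```python
# Sketch of one round of Decision Maker scoring. The winner is the
# lowest sure bet that still exceeds the prospect's expected value.
# Player names and bet amounts are hypothetical.

def expected_value(prob, value):
    """Utility of a prospect: probability times payoff."""
    return prob * value

def round_winner(bets, ev):
    """Return the player with the lowest bet above the expected value."""
    valid = {player: bet for player, bet in bets.items() if bet > ev}
    if not valid:
        return None  # no winner; the card is recycled into the deck
    return min(valid, key=valid.get)

ev = expected_value(0.25, 400)  # the 25%-of-$400 prospect is worth $100
bets = {"Ana": 105, "Ben": 90, "Cam": 130}
winner = round_winner(bets, ev)  # Ana: closest bet above $100
```

Even after winning the round this way, the winner only banks the $100 if the spinner shows the gamble paying off.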

We had to make a few changes to the game design, which may affect flow. Because players must all bet on the same prospect, there was no obvious method for controlling flow for individual players. This problem is typical in the classroom as well; teachers struggle to accommodate a wide range of abilities. Consequently, we abandoned the staircase procedure, where all prospects would be ranked according to difficulty, and opted for a simple two-level design. In the two-level design, if any single player gets more than three wins in a row, all players start drawing from a challenge deck. The prospects in the challenge deck are more difficult to judge because the probabilities and values are more extreme. If a player misses a question from the challenge deck, all players must go back to using the regular deck, and the player who advanced the group to the challenge deck loses a turn. Once the regular deck is exhausted, all players use the challenge deck. If, for some reason, the challenge deck is depleted first, then players default to the regular deck. During the final point tally, winnings from the challenge deck are doubled, so there is an incentive to move to the challenge deck. A player can elect to draw from the challenge deck at any point during the game, but they will lose a turn if the gamble doesn’t pay off. The list of prospects for the regular deck follows: {99% chance of winning $5000; 90% chance of winning $400; 70% chance of winning $150; 55% chance of winning $700; 10% chance of winning $500; 6% chance of winning $2500; 0.2% chance of winning $10000}. Ten variations of these prospects (70 prospects total) will be created using the same percentage but a different value to keep the computation of utility difficult (e.g., 99% chance of winning $4970).
The challenge level has 10 variations of the following prospects: {0.99% chance of winning $5000; 0.90% chance of winning $400; 0.70% chance of winning $150; 0.55% chance of winning $700; 0.10% chance of winning $500; 0.6% chance of winning $2500; 0.02% chance of winning $10000}. The combined total number of prospects for both levels is 140. While the elimination of the classic psychophysical staircase procedure will limit our control over flow, the two-level version of the game should provide players with an opportunity to practice before moving on to more difficult challenges.
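Generating the variations is mechanical enough to script. The sketch below builds the regular deck: ten variations per base prospect, keeping each probability fixed and jittering the payoff so the expected value is awkward to compute mentally (e.g., $5000 becomes something like $4970). The size of the jitter is my own assumption:

```python
# Sketch of building the 70-card regular deck: ten variations per base
# prospect, same probability, jittered payoff. The 1% jitter range is
# an assumption, not a value from the actual game.

import random

REGULAR = [(0.99, 5000), (0.90, 400), (0.70, 150), (0.55, 700),
           (0.10, 500), (0.06, 2500), (0.002, 10000)]

def build_deck(base_prospects, variations=10, jitter=0.01, seed=1):
    rng = random.Random(seed)
    deck = []
    for prob, value in base_prospects:
        for _ in range(variations):
            offset = rng.uniform(-jitter, jitter) * value
            deck.append((prob, round(value + offset)))
    rng.shuffle(deck)
    return deck

deck = build_deck(REGULAR)  # 7 base prospects x 10 variations = 70 cards
```

Running the same routine over the challenge-level prospects would produce the other 70 cards, for the combined total of 140.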

A second change we made to the game was to eliminate the use of physical feats of strength before advancing to new levels. We originally included physical tasks in the game to break up the monotony of judging the prospects. However, there were two factors that influenced our decision. First, the physical feats were never married to the lesson that we were trying to teach. It’s important that every element of the game be designed with learning in mind. Including a fun game-within-a-game may devalue the reward and joy derived from the primary lesson. In psychology, it is well known that extrinsic rewards can undermine intrinsic motivation. If you start paying a child to do something they would normally do for free, they may ultimately stop that activity and become focused on the money. Second, including the ability to move between levels by risking the loss of a turn gives the players a choice that involves strategy. Including physical feats would only detract from the strategic aspects of the game.

Additionally, we have not made a final decision on whether to include windfall and calamity cards in the game. These cards would be biased to benefit losing players and penalize winning players, restoring balance to the game. Cards might reward or penalize the players with extra turns or money. We will conduct a playtest to see how they affect the game, and we might decide to keep them as options for players to include in the decks.

Finally, maintaining this blog has proved surprisingly valuable to the development process for this game. The student designer of this game reads the blog and reacts to my comments quickly and effectively. We have many face-to-face interactions, but the blog seems to help her consolidate the material. If she doesn’t get something, she is not shy about asking for additional explanations, which only intensifies our discourse.

Learning by design