Category Archives: York College Summer Research Program

Multitasker, Epilogue

Multitasker is probably the most fun game we generated over the summer, which is not a surprise considering we hijacked the mechanics from popular board games. The game was created to teach students about the bottleneck that occurs in decision-making. Even though we are capable of processing multiple stimuli at the same time and making multiple simultaneous responses, we can only make one decision at a time. Psychologists call the delay between decisions the psychological refractory period. We predicted that students’ attitudes toward multitasking would change after playing a game that pushed their decision-making abilities to the limit.

Ten high school students ended up participating in our experiment. They played a game where they could perform up to four tasks simultaneously. The tasks included (1) molding figures with clay, (2) drawing pictures, (3) guessing a mystery word, and (4) performing various physical activities like hopping on one foot. Attitudes were assessed using pre- and post-game surveys about multitasking. Subjects indicated their attitudes using Likert scales ranging from 1 (highly unlikely) to 5 (highly likely). Each survey included ten questions (e.g., “I feel that it is possible to accomplish many things at once”) along with their opposites (e.g., “I do NOT feel that it is possible to accomplish many things at once”). The twenty counterbalanced questions were used to verify the reliability of the survey. After data collection, the sign of the responses was adjusted so that all responses had the same polarity (i.e., higher ratings reflected more positive attitudes toward multitasking). Data were combined across all subjects for each question (Figure 1).* For most questions, ratings during the post-game survey decreased relative to the pre-game survey (paired t-test, p = 0.004). Data were also combined across subjects and questions (Figure 2). Mean ratings during the post-game survey were lower overall than for the pre-game survey (t-test, p < 0.0001). Error bars reflect 95% confidence intervals.
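The polarity adjustment for the mirrored questions can be sketched in a few lines. This is only an illustration of the reverse-scoring step described above; the ratings and question pairings below are hypothetical, not the actual survey data.

```python
def flip(rating, scale_max=5, scale_min=1):
    """Reverse-score a negatively worded item so that a higher
    number always means a more positive attitude toward multitasking."""
    return scale_max + scale_min - rating

# One hypothetical subject's responses: positively worded items keep
# their sign; mirrored (negatively worded) items are reverse-scored.
positive_items = [4, 5, 3]   # e.g., three of the original ten questions
negative_items = [2, 1, 4]   # their mirrored opposites

adjusted = positive_items + [flip(r) for r in negative_items]
print(adjusted)              # [4, 5, 3, 4, 5, 2]
mean_rating = sum(adjusted) / len(adjusted)
```

After this adjustment, pre- and post-game means can be compared directly, question by question or pooled, as in the t-tests reported above.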

Our data reinforce the notion that people believe they can multitask, but those attitudes can be changed through experience. Teenagers are particularly vulnerable to the negative consequences of divided attention because they are more frequent users of mobile devices, and they typically lack the experience to keep from using those devices in risky situations. The game was the most fun to play because people were physically active. In an academic environment where students are sedentary, the more physical activity your game demands, the better.

*Data for one question (7) and its mirror opposite (17) were removed because they were ambiguous. The pattern of statistical results was similar when they were included (all p < 0.05).

Figure 1


Figure 2


Decision Maker, Epilogue

Decision Maker turned out to be a simple but fun strategy game for 2-4 players. We didn’t make it past the paper prototype because we only had 6 weeks. However, it wouldn’t take much work to publish it as a travel game or a digital game. The student designer was faced with difficult lessons to teach: Prospect Theory and decision-making. A scaffolded reading list was essential for getting her up to speed on Prospect Theory. She demonstrated a clear understanding of the topic, developed a good experimental paradigm, created a game mechanic that complemented the lesson, and collected all her data on time. Like many of our student projects, there were errors in data collection that we were lucky to recover from. Nevertheless, the student really proved herself by completing the data analysis, figures, and poster with minimal assistance.

In the game, players made judgments about prospects that were composed of a probability and a value (e.g., 25% chance of winning $400). Players indicated the sure bet (e.g., $110) they would accept in lieu of the prospect. Normally, people behave irrationally when challenged with extreme probabilities or values. They tend to overestimate the utility of prospects with small probabilities and high values. Our game allowed players to practice decision making under these unusual circumstances. We predicted that decision making for extreme prospects would improve with practice.
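The overestimation in the example above comes down to simple arithmetic: a prospect's expected value is its probability times its value. This tiny sketch uses the numbers from the text (the $110 sure bet is the same illustrative figure used above):

```python
# A 25% chance of winning $400 has an expected value of $100.
p, v = 0.25, 400
expected_value = p * v       # 100.0

# Accepting only a sure bet of $110 in lieu of the gamble means the
# player values the prospect above its expected value.
sure_bet = 110
overestimated = sure_bet > expected_value   # True
```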

We averaged the sure bets placed by the subjects and expressed them as a proportion of the expected utility. If subjects were behaving ideally, the proportion would be close to 1. If they overestimated the utility of the prospect, the proportion would be greater than 1. We compared data between two sessions of game play. There was little difference between these sessions when players were judging probabilities within a “normal” range (6 to 99%). When players first started placing bets on extreme prospects (0.02 to 0.99%), they consistently overestimated the utility of the prospect (Figure 1). However, with practice, they were less likely to overestimate the utility for extreme prospects! Practice had a clear effect on their ability to make accurate decisions (Figure 2). Keep in mind that when subjects were playing the game, they were required to spin a spinner and watch the prospect play out in real time. For extreme probabilities, they might have to get the spinner to land between 0 and 1% several times in a row for the bet to pay off. The physical act of spinning the spinner made the prospect more visceral, which helped students appreciate how unlikely it is to win extreme prospects.
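The proportion measure described above can be sketched as a small function. The sure bets and the extreme prospect below are invented for illustration, not the students' actual data:

```python
def utility_proportion(sure_bet, probability, value):
    """Ratio of the accepted sure bet to the prospect's expected value.
    Close to 1.0 = near-ideal judgment; above 1.0 = the prospect's
    utility was overestimated."""
    return sure_bet / (probability * value)

# Hypothetical extreme prospect: a 0.5% chance of winning $10,000,
# i.e., an expected value of $50.
session1 = utility_proportion(150, 0.005, 10_000)   # 3.0  (overestimate)
session2 = utility_proportion(60, 0.005, 10_000)    # 1.2  (closer to ideal)
```

Averaging this proportion across subjects for each probability range is what produces the session-by-session comparison plotted in the figures.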

I consider this game a huge success because the student was sufficiently challenged by the material, and she overcame those challenges to produce a fun educational game that had a quantifiable effect on learning outcomes.

Figure 1


Figure 2


Face Finder and Teen Angst, Part 2

While the initial creation of these games was difficult, we were lucky to avoid significant revisions in our second round of prototyping. During our recovery from the initial struggles of conceptual development, I learned a valuable lesson about teamwork. But first, let me describe what went wrong.

Face Finder had a complicated design that was difficult for the student designer to keep track of. Consequently, errors were made during data collection. According to our original design, players were supposed to use clue cards, face cards, and character cards to guess the identity of five characters in a murder mystery. Guessing was supposed to take place during each round of play, and players could update their current guess using cards that already lay on the table. However, the student experimenter only allowed players to guess the identity of one of the five characters at a time. As a result, players received very specific feedback that enabled them to correctly guess the hidden characters after only a few rounds of play. The specificity of the feedback did not leave much room for an ethnic bias to emerge. Fortunately, we found a solution to this problem, which I will describe in the final post about this game.

For Teen Angst, the initial delays associated with finding a topic and designing the game left very little time to actually make the game. The student created content that was appropriate, but it took her a long time to format her questions and put them into PowerPoint slides. She pulled at least one all-nighter to my knowledge. Game design always takes twice as long as you think, even if you take this rule into account. Students will always fail to appreciate this statement, even if it’s scrawled on the chalkboard in blood. Because the student was rushing to finish her game, critical errors were made during data collection. We had no time for playtesting, and data collection was conducted using the first playable version of the game. There were serious omissions on the data sheet used to collect player responses. These omissions made it difficult to determine how player responses corresponded to the questions in the game. Again, we got lucky and found a solution to this problem that allowed us to analyze the data. You might think all these problems could have been avoided if the student had worked harder or if I had pushed her more. However, she was working to the best of her ability, and I was constantly urging her forward. So, what went wrong?

In this case, it was a mistake to resist the natural flow of energy in the room. These two students were friends, and they were constantly interacting with each other during the development process. Rather than separating them in an authoritarian attempt to control the situation, it would have been better to let the students work together on one project. I predict that they would have had more fun, the project wouldn’t have been delayed, and fewer errors would have been made.

In the past, I’ve had success when everyone in the lab works on the same project. The quality of the final product was better, but some of the students didn’t participate as much as they could have. Furthermore, there are always students who are happy to reside in the background and let others take the lead. This time, I assigned students to individual projects to help them reach their maximum potential. I hoped that each student would contribute to every project in the lab during the group brainstorming sessions, and I hoped each student would learn independence by managing their own project. Unfortunately, the students ended up focusing too much on their own projects, and collaboration was limited. As a result, errors were made because there was no quality control built into the design process. In a typical lab, there is a chain of command that ensures quality control. Principal Investigators manage postdoctoral fellows, postdoctoral fellows manage graduate students, and graduate students manage undergrads. However, when a PI is managing a group of undergrads, gaps in the normal chain of command result in too many details for the PI to keep track of. I now believe that pairing students up on a project is a good idea to minimize errors, build camaraderie, and help students achieve milestones during development.