Category Archives: York College Summer Research Program

And So it Begins… Again

Last year I documented the development of several games designed by high school students and college undergrads in a six-week program sponsored by the Department of Education. Over the course of the summer, I mentored seven students, and we produced seven board games for education or social impact. Even though there were some bumps and bruises along the way, the students made it to the finish line and learned valuable lessons about design, education, behavior, data analysis, public presentation, and working independently. All of the high school students have been accepted to excellent colleges, and the undergrads went on to produce excellent games in my lab during the year.

This year, I plan to make changes to the program based on our experiences, and I’m blogging about the process to assist any educators who might want to implement game-based learning in their classroom. Three of the undergrads in my lab were accepted to the program, and several others will be participating unofficially. In July, high school students will be admitted to the program, and I anticipate that some will be assigned to work with us as well.

The undergrads have already worked with me for a semester. Each student has designed a project and presented the design at the annual conference for the York College Office of Undergraduate Research. I will present these designs individually in future postings but, for now, I’m going to briefly outline the goals for the summer.

All of the undergraduates have expressed an interest in developing digital games, and we are going to try to make this happen over the summer. None of the students has any experience with coding, so, as anyone who has ever coded knows, this is an ambitious, if not impossible, project. However, our goal is only to build a digital prototype by the end of the summer; we will refine it during the following semester.

Most of the games produced for education (e.g., gamesforchange.org) are designed and built by professional teams. On a few projects students participate in the design, but there are relatively few projects where undergrads learn how to code by building a game. Digital games offer an excellent opportunity for students to learn valuable 21st-century skills that will help them design experiments, create stimuli, manipulate data, and communicate ideas throughout their careers. Women and ethnic minorities are still greatly underrepresented in technical fields, and giving these students technical challenges before they get to graduate school should provide them with additional opportunities for success. Game programming is also a great way to learn how to code: compared to coding environments where the programmer deals purely with abstractions, a game provides immediate feedback on whether the code is correct.

I’ve developed a streamlined program to get my students on the path to coding as quickly as possible. Experience has taught me that communication, coordination, and feedback are the most critical elements when working with students. Consequently, we are going to use project management software and agile development (specifically, Scrum) to help us meet our goals. Unlike previous projects, where we kept a more relaxed pace, we will hold a daily scrum with weekly milestones.

My first task was to select team management software from the Google Apps store, since we have been using the cloud and Google Apps to exchange most of our design documents. I can say right now that I wasn’t very pleased with the offerings. It was difficult to find an app suitable for non-profits and education: many apps offered a free tier, but affordable plans for more than five users were hard to find, and many offerings were too complicated for small teams. I settled on Teambox because the interface is fairly intuitive and up to five users are free.

I have recently been using a syllabus with my Independent Study students; it contains readings, program goals, contact information, and so on. The syllabus has proved very useful but, for the summer, it will only be used on the first day. After that, the team will move all communication and planning to Teambox.

Our team will be using the Unity3d game engine (www.Unity3d.com). There are many game engines available, but I’ll keep the discussion short: for students learning to code and finishing real games, Unity is the clear winner. The interface is organized and intuitive, the coding environment is friendly, the tutorials and documentation are superb, the engine reinforces basic concepts of object-oriented programming, and it publishes to several platforms without hassle.

Another bit of software that I’ve been using for UML is LucidChart (www.lucidchart.com). UML lets programmers plan their code before they get buried in syntax, and LucidChart is an excellent, low-cost offering available via Google Apps. Finally, I purchased a subscription to Lynda.com for my students ($29/month). Lynda.com offers a wide variety of technical courses that are better than anything you’d find on Codecademy or in a MOOC. If you don’t want a monthly subscription, you can get by with purchasing the courses listed below on DVD for around $25 each.

Knowing full well that most of these tasks will end up in the backlog, here is the queue so far. Let’s see what happens!

PROPOSED SCHEDULE

To Do | Backlog | Completed

Before Week 1 

  • Establish a Gmail account, input and share your calendar, and set up Google Drive
  • Read sections 1 and 2 of the text
  • Complete the Basic Human Subjects Program at https://www.citiprogram.org/ and print your final certificate as a PDF
  • Play as many games as you can from the gallery at GamesForChange.org

Week 1 – First milestone

  • Accept the invitation to Teambox via Gmail and view the tutorial (the invitation will be from robertoduncan at the Transformative Games Initiative). Respond to the first task, called “Syllabus,” in Teambox
  • Read “Aligning Game Development, the Scientific Method, and Learning Outcomes”
  • Read the Scrum reference card
  • Download and install Unity3d on your favorite development machine
  • Read the Unity User Manual
  • Complete the “3D Platformer” tutorial available in the Asset Store
  • Get acquainted with the Unity Component Reference
  • Get acquainted with the Unity Scripting Reference
  • Complete the Foundations of Programming: Fundamentals course on Lynda.com
  • Complete the Foundations of Programming: Object-Oriented Design course on Lynda.com
  • Learn about the State Machine Pattern (http://unitygems.com/fsm1/); a minimal sketch appears after this list
  • Complete the Mecanim tutorial to finish your orientation with the Unity interface
  • Go through the Scripting Overview at http://docs.unity3d.com/Documentation/ScriptReference/index.html
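
For those who reach the State Machine Pattern item above, here is a minimal sketch of the pattern, written in Python for brevity (Unity scripts would typically use C#, and the linked unitygems tutorial covers the Unity-specific version). The states and events here are invented for illustration.

```python
# A minimal sketch of the state machine pattern: each state object
# decides which state comes next in response to an event.

class State:
    """Base class: subclasses return the next state for a given event."""
    def handle(self, event: str) -> "State":
        raise NotImplementedError

class Idle(State):
    def handle(self, event):
        return Walking() if event == "move" else self

class Walking(State):
    def handle(self, event):
        if event == "stop":
            return Idle()
        if event == "jump":
            return Jumping()
        return self

class Jumping(State):
    def handle(self, event):
        return Idle() if event == "land" else self

class Character:
    """Holds the current state and forwards events to it."""
    def __init__(self):
        self.state: State = Idle()

    def send(self, event: str) -> None:
        self.state = self.state.handle(event)

hero = Character()
for event in ("move", "jump", "land"):
    hero.send(event)
    print(type(hero.state).__name__)  # Walking, Jumping, Idle
```

The appeal of the pattern for game code is that each state’s rules live in one place, so adding a new behavior means adding a new class rather than threading more conditions through one giant update loop.
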
Teen Angst, Epilogue

Figure 1

This is the last in a series of posts about rapid prototyping for game development with high school students. I will use one of our games, Teen Angst, as a case study for what to do when things don’t go according to plan.

Teen Angst had the broadest scope of any of our games. The plan was to use game mechanics to shape decision making about three topics of great interest to most teenagers: relationships, substance abuse, and nutrition. Players answered a series of questions based on scenarios presented via PowerPoint (Figure 1). Points for four resources (Health, IQ, Friends, and Money) were gained or lost depending on the most likely consequences of each decision.
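
To make the mechanic concrete, here is a hypothetical sketch of the scoring rule in Python. The resource names come from the game; the scenario text, choices, and point values are my own inventions for illustration.

```python
# A hypothetical sketch of the decision-scoring mechanic: each answer
# choice maps to gains or losses on one or more resources, reflecting
# the most likely consequences of that decision.

RESOURCES = ["Health", "IQ", "Friends", "Money"]

scenario_choices = {
    "accept a ride from someone who has been drinking": {"Health": -2, "Friends": +1},
    "call a parent for a ride": {"Health": +1, "Money": -1},
}

def apply_choice(totals: dict, choice: str) -> dict:
    """Apply the consequences of a decision to the player's running totals."""
    for resource, delta in scenario_choices[choice].items():
        totals[resource] += delta
    return totals

totals = {resource: 0 for resource in RESOURCES}
apply_choice(totals, "call a parent for a ride")
print(totals)  # {'Health': 1, 'IQ': 0, 'Friends': 0, 'Money': -1}
```
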

There were several design flaws in the game.

First, there was no means of controlling flow: task difficulty was not adjusted based on performance, which typically results in either boredom or frustration for the player.

Second, we adopted a linear narrative that limited choices to only a few options. While the psychology literature indicates that too many choices can paralyze a person with indecision, the game world is full of examples where presenting players with more choices increases their engagement with and enjoyment of the game. In a game, having choices means the player is in control of the outcome, and a player who feels in control is more likely to engage with the system. As educators we should capitalize on this phenomenon and reconcile it with what we already know: ownership of the learning experience is critical to learning outcomes.

Third, the game was unbalanced. In a perfectly balanced game, every player has an equal opportunity to win; symmetric games such as tic-tac-toe and rock-paper-scissors are classic examples. (Such games are also zero-sum, since one player’s gain is the other’s loss, although balance and zero-sum are distinct properties.) The reward-punishment contingencies in a game, the player resources, or other factors that affect the final outcome can also be out of balance. In our game, the reward-punishment contingencies were not evenly distributed across resources: even though the number of questions pertaining to each topic and resource was balanced, the point allocation for each topic-resource combination was not. For example, players had more opportunities to gather points for Health than for the other resources.
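
This kind of imbalance is easy to audit before playtesting. Below is a minimal Python sketch, with invented point values, that totals the points available per resource; an even spread across Health, IQ, Friends, and Money would indicate a balanced allocation.

```python
# A minimal sketch of a pre-playtest balance audit. The question list is
# invented; in a balanced design, every resource would offer roughly the
# same point total across the deck.
from collections import defaultdict

# (topic, resource, points available) for each question in the deck
questions = [
    ("Nutrition", "Health", 3), ("Drugs", "Health", 3),
    ("Relationships", "Friends", 1), ("Drugs", "IQ", 2),
    ("Nutrition", "Money", 1), ("Relationships", "IQ", 1),
]

available = defaultdict(int)
for topic, resource, points in questions:
    available[resource] += points

# Health dominates here, mirroring the flaw described above.
print(dict(available))  # {'Health': 6, 'Friends': 1, 'IQ': 3, 'Money': 1}
```
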

We were aware of all of these design issues going into data collection, but we didn’t realize they would have such a strong impact on the data. Six high school students participated in the experiment, but the data for one were removed because the instructions weren’t followed properly. Subjects played the game and recorded their responses on answer sheets. As I mentioned in a previous post, there was an error in creating the answer sheets that made it difficult to relate individual answers to their corresponding questions. However, we were able to compare the points earned in the first half of the game to the points earned in the second half. The prediction was that players would earn more points in the second half because of practice effects.

Interestingly, performance exceeded chance levels during the first half of the game (Figure 3), which suggests that subjects were attentive and understood the rules. Nevertheless, contrary to our expectations, performance decreased during the second half of the game (Figure 2). Data were combined across all subjects and all categories (i.e., Relationships, Drugs, and Nutrition). For each resource (i.e., Health, Friends, IQ, and Money), performance during the second half of the game was worse than during the first half, χ²(1, N = 157) = 80.067, p < 0.0001. Similarly, performance during the second half of the game was worse when data were combined across resources (Figure 3), χ²(1, N = 157) = 25.28, p < 0.0001. This decrease in performance might be attributed to fatigue, but it is more likely the result of an imperfect game design; the data were consistent with post-game interviews in which players reported being bored.
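
For readers who want to run this kind of analysis themselves, here is a minimal Python sketch using scipy. The per-half point counts below are hypothetical, back-solved so that the statistic matches the reported combined value of 25.28; the test simply asks whether points were split evenly between the two halves of the game.

```python
# A minimal sketch of the half-vs-half comparison. The counts are
# hypothetical (chosen to reproduce the reported chi-square of 25.28);
# the original analysis may have differed in detail.
from scipy.stats import chisquare

first_half, second_half = 110, 47   # hypothetical point counts
total = first_half + second_half    # N = 157

# Null hypothesis: points are split evenly between the two halves.
result = chisquare([first_half, second_half], f_exp=[total / 2, total / 2])
print(f"chi2(1, N = {total}) = {result.statistic:.2f}, p = {result.pvalue:.2g}")
```
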

After six weeks of hard work, a result like this could be devastating to a student. At worst, the student might come to doubt the scientific method and lose interest in science. It is critical to spend as much time with the student as possible to confirm that they understand the value of impartiality, learn from failure, and persist in their quest for truth. I found it useful to recount my personal experiences with failed experiments as well as examples from famous scientists. Shifting the focus to improving the game was also helpful. It was particularly interesting, though, to find that the student took some solace in knowing that her results were important because they provided evidence for the lab’s overarching hypothesis, namely, that properly employed game mechanics are useful for education. In her case, an imperfect design produced a baseline against which future iterations of the game can be compared. We learned a lot from each other, and the student is sure to benefit from this experience in the future.

Figure 2

Figure 3

Face Finder, Epilogue

Face Finder is designed to teach students about their own cognitive biases for ethnicity and gender. Players work to solve a murder mystery by guessing the identity of five characters (Killer, Accomplice, Witness, Bystander, and Victim). Players update their guesses during each round of play, and they cast a final guess after the 40th round. Players base their guesses on clue cards, face cards, and character cards. Only character cards are required to solve the crime, but we predicted that players would choose face cards that demonstrate an in-group bias for benevolent characters (i.e., Victim and Bystander) and an out-of-group bias for malevolent characters (i.e., Killer and Accomplice).

Nine high school students participated in the study, and each reported their ethnicity as one of the five categories used in the game (Caucasian, Asian/Pacific Islander, Indian/Middle Eastern, Hispanic/Latino, and Black). Players were informed that the categories were loosely defined and were asked to choose the one they identified with most. The rules of the game were described in a previous post. Before describing the data, I should mention that errors were made during data collection: rather than making a guess for all five characters on each round, players made a guess for only one character. Consequently, feedback for each guess was too specific, and players guessed the identity of the characters more quickly than we envisioned. To compensate, we collected data from the second round of play, where players were more likely to select any ethnicity they wanted for the characters.

Data were analyzed relative to a chance baseline. Because there were five ethnic categories, a player had a 20% chance of selecting their own ethnic category for the Victim or Bystander, and an 80% chance of identifying the Killer or Accomplice as out-of-group. Each guess was categorized as supporting our prediction or not, and the number of guesses that agreed with the prediction (24 out of 36) was compared to the number expected by chance. Within each category, a non-significant trend was observed (Figure 3): subjects made more in-group choices for Bystander and Victim and more out-of-group choices for Killer and Accomplice (χ², p > 0.10). When guesses were summed across categories, the trend was significant and greater than expected by chance (Figure 4) (χ², p < 0.05). In short, we observed ethnic biases in judgments of character in our game, and the data were further supported by qualitative reports from the subjects themselves: players reported making judgments based on the ethnicity of the Face Cards even though those cards were irrelevant to the task.
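
The chance baseline can be reconstructed from the numbers above. In the Python sketch below, nine players each contribute four scored guesses, so chance predicts 9 × (0.2 + 0.2 + 0.8 + 0.8) = 18 of 36 guesses agreeing with the prediction, versus the 24 observed. The exact form of the original analysis is an assumption on my part, but this reconstruction is consistent with the reported p < 0.05.

```python
# A minimal sketch reconstructing the chance baseline from the reported
# numbers; the test details are an assumption, not the original analysis.
from scipy.stats import chisquare

n_subjects = 9
# Probability that a guess agrees with the prediction by chance:
# in-group Victim/Bystander = 0.20, out-of-group Killer/Accomplice = 0.80.
p_agree = [0.20, 0.20, 0.80, 0.80]  # Victim, Bystander, Killer, Accomplice

total_guesses = n_subjects * len(p_agree)    # 36
expected_agree = n_subjects * sum(p_agree)   # 18
observed_agree = 24                          # reported in the post

result = chisquare(
    [observed_agree, total_guesses - observed_agree],
    f_exp=[expected_agree, total_guesses - expected_agree],
)
print(f"chi2 = {result.statistic:.2f}, p = {result.pvalue:.3f}")  # 4.00, 0.046
```
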

Despite the errors we made in collecting data, our game proved to be a valid tool for exposing ethnic bias, and it may serve to educate students about cognitive biases in general. Clearly, a replication with more data is required. The current iteration of the game focused on ethnicity, and we have yet to analyze the data pertaining to gender. The game could also be adapted to expose other biases: the student designer recently revealed an interest in exposing biases associated with sexual orientation and gender identity, and I’m excited to see whether she will develop this variation of the game on her own.

Figure 1

Figure 2