
Main Challenges That Assessment In Serious Games Is Facing


Via: Gaming Lab
What About Assessment?

Earlier today, Gaming Lab posted a great article on the main challenges that assessment in Serious Games is facing, filed under the Adaptive Games tag.

The post section “Assessment Information Could Be Used To Re-Generate ‘Try Again’ Game Scenarios” strongly reminded me of Noah Falstein’s talk on Open Possibility Set Games at the 2010 Serious Games Summit, which I quote:

“Most every computer game arbitrates outcomes based on a finite set of rules and possibility sets. Many activities and problems in the real world require, at times, more complexity, and open possibilities. What would computer games look like if they had to account for a much larger universe of player action and reaction?”

The point in question is that assessment information can become valuable not only in itself, but also as a means to improve game-play, both while playing and in future interactions.

Open Possibility Set Games would then account for the above-mentioned much larger universe of player action and reaction, re-generating game scenarios adapted to and focused on what the players failed, or easily succeeded at, in the previous session.

More than a matter of measuring player performance, it would involve interpreting how the player is creating learning value.

Here is the “What About Assessment?” transcript:

In-game player performance assessment is especially important for Serious Games, but it has seldom been considered in academic research on games and simulations.

In particular, there is no work on combining game adaptivity with assessment. However, some results point to interesting challenges that indicate a promising role for game adaptivity in assessment.

Chen and Michael have already identified the main challenges that assessment in Serious Games is facing.

“The mere criterion of successfully completing the game falls short on a number of fronts. Besides the possibility of students cheating or exploiting holes in the system (a time-honored tradition in video games, but considered in a less positive light in classroom settings), it's important to know whether the student learned the material in the game, or just learned the game and how to beat it.”

Most traditional methods for assessment are not accurate enough for Serious Games, since they are inspired by the simple feedback mechanisms used in their entertainment counterparts. Identifying and reflecting on mistakes and decisions is especially important when considering Serious Games.

So far, research in assessment for Serious Games has centered mainly on After Action Review (AAR) methods. However, results already demonstrate that the direction identified by Chen and Michael holds a lot of potential. AAR systems for military simulations are already being used in innovative ways for which they were not designed: not only for assessing past behavior, but especially for planning future training exercises. In these systems, real-time in-game and AAR assessment information establishes an emergent domain culture that could allow the co-creation of future game scenarios. Assessment information could be explored further, and even incorporated back into the game, to influence content in Serious Games.

In Serious Games, there is typically a lot of valuable information in game logs and emerging from AAR sessions. This information is far from being fully exploited by the game itself to improve game-play, because logs usually offer an enormous amount of unstructured game data that is difficult to interpret and use. Moreover, AAR information emerges from the communication between trainees and their instructors and is not incorporated back into the game. Using this information as a source to guide adaptivity seems a promising, unexplored area.
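To make that idea concrete, here is a minimal Python sketch of how unstructured log events could be aggregated into per-objective assessment information that an adaptivity engine (or an AAR session) could actually work with. Everything here is a hypothetical assumption of mine, not Gaming Lab's design: the event format, the objective names, and the statistics computed.

from collections import defaultdict
from dataclasses import dataclass

@dataclass
class LogEvent:
    timestamp: float   # seconds since session start
    objective: str     # hypothetical objective id, e.g. "triage_patient"
    outcome: str       # "success" or "failure"

def summarize_session(events):
    """Aggregate raw, unstructured log events into per-objective
    statistics that adaptivity or an AAR session can consume."""
    stats = defaultdict(lambda: {"attempts": 0, "successes": 0})
    for e in events:
        stats[e.objective]["attempts"] += 1
        if e.outcome == "success":
            stats[e.objective]["successes"] += 1
    for s in stats.values():
        s["success_rate"] = s["successes"] / s["attempts"]
    return dict(stats)

# Example: a short session log reduced to structured assessment data.
session = [
    LogEvent(12.4, "triage_patient", "failure"),
    LogEvent(45.1, "triage_patient", "success"),
    LogEvent(80.9, "radio_protocol", "failure"),
]
print(summarize_session(session))
# {'triage_patient': {'attempts': 2, 'successes': 1, 'success_rate': 0.5},
#  'radio_protocol': {'attempts': 1, 'successes': 0, 'success_rate': 0.0}}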

Assessment Information Could Be Used To Re-Generate “Try Again” Game Scenarios

Assessment information can become valuable not only in itself, but also as a means to improve game-play, both while playing and in future interactions.

The challenges ahead indicate multiple research directions on what and how to adapt. On the one hand, assessment information could be used to re-generate “try again” game scenarios, adapted to and focused on what the players failed in the previous session. Offering a re-generated game scenario could simultaneously allow a better understanding of what went wrong and better opportunities to succeed. Work in this direction should tackle, for example, customized content creation, e.g. content adjusted to better achieve a learning goal. On the other hand, on-line adaptivity can also be influenced by assessment methods. Game scenarios, and even intelligent agents, could adapt to an assessment of how players are deviating from the intended learning goals. More than a matter of measuring player performance, this would involve interpreting how that performance is being achieved. One example would be to adapt the game because the player is succeeding in learning, but at a slow pace (instead of merely because he is performing too well or too badly).
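As a rough illustration of both directions, the following sketch builds on the per-objective summaries from the earlier example. It is not anything proposed in the post: the scenario templates, the weighting scheme, and the pacing threshold are all assumptions of mine.

import random

def regenerate_scenario(summary, templates, size=5):
    """Build a "try again" scenario biased toward objectives the player
    failed, so the next session focuses practice where it is needed."""
    # Weight by failure rate, with a small floor so that objectives the
    # player already mastered still appear occasionally.
    weights = {obj: 1.0 - s["success_rate"] + 0.1 for obj, s in summary.items()}
    objectives = list(weights)
    picks = random.choices(objectives,
                           weights=[weights[o] for o in objectives], k=size)
    # One encounter template per picked objective.
    return [random.choice(templates[obj]) for obj in picks]

def adapt_pacing(summary, session_minutes, expected_rate=2.0):
    """On-line adaptivity: react to *how* performance is achieved, not
    just its level, e.g. a player who is succeeding, but slowly."""
    successes = sum(s["successes"] for s in summary.values())
    succeeding = all(s["success_rate"] > 0.5 for s in summary.values())
    if succeeding and successes / session_minutes < expected_rate:
        return "succeeding_slowly"   # e.g. shorten scenarios, add time cues
    return "no_change"

templates = {
    "triage_patient": ["multi-casualty triage", "pediatric triage"],
    "radio_protocol": ["noisy-channel radio call"],
}
summary = {"triage_patient": {"successes": 1, "success_rate": 0.5},
           "radio_protocol": {"successes": 0, "success_rate": 0.0}}
print(regenerate_scenario(summary, templates, size=3))

The small weight floor is a deliberate choice in this sketch: it keeps mastered objectives in rotation, so a re-generated scenario reinforces past successes rather than only remediating failures.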

As an important note, research on the relations between adaptivity and assessment seems, so far, to be limited to the Serious Games domain. We still need plenty of human expert knowledge to make correct sense of assessment information, either during or after game time.

James Paul Gee on Grading with Games
“Games Essentially Are a Form Of Assessment”

• Schooling should stress the ability to solve problems collaboratively, as opposed to privileging people who know a lot of facts
• Video games put you in worlds where you have to solve problems
• Video games are all about problem-solving and assessment
• Video games do not separate learning from assessment: you don’t learn some material and then take a test later; you are constantly being assessed as you try to solve problems
• Assessment is probably the most painful part of schooling, but in a game it is a lot of fun – the constant feedback of gameplay can be highly encouraging!