Although I understand the importance of mirroring the real-life situation as much as possible, in order to make the game educational and reflective of the real world, I do wonder if that mindset can be limiting. For instance, the quote “what the player does in Peacemaker mirrors what real Israeli and Palestinian leaders do as closely as possible” (228) doesn’t seem strictly good to me. While there’s value in replicating the real world, I’m not sure that subject matter experts are always the best resources.
For example, there’s the assumption that opening up access for Palestinians also reduces security for Israelis. While that may be the case, has an attempt genuinely been made to achieve equity? How do they know what the results of that would look like? More generally, could this limit political imagination? Given that de-colonizing Israel, and reversing the settler state and apartheid, hasn’t been tried, I don’t necessarily see how SMEs would be able to accurately predict what would happen. While building empathy for both sides is absolutely valuable, I also think it’s important to consider the message sent by what’s left out. I’m certainly biased, as someone who thinks Israel is an apartheid settler-colonial state, but presenting the two actors as essentially equivalent feels problematic to me.
More generally, I think there’s a message implicit in modeling a real world system via a game in the first place. Having deterministic actions and reactions, the choice of objective and modes of interaction, and so on all describe the real world as functioning in a very specific way. Although, of course, this isn’t to say that games should be responsible for fully modeling everything that could happen in the real world, I do think it’s important to remember that the mere act of codifying a system as a game is not neutral.
I thought the ‘learning from learning science’ section was really interesting – and seems relevant outside of serious games as well. Following their guidelines for effective instruction seems broadly applicable to tutorials too, or simply to communicating how to interact with a world in general.
Related to that, formally assessing learning seems like an interesting challenge. For example, in the redistricting game, behaviors like packing and cracking appeared emergently in some playtests, which is a great sign for how well it modeled gerrymandering in the real world. However, given that neither is explicitly part of the game itself, it seems like it would be difficult to include things like that in the assessments. If you measure it explicitly, would 50% of players learning about it afterwards count as success? Do you go more abstract, to try to capture learning at a high level, but then risk missing the more specific learning that’s happening? This balance between allowing for emergence and still having clear metrics seems challenging. It would also be interesting to follow up even further with these games and see how they translated to behavior change going forward – which you’d hope would happen, but is presumably hard to measure at scale.
I think the core point, that learning shouldn’t just be an added bonus to a game that otherwise reuses existing mechanics, is very compelling, although it’s clear there’s a lot of additional work to do to really tease out all that that entails.