Tuesday, June 7, 2011

The Third Metric: Clarity

Chewing on the third metric, I'm looking at something that I'm going to call clarity, though I'm happy to find another name for it if there are suggestions. It shows up in a few of the different points I mentioned, but the general idea is this: the GM is the stand-in for the players' senses. She is their window to the world around them, and a lot of the game is going to depend on how well or poorly she does that.

The problem is that this is something that's best judged by the players. That's not a bad thing in its own right, but it's problematic for our purposes - we want the GM to be able to self-assess with at least a moderate sense of objectivity, so we need some metric to help the GM tell whether a given session has gone well or poorly, and I don't immediately see a good option.

The solution is to widen the net a little bit, and think about what we're looking at in general. We're trying to get a sense of how well the GM conveys the world. What does it look like when that fails?

Thinking as a player, this is really easy to point to - it's an undo situation. "Wait, what? I wouldn't have done X if I'd known Y!", as in "I wouldn't have tried going out the window if I'd known the ogre was right in front of it!"

Now, GMs handle these situations with differing degrees of grace, and I admit there's a danger that some GMs may not notice these situations, or may misattribute their cause, but calling it out like this hopefully helps any GM looking to rate herself. So let's call this our zero scenario.

What does it look like when the GM rocks at this? That's harder. Like a lot of good GMing, its success is pretty seamless. The players had all the information they needed to engage things, so it all just worked from their perspective. That's our 2-point scenario, but how do we spot it?

My suspicion is that you can tell the difference between a 1 and a 2 by the questions the players ask, specifically whether they ask for explanation versus clarification. Explanation (which usually sounds like "Hold on a second, [QUESTION]?") indicates that your first pass did not create a clear image for the players. Clarification is a question built upon the description - sort of a "tell me more about [THING]".

Admittedly, this is a bit of a cheat. We're indirectly using the players to judge this metric without explicitly asking them for a rating, but as noted at the beginning, they're the best source for this one. That part worries me less than the fact that this one may be a little bit harder for GMs to self-apply - it requires a decent recollection of the way the game went - and that may yet prove to be a real problem. Still, for the moment, I suggest:

0 - Player confusion about the situation leads to complaints, retcons, and arguments.
1 - Lack of clarity requires further explanation for the players.
2 - Sufficiently clear that questions focus on the situation as presented.

Honestly, this doesn't feel quite as solid as the last two metrics, but I think it's still in bounds. Still, I'm inclined to kick it a bit.

1. I think a key metric for distinguishing between 1 and 2 is "how often did the GM need to repeat or rephrase information?" That, I think, is pretty close to what you were going for, but it's a little more measurable than your explanation/clarification split.

2. I think there's a danger here of false scoring. If a player screws around during a description, that's on the player, not the GM.

Alternatively, if a GM and a player talk differently - say a Visual GM ("Look at the map to see where you are.") tries clarifying something for an Aural player ("I see where I am on the map, but can you tell me what I see?") - something will get lost in the translation, since they interact with the world differently and are comfortable with very different language. Sure, VARK focuses on learning styles, but its categories map onto communication styles, and there are other taxonomies in a similar vein. More on VARK here: http://www.vark-learn.com/english/page.asp?p=categories

Ultimately, responsibility for communicating about the game and the game world lies with the GM, so I'll buy Clarity as a metric generally. But there will be special cases and exceptions where the rating system breaks down.

3. "If a player screws around during a description, that's on the player, not the GM."

I think it's important to remember that these are not intended to be metrics of "How good/bad am I as a GM". If they were, then we'd have to figure out when a failure was the player's fault, and when it was the GM's fault. I don't think that would be a fruitful conversation.

A better way to think about these metrics are as a way to measure the overall health of the game. Every group is going to have a different baseline. For some groups, zeros and ones will be the norm. For others, it'll be twos across the board. It's not that the zeros & ones group is a "bad" group, and the twos group is a "good" group; they just have different baselines.

We can't answer the "How good/bad am I as a GM" question directly. What we can do is measure overall Game Health over time, and look for things we can do as a GM to help increase the health of the game.

If it's a player behavior that seems to be dragging the score down, that doesn't absolve the GM of responsibility. As a GM, you have dozens of tricks and tools you can try in order to engage and communicate with players. If one thing doesn't work, try something else. Sometimes, you're not going to be able to engage/communicate well with a player, because "that's just the way they are". It happens. Even so, giving up on a player should be a last resort.
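To make the "Game Health over time" idea concrete, here's a rough sketch of what tracking it might look like in code. The metric names, the simple mean-based baseline, and the deviation threshold are all placeholder assumptions of mine, not anything prescribed in the post - the point is just that each group keeps its own baseline, and you look at what changed when a session drifts from it.

```python
# Sketch: track per-session metric scores (0-2) and flag drift from the
# group's own baseline. Metric names, the mean-based baseline, and the
# threshold are placeholder assumptions, not a prescribed method.
from statistics import mean

METRICS = ("spotlight", "rules_mastery", "clarity")  # hypothetical names

def baseline(history):
    """Per-metric average over this group's past sessions."""
    return {m: mean(session[m] for session in history) for m in METRICS}

def flag_changes(history, latest, threshold=0.75):
    """Return metrics whose latest score deviates from the baseline."""
    base = baseline(history)
    return [m for m in METRICS if abs(latest[m] - base[m]) >= threshold]

sessions = [
    {"spotlight": 2, "rules_mastery": 1, "clarity": 1},
    {"spotlight": 2, "rules_mastery": 1, "clarity": 1},
    {"spotlight": 1, "rules_mastery": 1, "clarity": 1},
]
this_week = {"spotlight": 2, "rules_mastery": 1, "clarity": 0}
print(flag_changes(sessions, this_week))  # → ['clarity']
```

A group that scores ones across the board isn't "failing"; it only gets flagged when its own pattern changes, which matches the point above about every group having a different baseline.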

4. @Marshall - That's an excellent way to put it, and I think that's going in the final version.

@TW Yeah, that's my concern too. Some disconnect will always happen, but my hope is that while this is subject to difficulties, it's _consistently_ subject to difficulties, so the Visual GM and the Aural player will have a similar score each time; when it changes, it's worth looking at what was different.

5. Dammit Pol, you manage to sound much smarter saying it!

6. Not really sold on this one, Rob... Even in real life where two friends who know each other are talking about something they both know, there are "wait, we're talking about the food? I thought we were still on the parking" times.

There is definitely meat in how these natural and inevitable miscommunications are handled... I wonder if you will have a "social eptness" measure. Is someone blown off or overmastered, or do they get heard? Is there an irritating and distracting argument?

I'd suggest that a way to get at something similar to your metric here is:

Empowerment:
0 - The PCs don't make plans, or the plans they make have no visible basis in the game's reality.
1 - The plans the PCs make need some talking over to fit into the world.
2 - The PCs delight you with their crazy shit.

Though that does presuppose not only that the players know what's happening, but also that there is enough breathing room in the game that they can come up with plot that you didn't provide, and that everyone in the game likes that. This way of enjoying gaming isn't necessarily the only way.

I've been meaning to comment for a while on this stuff. Quite pleased by the first bit, since it exactly aligned with my own system of grading my games: C = everyone got to do something, A/B = everyone got to do something awesome. My other metric is "Did I describe smells and the weather?", which for me means both "did I have enough of a grasp on plot and pacing to be able to throw in elaborations" and "did I present the world in an immersive way".

The Rules Mastery metric you suggest is also pretty orthogonal to what I do, since I run a lot of Amber/systemless. I'd lean towards putting it as Prep (Did I know everything that was going to go into the game, plot and NPCs and rules, well enough that it wasn't obvious when I faked it), and/or additionally Focus (Did the group devolve into a bunch of people quibbling about when the Spanish conquered the Incas and how many d6 a Conquistador has).