Friday, May 27, 2011

The Other Kind Of Nerdy Stuff

First and foremost, I want to thank everyone for the fantastic feedback I got to yesterday's question. If you commented, thank you. If you haven't, please consider weighing in!

I'm hoping to have a new list next week, but I want to talk about a few points that have come up, both general and specific, that hopefully will illustrate my thinking in this. As with the list itself, none of these are set in stone, and I think there's some value in having them out there.

First and foremost, the final list will be neither perfect nor comprehensive. I'm hoping it will cover a broad swath of things, but exceptions will (must!) exist to its scope. This is a liberating point, since the alternative is almost nightmarish in its complexity, but it also has a more subtle element. The simple fact is that whatever final list I settle on, it will almost certainly be both too long and too short, and it will have the wrong elements. This is just a natural function of trying to impose simplified reporting on a complex system - it's lossy and by definition incorrect. That means that a lot of objections and counterarguments are at least as correct as whatever position I put forward because, ultimately, we're all talking about different ways to be wrong.

This is not an argument for relativism; it just demands a different rigor. I will try to make sure I have a good (and well communicated) reason for what I do, but it is entirely reasonable for someone to have different priorities which would suggest a different methodology. That is totally cool, and I'm glad they care enough to have an opinion. I absolutely agree that there are other factors to look at, and that a GM-centric perspective has profound and specific flaws. But that will never not be the case - making a choice to pick a focus is not a rejection of those facts, but a matter of acknowledging them and doing something anyway out of necessity.

Anyway, I mention all this to underscore that I find disagreement intensely useful in this process, but also to say that my not being swayed is not an assertion that I disagree with a position, just that the position may not fit the goal I'm trying to accomplish.

Second, the final list needs to be short, but the route to get there should be long. A long list is not practical, simply because any final list needs to be simple enough to keep in mind without excess bookkeeping. However, I want to get to that list by distilling as many ideas and perspectives as I can, in hopes that doing so will make the final list better.

Third, there are a few criteria for what needs to go on the list, and these are where a lot of wrongness is going to come up. First and foremost, the questions need to be at least reasonably specific. The goal is not to ask "How many times did you encounter an adrenaline rush in play?" because that sort of specificity is a bookkeeping nightmare. At the same time, "Did you have fun?" is too broad to be useful (no matter how important it is). To come back to the Apgar score, while it is a measure of the child's health, "How healthy is the child?" is not one of the questions. The purpose of the more specific questions is to build an aggregate approximation of an answer.

This means that picking the questions will be a balancing act. They need to be concrete enough to have an answer that is either mostly objective or, if subjective, not too muddled. That's a challenge, and it's a big part of the fourth point.

Fourth, one of the subtle things about the Apgar score is that each element is rated very simply as 0, 1 or 2. It's a little bit more than a Yes/No question, but still very simple. 0 is notably bad, 2 is notably good, and 1 is in the middle. A compressed scale like this strips an answer of nuance, but it has the advantage of smoothing out a lot of subjectivity by reducing a lot of border cases. Was something notably good? Give it a 2. Was it notably bad? Give it a 0. Otherwise, give it a 1. Yes, absolutely, there's a little room for waffling, but nowhere near the kind of problems and complexities that emerge if you were to ask someone to rate the experience from 1 to 10.

This simplicity of rating is another reason why you don't want the questions to be too complex - it doesn't tolerate "but" answers. For example, if the question is "Did you have fun?" and the player thinks "Well, the fight was awesome, but the scene in the market REALLY dragged. Guess I'll call it a 1." then the answer is uninformative. Ideally you want a question for each major "but" that's likely to arise.
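As a toy illustration of the mechanics described above, here is a minimal sketch of an Apgar-style session scorecard. The specific questions are hypothetical placeholders - stand-ins for whatever final list this process produces - but the 0/1/2 rating and the simple sum into an aggregate follow the scheme in the fourth point.

```python
# Hypothetical placeholder questions; the real list is still being worked out.
QUESTIONS = [
    "Did the fights stay engaging?",
    "Did every player get spotlight time?",
    "Did the pacing hold up?",
]

def score_session(ratings):
    """Sum per-question ratings into an aggregate, Apgar-style.

    Each rating must be 0 (notably bad), 1 (middling), or 2 (notably good).
    """
    if len(ratings) != len(QUESTIONS):
        raise ValueError("one rating per question")
    for r in ratings:
        if r not in (0, 1, 2):
            raise ValueError(f"ratings must be 0, 1, or 2, got {r!r}")
    return sum(ratings)

# The "awesome fight, dragged market scene, okay pacing" session becomes
# three separate answers instead of one muddled middle rating:
print(score_session([2, 0, 1]))  # 3 out of a possible 6
```

The point of the sketch is that the "but" in the player's head becomes its own question with its own low score, rather than dragging an unrelated answer down to an uninformative 1.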

Lastly, I am not looking to create any new definitions or models of play. One important thing about this is that even if we end up with a good working list, it will not be definitive. I'm trying to report on actual play, and create categories to simplify that reporting, not to define it or set rules for how these are the 5 things that "make" a GM or whatever. I just want to be able to talk about tools.

Anyway, thank you again for all the feedback. I especially want to call out some of the cool links in the comments to others who have had similar thoughts, including Tim White and some folks at Story Games.


  1. People must have taken Friday off. However, I find myself at my desk with a moment. Certainly from the reaction to yesterday's blog entry, your plan is worth doing. I think a lot of people will have an interest in what you do and what you find out by doing it. I certainly look forward to your future thoughts. As a rusty and crusty old GM myself, who only recently has been trying to ease back into things, I would appreciate being able to go through your process to try and evaluate which things I am remembering like riding a bike, and which new things this old dog (well, middle-aged) needs to learn.

  2. I thought Scott Dunphy's link to Tim White's scoring handout ( ) was pretty much everything you'd need. When you get to the end of your process make sure to come back and compare it against that to see if it's better.

  3. Thank you for this link, Noumenon. I like how the questions center around a central question I ask about any game I play or even watch, "Am I engaged/is this engaging?"

    That said, the Apgar score seems to be measuring something specific, as well: Is this baby healthy? So what about what GMs do is something you want to ultimately measure?

  4. To your fourth point: Think Fudge dice, then translate to numeric.

  5. That said, the Apgar score seems to be measuring something specific, as well: Is this baby healthy? So what about what GMs do is something you want to ultimately measure?

    "Health" isn't really one specific thing -- I forget who I'm quoting from here, but
    health is "not an organ, but a construct that emerges when you sum up good antibodies, lungs, etc. It works because good genes correlate. "Beauty" also sums up many sexual ornaments. Intelligence sums up memory, language ability, social perceptiveness, speed at learning practical skills and musical aptitude."

    So Apgar is not measuring something specific either.

  6. That is, in fact, exactly the point. Healthy (or fun) are a broad collection of differently weighted variables, too complicated to measure with real precision. However, you can ballpark them - the Apgar score doesn't determine that the baby is healthy, but it approximates it, and that's valuable.

    Another comparison might be baseball stats. You can look at a player's stats and get a _general_ sense of how good a player they are, but only a general sense. There are tons of other factors which they can't account for in a specific case, but that doesn't make them useless.

    Acknowledging the imprecision is an important part of approaching this, but it's also a reason a lot of people will be uncomfortable with this kind of approach. And that's _fine_. But if this became about precise definitions (rather than a known approximation) then it would bog down instantly.

