
Really measuring, to really manage
You make a string of great points, Jon, so I'm just going to take them in the order you make them.
Yes, we manage what we measure, what counts gets counted, etc. Attention is selective and we focus on what's salient. But what are we managing if we're not really measuring anything? Well, then, we're probably just allowing emotions, prejudices, biases, politics, or what have you to drive us, which is of course not what we intend.
So we wind up with flavor-of-the-week trends in accountability and quality improvement, big bursts of energy that fade with no lasting effect until the next whirlwind blows up. And as you say, the main result is a compilation of historical databases gathering dust.
But you're right, there ought to be some way to situate measurement right in the workflow, with information generated and used on the fly. What we need are tools that give us useful information when and where it is needed, not in some report that arrives six months later. That is not to say the data won't also be stored for use in other applications. And making sure that the person who has the information knows what to do with it is vital, of course.
And this is where we get to the crux of the problem. Most of the numbers we think of as quantities and measures are in fact neither. Just because a number is called a measure does not mean it stands for something that adds up the way numbers do. Real measures are read off calibrated instruments, where some science has gone into determining whether the thing supposedly being measured actually varies in a way that can be mapped onto a number line.
You brought up the GIGO principle, and that is exactly what applies here. Unless the things being counted actually represent some one thing that consistently varies from more to less across the particulars of the sample counted, the person counting, the time, the place, and so on, all we have is Garbage In and Garbage Out.
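To make that concrete, here is a minimal sketch in Python (my illustration, with made-up 0/1 data, not anything from an actual analysis): one quick check on whether counted responses behave like measures is whether the items keep roughly the same order when you tally success rates in different groups of respondents.

    import math

    def item_logits(responses):
        """Convert each item's success proportion into log-odds (logits)."""
        n = len(responses)
        logits = []
        for item in range(len(responses[0])):
            p = sum(row[item] for row in responses) / n
            p = min(max(p, 0.01), 0.99)  # keep the log-odds finite at 0% or 100%
            logits.append(math.log(p / (1 - p)))
        return logits

    # Made-up 0/1 responses (rows = people, columns = items) from two groups.
    group_a = [[1, 1, 0], [1, 0, 0], [1, 1, 1], [0, 1, 0]]
    group_b = [[1, 0, 0], [1, 1, 0], [1, 0, 1], [1, 1, 1]]

    for name, group in (("Group A", group_a), ("Group B", group_b)):
        print(name, [round(x, 2) for x in item_logits(group)])

If the items stay in roughly the same order across groups, the counts may reflect one underlying variable; if the order scrambles depending on who was counted, or who did the counting, then adding them up is Garbage In.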
Knowing what to measure has to be guided, of course, by people who know the job and its demands, and the process has to fit into the workflow, absolutely. But meeting these requirements is not sufficient to the task of measuring. Knowing what to measure also has to be guided by the basic mathematics of what makes things add up the way numbers do. Without that, we are stuck with numbers that add up just fine (they always do!) but which then fool us into thinking we're dealing with something real, when all we're really doing is chasing our tails.
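Here is one way to see the trap, in a short Python sketch of my own (the percentages are purely illustrative). A ten-point gain in percent correct near the middle of a test and a ten-point gain near the ceiling look identical as raw numbers, but they correspond to very different amounts of the underlying variable once expressed as log-odds:

    import math

    def logit(p):
        """Map a proportion in (0, 1) to log-odds, an interval-like scale."""
        return math.log(p / (1 - p))

    # Two raw gains of ten percentage points each.
    middle_gain = logit(0.60) - logit(0.50)   # about 0.41 logits
    ceiling_gain = logit(0.99) - logit(0.89)  # about 2.50 logits

    print(f"50% -> 60%: {middle_gain:.2f} logits")
    print(f"89% -> 99%: {ceiling_gain:.2f} logits")

Same ten raw points, but roughly six times the distance on a scale that actually adds up. That is the kind of difference the raw numbers quietly hide.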
And then, Jon, you very perceptively speak to the difference between what's possible in principle and what's usefully achievable in practice. The Danish mathematician Georg Rasch, inventor of a very special class of probabilistic measurement models, wrote that models are not meant to be true, but to be useful. His work informs computerized measurement methods that put calibrated tools into the hands of end users.
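For reference, the simplest member of that class, the dichotomous Rasch model, can be written as follows (this is the standard textbook form, not a quotation from Rasch). The probability that person n succeeds on item i depends on nothing but the difference between the person's measure \beta_n and the item's calibration \delta_i:

    P(X_{ni} = 1 \mid \beta_n, \delta_i) = \frac{\exp(\beta_n - \delta_i)}{1 + \exp(\beta_n - \delta_i)}

Because the parameters enter only through that difference, comparisons among persons do not depend on which items happen to be used, and comparisons among items do not depend on which persons answered them. That separability is what Rasch called specific objectivity, and it is what licenses treating the resulting measures as additive.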
For more information on Rasch's models for unidimensional measurement, including software, full-text articles, consultants, meetings, etc., see www.Rasch.org. For a leading example of what measurement based in counts, ratings, tests, surveys, assessments, checklists, rankings, etc., looks like, go to www.lexile.com. For my particular take on the relationship between measurement and capital, see www.LivingCapitalMetrics.com.