I want to present a quick illustration of "performance measurement" using diagrams and questions. For a more authoritative treatment of this topic, which compares and contrasts ongoing performance measurement and periodic program evaluation, see Performance Measurement and Evaluation: Definitions and Relationships, U.S. Government Accountability Office (GAO), May 2011, GAO-11-646SP.
Our context is the "program," which according to GAO-11-646SP, is "any activity, project, function, or policy that has an identifiable purpose or set of objectives."
So, think "program." OK? Let's begin.
We'll address the full question - What is performance measurement? - at the end, but first, let's break things down; and remember, I'm looking at this as an applied anthropologist and evaluation consultant, so expect an eclectic approach. I automatically combine technical and cultural considerations.
Let's unpack what we mean by performance first because, in addition to a few whats, there are a few whos, whens, hows, under-what-conditions, so-whats, and so on, all of which will have significant implications when we attempt to answer the second question - What do we mean by measurement?
Resources are performing. Of course, they don't up and convert themselves into results; they need to be managed properly by resource people. But they certainly have value in and of themselves, and, in the case of "money," the benefactors who invest this kind of resource expect a return on their investment. Not all resources are furnished by benefactors expecting a return, though. Take, for example, certain forms of social or cultural capital. Let's say a community has been successful for the past 10 years and has chalked up a lot of accomplishments. That momentum is an intangible resource owned by nobody in particular, yet it can be capitalized upon or squandered by a particular program.
We typically focus on the "performance" of those people or groups who are targeted by the program: the clients, customers, beneficiaries, and so forth.
The "targets" are the ones being served, depicted here as though they were customers at a restaurant. Sometimes the intended "target" of a program is an environmental condition, e.g., water quality. But individuals and groups are usually behind these conditions, and programming usually aims to change their knowledge, skills, and attitudes so that these conditions can improve. The performance of the people being served has traditionally been of the utmost importance. Programs are usually all about creating certain effects in terms of changing individual and social behaviors. Our baseline studies explore existing (usually undesirable) performance levels and programs are designed to bring about potential (usually desirable) performance levels. [Sneak peak: any time any targeted individual or group moves from existing to potential levels of performance, the program should, I would hope, be able to credit itself for such a development...well, as long as it's documented!]
Notice I seated "information users" at the table. This is hardly ever done. The folks who expect progress reports, the people who might like a report but don't even know how much they'd like one, even the program people buried deep within the program itself, all of these people stand to gain something from periodic updates, insightful stories, early warnings, and so forth. They have information needs that can be addressed or ignored; if they're addressed, then they can be addressed well (or not). Whose responsibility is it to identify their information needs? Who's going to find out how well these needs are met? Who's going to make sure that their needs are honored by the very same program that will otherwise be all about serving the targeted needs of those being served?
Now's a good time to introduce another performer...
Aha! The strategy itself as "performer." Some strategies are better than others, even before they are competently or incompetently executed. Now we're also talking about the performance of those who design the strategies. And since the dramatic interrelatedness of all these performers must be acknowledged once and for all, let's add the people who do the heavy lifting, without whom not a single resource could be converted into programs that touch the lives of those needing or wanting to change things around: the direct, indirect, and collaborating service providers!
OK, now we have the whole cast of performers on the stage. I've left the best for last. The folks who do the heavy lifting are the stars. They're usually the unsung heroes. Typically over-worked and under-resourced, their "efforts" are the most critical link in the entire chain. How much they do and how well they do it will determine the outcome of the program; a program's effectiveness hinges directly on the efficiency and equity of that effort.
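Purely as illustration, with field names and scores invented for the example, here's one way the "how much" and "how well" of EFFORT might be tallied:

```python
# Hypothetical service-delivery log; "fidelity_score" is an invented
# quality measure (say, adherence to the service protocol, 0 to 1).
sessions = [
    {"provider": "P1", "delivered": True, "fidelity_score": 0.9},
    {"provider": "P1", "delivered": True, "fidelity_score": 0.6},
    {"provider": "P2", "delivered": False, "fidelity_score": None},
    {"provider": "P2", "delivered": True, "fidelity_score": 0.8},
]

quantity = sum(1 for s in sessions if s["delivered"])  # how much they do
scores = [s["fidelity_score"] for s in sessions if s["delivered"]]
quality = sum(scores) / len(scores)                    # how well they do it

print(f"EFFORT, quantity: {quantity} sessions delivered")
print(f"EFFORT, quality: mean fidelity {quality:.2f}")
```

Nothing fancy, and that's the point: even two numbers like these, tracked over time, tell you something about the most critical link in the chain.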
There is usually a constellation, a network of direct and indirect service providers, depicted as wait staff at a restaurant in the above illustration. Perhaps the hardest part of doing the heavy lifting is simply working together or collaborating. I'm going to throw in a slide on Collaboration for your consideration.
If you want more information on this topic, see A. Himmelman (2002). Himmelman offers a way to look at "working together" (or not) like no other I've seen. Consider what is entailed by his operationalization of collaboration:
Exchanging information, altering activities, sharing resources, and enhancing the capacity of another for mutual benefit and to achieve a common purpose.
[Please do not apply this definition to our elected politicians.]
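To show how concrete that definition can be made, here's a small sketch of my own (it is not Himmelman's instrument) that treats his four elements as a checklist: a partnership only counts as collaborating if all four are present.

```python
# The four elements of Himmelman's definition of collaboration.
HIMMELMAN_ELEMENTS = [
    "exchanging information",
    "altering activities",
    "sharing resources",
    "enhancing the capacity of another",
]

def is_collaborating(partnership: dict) -> bool:
    """True only if every element of the definition is observed."""
    return all(partnership.get(element, False) for element in HIMMELMAN_ELEMENTS)

# A hypothetical coalition self-assessment.
coalition = {
    "exchanging information": True,
    "altering activities": True,
    "sharing resources": False,  # e.g., no budget or staff time pooled yet
    "enhancing the capacity of another": True,
}
print(is_collaborating(coalition))  # False: not full collaboration by this definition
```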
So, we've taken a closer look at what we mean by performance. We can look at resources and strategies as performers; plus, we can look at those performers dramatizing the EFFORT and those performers personifying the EFFECT.
Let's turn to the second question.
For the longest time we felt comfortable describing only a small spectrum of the "activities" performed by programs. We did this... We did that... Then one day a new ad appeared on the television that prompted us to shift our paradigms.
Where's the beef?
So we started talking about results and we got all outcomes oriented...until we ran out of money. We haven't really abandoned the search for results; you can still hear people calling for accountability. It's just that there's no money to go out and find it. So some people have taken the law into their own hands when it comes to assessing programs, and they've developed their own economical approach to assessment based on intuitive criteria and their own convictions.
I don't recommend that kind of approach.
I'm not sure where we're going to find the money to do this. Maybe there are pre-existing intangible resources out there to draw on. At any rate, here are some questions related to "measurement" and linked to what we've discussed about "performance" to consider.
This first one, we're pretty good at already. There's actually still a little money left to have at least some fun with this one; unfortunately, we've convinced ourselves that this is the only kind of measurement question to raise.
The problem here is that this is like standing in the Mississippi River Delta when attempting to address a question about the entire Mississippi River Watershed! There are some serious things to consider upstream! We need tangible and intangible resources to go upstream to answer some additional questions.
OK, so we made it all the way up to Memphis, TN; now we're getting somewhere. What if we're beginning to find out that the results are not pouring in the way we had expected? Suppose we discover that not enough people being served in the program are, indeed, moving out of undesirable levels of performance and into desirable levels of performance? If some "targets" are succeeding, and we know we're producing at least a certain amount of EFFECT, why can't they all succeed, why can't there be even more EFFECTS? Maybe it has something to do with the way in which the services are being delivered? Maybe it has something to do with the EFFORT? How would we find that out?
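One economical way to chase that question, sketched below with hypothetical data and field names, is to disaggregate the EFFECT by some feature of the EFFORT, say, which provider delivered the service:

```python
from collections import defaultdict

# Hypothetical outcome records: who served each target, and did the
# target reach the desired performance level?
records = [
    {"provider": "P1", "succeeded": True},
    {"provider": "P1", "succeeded": True},
    {"provider": "P1", "succeeded": False},
    {"provider": "P2", "succeeded": False},
    {"provider": "P2", "succeeded": False},
]

by_provider = defaultdict(lambda: [0, 0])  # provider -> [successes, total]
for r in records:
    by_provider[r["provider"]][0] += int(r["succeeded"])
    by_provider[r["provider"]][1] += 1

for provider, (won, total) in sorted(by_provider.items()):
    print(f"{provider}: {won}/{total} targets succeeded ({won / total:.0%})")

# A large gap between providers is a cue to look at the EFFORT,
# not at the people being served.
```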
Yes, we already talked about this. Is anybody assessing the performance of the service providers? Why didn't we think of that? Even if we had thought of that, how would we have financed that? There's hardly enough money to provide the basic services! How are we supposed to do both: provide services and measure ourselves while we're providing those services? There's got to be another reason why we're not racking up the results. There's something wrong with the people we're serving. They're lucky we're doing anything at all for them. I knew this program would fail.
No, wait a minute!
Call everybody into a meeting. Let's look at why we're doing this in the first place. What called us into social change? We're compassionate people. Our organization is committed to a set of values established many years ago, and we were entrusted by our benefactors to do whatever it takes to make a difference. Let's go back to our Mission Statement, our Vision, our Values, etc.
We're already gathering information to satisfy our external reporting requirements, so let's take a look at what we have; who knows, maybe we'll find a way to collect a little more information, you know, since we're already doing it. Now that we have a better idea what kind of information we actually need, maybe from now on we can add a question or two to our survey or do a short focus group over Skype.
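Even that extra bit can be kept cheap. Here's a toy example, with made-up responses, of what tallying one added survey question might look like:

```python
# A hypothetical question tacked onto the existing survey.
added_question = "Did the service help you do something you couldn't do before? (y/n)"
responses = ["y", "y", "n", "y", "n", "y"]  # made-up data

yes = responses.count("y")
print(f"{yes}/{len(responses)} answered yes to: {added_question}")
```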
These are some questions related to the performance of measurement and the measurement of performance. Do you have any answers?