Wednesday, November 16, 2011

Low Community EQ (Evaluation Quotient) can lead to Sisyphean Madness.

An article in today's Des Moines Register questions whether "state tax credits" have "worked" for Iowa or not.  Here we go again.  In the void created by the absence of real evaluation and/or performance measurement, your guess is as good as mine. Not so fast, you say; not all guesses are created equal.


Hold on. I'm reminded of two things that Michael Quinn Patton has said about the complicated and delicate subject of "evaluation."  One addresses its political nature; the other, its religious nature.


Evaluation's political nature.  Back in the late 1980s, the American Evaluation Association (AEA) staged an essay contest among evaluators to address: What is and is not politics in evaluation, and by what criteria does one judge the difference? Submissions were received, and Robin Turpin's 1989 essay was selected.  One of her assertions illustrated the overarching theme of all entries: "politics has a nasty habit of sneaking into all aspects of evaluation." One submission, an anonymous one that Patton included in Utilization-Focused Evaluation (1997, p. 352), was unequivocal:


Evaluation is NOT political under the following conditions:
No one cares about the program. 
No one knows about the program. 
No money is at stake. 
No power or authority is at stake. 
And, no one in the program, making decisions about the program, or otherwise
involved in, knowledgeable about, or attached to the program, is sexually active. 

Evaluation's religious nature.  Evaluators chat it up on EVALTALK and several other venues. Back in 2001, when compassionate conservatives were unveiling so-called "faith-based" organizations of one kind or another, evaluators were huddling up to explore the unique opportunities and implications connected to these kinds of programs.  Putting things into proper perspective was, again, Michael Quinn Patton (EVALTALK listserv, 2001), who reminded everyone of the following:


From an evaluation perspective, 
any program is faith-based unless and until it has evaluation evidence of effectiveness.  
By that criterion, most programs have always been and remain essentially faith-based.


I'll leave it at that.  No, wait. I can't resist.  Programs can be "judged" by anyone with an opinion, but they can only be "evaluated" by someone with data.


Sunday, September 11, 2011

Believe it or not, "evaluation" isn't the same as "passing judgment." A Response to the Des Moines Register’s Editorial: Start Selling Schools Plan Now (09/11/11)


Think about what the League of Women Voters does, in particular its citizen-education efforts, to help the public cut through distorted and confusing campaign rhetoric and make better-informed decisions when voting.  I liken its efforts to the ways in which Consumer Reports has helped me and others make decisions about purchasing appliances and cars without the “assistance” of advertising or impression-management marketing. Essentially, what the League and Consumer Reports offer is a set of evaluation criteria and a simple framework that can be used to compare apples to apples, oranges to oranges.

If consumers and voters had a powerful Chamber of Consumption and Public Participation (CCPP), the way businesses have their local, regional, and national Chambers of Commerce (yeah right, the day after hell freezes over), there would be no need for the League of Women Voters or Consumer Reports.  Well-funded CCPP educational campaigns might, over time, exercise and develop our technical and socio-cultural decision-making competencies such that the practice of creating and disseminating fallacious, ridiculous, and offensive ads and campaigns might begin to taper off and the public appetite for better and more usable information might begin to grow.

Some would argue that the media already serves the public the way I am dreaming the Chamber of Consumption and Public Participation would.  I would argue that the public can only dream of the media ever serving its needs in this way, given the social scale and cultural depth of the educational challenge. However, the Des Moines Register was right to ask the Branstad administration to share its “plan” for reforming schooling throughout Iowa (see “Start selling schools plan now” 09/11/11 http://www.desmoinesregister.com/article/20110911/OPINION03/309110017/Start-selling-schools-plan-now ).  The public is, in fact, only getting bits and pieces of the plan; but who knows, maybe all they have are “ideas” right now (see “Iowa officals (sic) unveil ideas for education reform” 09/06/11 http://www.desmoinesregister.com/article/20110906/NEWS/110906034/Iowa-officals-unveil-ideas-education-reform).  In the editorial from 09/11/11, the paper rightly states, “There are few specifics right now… The administration does not plan to release a comprehensive plan with details, including costs, for about a month.”  But then the editorial goes on to say, “That makes it difficult for Iowans to fully evaluate and debate what is being proposed (my emphasis).”

It was at this particular point that my “evaluator” ears went up.  The paper’s assumption here is that, once the administration releases the plan, Iowans can go to town evaluating it and debating it.  I think it does a disservice to treat evaluation so superficially.  Granted, as an evaluator, I would be professionally hesitant to expect that anybody could just commence to evaluatin’ without some sort of preparation, if not formal training.  I don’t see much evidence that the public is actually accustomed to engaging in an actual evaluation, unless, of course, you redefine evaluation to mean jumping to an opinion-driven conclusion, which, if we could harness its combustibility, would make us energy independent in a flash!   Evaluation, like research, is an inquiry strategy that draws on the scientific method for its execution; and we know how some people feel about science.

Now, here’s what the Register could have said: “OK, while the administration is finishing its comprehensive school-reform plan, we have a month to formulate what evaluators call an evaluability assessment.” (For an overview of evaluability assessments, see http://tinyurl.com/5rqgo54 )  “We hope that readers will become familiar with and make use of the evaluation criteria and framework that we plan to introduce in the next few weeks.  And once the administration does make the plan available, we encourage you to join us in a Public Evaluability Assessment of the Comprehensive Plan so that we can collectively examine a.) the clarity of its goal, b.) the extent to which it takes into consideration all of the stakeholders’ views, and c.) the intervention strategy itself that promises to make a difference.  We will be working with Paul Longo, who has graciously offered to share his framework, based on the Performance Blueprint (Longo, 2004 & 2002), for conducting an Evaluability Assessment.* We have inserted the following table as an introduction.  In the next few days we will begin by outlining what we already know about the plan. In this way we will become familiar with some of the technical terminology and anticipate the purposes and potential positive consequences of conducting a Public Evaluability Assessment.  The paper is also considering applying the same process and your valuable input to the Capital Crosswords initiative.”
* Some of you will recognize this framework as the template for formulating a fully articulated strategy.


CONDUCTING AN EVALUABILITY ASSESSMENT

Does the PLAN address the following considerations? (For the code-inclined, a small sketch of this checklist follows the list.)

1. Identify the desired External (community) and Internal (organizational) Outcomes corresponding to the required “effort” and intended “effect.”

2. Identify the direct and indirect Targets of the strategic initiative (i.e., customers, clients, beneficiaries, communities, and, of course, the intended consumers and/or users of the performance information that will be generated throughout the execution of this initiative).

3. Identify the Desired Effects of the strategic initiative on the targets (i.e., gains in knowledge, awareness, attitude, skills, and conditional status) along with Measures of Quantity & Quality to provide evidence of the attainment, internalization, acquisition, or approximation of those effects among the targets.

4. Develop the work-in-progress articulation of the Strategy to outline how the strategic initiative will facilitate the attainment, internalization, acquisition, or approximation of the desired effects among the targets.

5. Identify the Direct, Indirect, and Collaborating Service Providers who are positioned to and/or capable of bringing about the attainment, internalization, acquisition, or approximation of the desired effects among the targets.

6. Identify how the Direct, Indirect, and Collaborating Service Providers are expected to perform by establishing Measures of Efficiency and Equity in resource management, collaboration, and service delivery (i.e., the “effort”) so as to envision how much service is to be provided and how well it is to be provided.

7. Set forth some idea of how the Performance Information generated by Items #3 and #6 will be collected and used: by whom, how frequently, for whose benefit, and for what internal and external purposes (e.g., mandatory reporting, voluntary reporting, resource allocation, decision-making, public education, marketing, continuous improvement, and so forth).  There will be an ongoing challenge to explain the relationship(s) between OUTPUTS and OUTCOMES in an evidentiary, valid, and credible fashion.

8. Articulate the Tangible and Intangible Resources needed to execute, maintain, and grow the strategic initiative.
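For readers who think better in code, here is one way to treat those eight considerations as a working checklist. To be clear, this is a minimal sketch of my own: the item labels paraphrase the table above, and the yes/no scoring rule is an illustrative assumption, not part of the Performance Blueprint itself.

```python
# A minimal sketch (mine, not the Blueprint's): the eight considerations
# above, treated as a yes/no checklist for a plan.

EVALUABILITY_ITEMS = [
    "External (community) and internal (organizational) outcomes identified",
    "Direct and indirect targets identified (including information users)",
    "Desired effects identified, with measures of quantity and quality",
    "Work-in-progress articulation of the strategy developed",
    "Direct, indirect, and collaborating service providers identified",
    "Provider performance defined via measures of efficiency and equity",
    "Use of performance information specified (who, how often, for what)",
    "Tangible and intangible resources articulated",
]

def assess_evaluability(plan: dict) -> None:
    """Report which considerations a plan addresses.

    `plan` maps each item string to True/False, as judged by whoever
    is conducting the (public) evaluability assessment.
    """
    addressed = [item for item in EVALUABILITY_ITEMS if plan.get(item)]
    print(f"Addressed {len(addressed)} of {len(EVALUABILITY_ITEMS)} considerations.")
    for item in EVALUABILITY_ITEMS:
        if not plan.get(item):
            print(f"  Not yet evaluable on: {item}")

# Example: a plan that has named outcomes and targets but little else.
assess_evaluability({
    "External (community) and internal (organizational) outcomes identified": True,
    "Direct and indirect targets identified (including information users)": True,
})
```

Notice the verdict such a checklist supports is not “the plan works” or “the plan doesn’t work”; it is only “the plan is (or is not yet) ready to be evaluated,” which is precisely the point of an evaluability assessment.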


Thursday, September 8, 2011

What is a fully articulated strategy?


Because of my biases, I don't consider a strategy to be fully articulated unless it addresses the first eight of the following nine considerations, and not necessarily in chronological order.  Resources are too finite and precious to be squandered on let's-wait-and-see approaches. As I mentioned in a previous post, the strategy itself is a performance and, therefore, subject to assessment (i.e., measurement); its articulation is a performance, and the person articulating it is a performer.  We cannot refer to a partially articulated strategy as a "good strategy." For a strategy to be accepted as "good," it must tell a plausible and compelling story of how a constellation of moving parts in a culturally diverse and politically charged setting will facilitate the managed and documentable conversion of resources into results, of promises made into promises kept. (A rough code sketch of the framework follows the list below.)

Here's one way of looking at it:



What is a fully articulated STRATEGY?

1. Identify the desired External (community) and Internal (organizational) Outcomes corresponding to the required “effort” and intended “effect.”
2. Identify the direct and indirect Targets of the strategic initiative (i.e., customers, clients, beneficiaries, communities, and, of course, the intended consumers and/or users of the performance information that will be generated throughout the execution of this initiative).
3. Identify the Desired Effects of the strategic initiative on the targets (i.e., gains in knowledge, awareness, attitude, skills, and conditional status) along with Measures of Quantity & Quality to provide evidence of the attainment, internalization, acquisition, or approximation of those effects among the targets.
4. Develop the work-in-progress articulation of the Strategy to outline how the strategic initiative will facilitate the attainment, internalization, acquisition, or approximation of the desired effects among the targets.
5. Identify the Direct, Indirect, and Collaborating Service Providers who are positioned to and/or capable of bringing about the attainment, internalization, acquisition, or approximation of the desired effects among the targets.
6. Identify how the Direct, Indirect, and Collaborating Service Providers are expected to perform by establishing Measures of Efficiency and Equity in resource management, collaboration, and service delivery (i.e., the “effort”) so as to envision how much service is to be provided and how well it is to be provided.
7. Set forth some idea of how the Performance Information generated by Items #3 and #6 will be collected and used: by whom, how frequently, for whose benefit, and for what internal and external purposes (e.g., mandatory reporting, voluntary reporting, resource allocation, decision-making, public education, marketing, continuous improvement, and so forth).  There will be an ongoing challenge to explain the relationship(s) between OUTPUTS and OUTCOMES in an evidentiary, valid, and credible fashion.
8. Articulate the Tangible and Intangible Resources needed to execute, maintain, and grow the strategic initiative.
9. Repeat Steps 1–8 in no particular order.
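Here is that framework again, this time as a data structure. This is a sketch under my own assumptions: the field names follow the nine considerations, but the types, the class layout, and the completeness test are mine, not a prescribed format.

```python
from dataclasses import dataclass, field

@dataclass
class Measure:
    name: str
    kind: str  # effect: "quantity" / "quality"; effort: "efficiency" / "equity"

@dataclass
class Strategy:
    # Item 1: external (community) and internal (organizational) outcomes
    external_outcomes: list[str] = field(default_factory=list)
    internal_outcomes: list[str] = field(default_factory=list)
    # Item 2: targets, including the users of the performance information
    targets: list[str] = field(default_factory=list)
    # Item 3: desired effects and their measures of quantity & quality
    desired_effects: list[str] = field(default_factory=list)
    effect_measures: list[Measure] = field(default_factory=list)
    # Item 4: the work-in-progress articulation of the strategy
    narrative: str = ""
    # Item 5: direct, indirect, and collaborating service providers
    providers: list[str] = field(default_factory=list)
    # Item 6: the "effort" - measures of efficiency and equity
    effort_measures: list[Measure] = field(default_factory=list)
    # Item 7: how performance information will be collected and used
    information_uses: list[str] = field(default_factory=list)
    # Item 8: tangible and intangible resources
    resources: list[str] = field(default_factory=list)

    def is_fully_articulated(self) -> bool:
        """Item 9, more or less: every part must have something in it."""
        parts = [self.external_outcomes, self.internal_outcomes, self.targets,
                 self.desired_effects, self.effect_measures, self.narrative,
                 self.providers, self.effort_measures, self.information_uses,
                 self.resources]
        return all(bool(p) for p in parts)
```

The deliberate split between `effect_measures` and `effort_measures` is the heart of the matter: it is what makes Item 7's challenge, relating OUTPUTS to OUTCOMES, something you can actually attempt with evidence.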

Monday, August 29, 2011

Performance Measurement: The Motion Picture!

What does performance measurement look like? Here's one way of "picturing" how Resources, Strategies, Service Personnel, and Service Beneficiaries interact so as to produce Outputs and Outcomes.  Come on, you can tell your friends you watched a 33-second movie about "logic models" with original, homemade Performance Measurement music!   Of course it's more complicated than this representation, but hey, nobody ever said the "map" was a substitute for actual travel or the "menu," a substitute for a good meal!  Speaking of representations, none of these "models," "maps," "chains," "frames," and so forth, can be eaten!

(Sorry I couldn't make this video any bigger; if you want it larger, click on "view full size," but then it gets a little blurry.)
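For those who would rather read than watch, here is the same chain in plain text. A sketch of my own, with the standing caveat that the map is not the travel:

```python
# The 33-second movie, reduced to one line of output.
# A sketch only: real programs are never this linear.

logic_model = ["Resources", "Strategies", "Service Personnel",
               "Service Beneficiaries", "Outputs", "Outcomes"]

print(" -> ".join(logic_model))
# Resources -> Strategies -> Service Personnel -> Service Beneficiaries -> Outputs -> Outcomes
```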


Now, you want something you can sink your teeth into?  Chew on these links:
1. An Approach to Performance Measurement: Using the Performance Blueprint and Related Ongoing Performance Measurement & Management (OPM&M) Techniques

2. Other related documents

Wednesday, August 24, 2011

Is Accountability in the Eye of the Beholder?


In the previous post I mentioned that, since we have nearly run out of money (and genuine curiosity) on so many fronts, we have practically abandoned rigorous examinations of programs in search of data to document, if not the attainment, then at least the approximation of the desired outcomes they project.  I added that some of us are inclined to take the law, as it were, into our own hands when it comes to assessing programs, short-circuiting the burdensome and rigorous approaches to comprehensive performance measurement, as depicted in the following illustration of the Performance Blueprint:


in favor of a cheaper and more convenient approach to assessment based on intuitive, implicit criteria fortified by deeply held convictions, an approach that simplifies assessment so that any self-deputized assessor can render “it-either-works-or-it-doesn’t” judgments, as depicted in the following illustration:

Then I came across the following video clip of presidential candidate Rick Perry and a moderator.  I apologize; I do not know the origin of this video clip, the setting, the date, or any other significant aspect of the context.  I am trying hard not to use this as a political statement.  I did transcribe the soundtrack, and I have inserted it into the clip for your convenience.



What I would like to point out is that this video clip helps us distinguish between two countervailing types of accountability.

I prefer an approach to accountability that counts on “science” as a resource; therefore, I expect there to be a quantification of community (external) outcome attainment or approximation.  This can be depicted in the following manner:


If you’ve had the chance to view the video clip, you might wonder whether Gov. Perry is approaching accountability from an entirely different paradigm.  We could spend much more time on this analysis, but I would simply argue that Gov. Perry is counting on “religion” or some other related ideological resource more so than on “science.”  The presence of data supporting an assertion of program failure in relation to community (external) outcome attainment or approximation (state-level teen pregnancy rate) proves to be of little or no significance to Gov. Perry, who maintains that abstinence education is still the best approach (at the beginning of the clip), if not at least well worth the expenditure (by the end of the clip) because it is, after all, the most effective formula for those who believe in it and put it into practice.



Comments?

I’ve included the transcript of the video clip below as well.

TRANSCRIPT:
Moderator:  Let me ask you another one (previously elicited questions) here on a different topic, Governor, why does Texas continue with abstinence education programs when they don’t seem to be working? In fact, I think we have the third highest teen pregnancy rate in the country.

Governor: Abstinence works.

Moderator:  But we are the third highest teen pregnancy, we have the third highest teen pregnancy rate among all states in the country. The questioner’s point is, it doesn’t seem to be working….abstinence education…

Governor:  It…works, ah, maybe it's… maybe it's the way it's being taught or the way it's being applied out there, but the fact of the matter is… it is the best form of…to teach our children.

Moderator: Can you -  

Governor: And I’m sorry…

Moderator: …give me a statistic suggesting it works?

Governor:  And the point is, if, … if we’re not teaching it, and if we’re not impressing it upon them, then, no; but if, if, if the point is, you know, we’re gonna go stand up here and say, listen, y’all go have sex and go have the ah whatever is going on, and we’ll worry with that, and here are the, here’s the ways to have safe sex. I’m sorry, you can call me old fashioned, if you want, but I’m not going to stand up in front of the people of the State of Texas and say, that’s the way we need to go and forget about abstinence.

Moderator:  That’s not what the ques, and with respect Governor, that’s not what the questioner was asking, the questioner was simply saying, we’re spending money on abstinence education, we’re the third highest teen pregnancy rate in the country. Is there a problem, a disconnect between one and the other?

Governor:  I don’t know. Look, it, it, it gets in line with… ah … it gets in line with other programs that we have that we spend money on, and do they work one hundred percent or do they work five percent? That’s a bigger and a better issue than, well we have the third highest teenage pregnancy rate. Ah, are we, on the amount of money we are spending, are we getting a return on that that is appropriate?

Governor: I think-

Moderator: And your belief is that we are?

Governor:  I think those are some dollars that are well spent.  For instance, we’re spending dollars to check kids for steroids, right? And what’d we find? Seven? Fifteen? And we spend x numbers of ah, Look I’m not…

Moderator:  You think that was a poor expenditure.

Governor:  I am saying that if, no I’m trying to make a comparable here, if that’s (steroid program) a good expenditure, then I would suggest to you the dollars we’re spending on abstinence education is a good expenditure.

Friday, August 19, 2011

The performance of measurement and the measurement of performance

I want to present a quick illustration of "performance measurement" using diagrams and questions.  For a more authoritative treatment of this topic, which compares and contrasts ongoing performance measurement and periodic program evaluation, see Performance Measurement and Evaluation: Definitions and Relationships, US Government Accountability Office, May 2011 (GAO-11-646SP).

Our context is the "program," which, according to GAO-11-646SP, is "any activity, project, function, or policy that has an identifiable purpose or set of objectives."


So, think "program." OK? Let's begin.
Ultimately, we'll address the full question - What is performance measurement? - at the end, but first, let's break things down; and remember, I'm looking at this as an applied anthropologist and evaluation consultant, so expect an eclectic approach.  I automatically combine technical and cultural considerations.

Let's unpack what we mean by performance first because, in addition to a few whats, there are a few whos, whens, hows, under what conditions, so whats, etc., all of which will have significant implications when we attempt to answer - What do we mean by measurement?

Resources are performing; of course, they don't up and convert themselves into results; they need to be managed properly by resource people. Still, they certainly have value in and of themselves, and, in the case of "money," the benefactors who invest this kind of resource expect a return on their investment. But not all resources are furnished by benefactors expecting a return.  Take, for example, certain forms of social or cultural capital.  Let's say a community has been successful for the past 10 years and has chalked up a lot of accomplishments. That momentum is an intangible resource owned by nobody in particular, yet it can be capitalized upon or squandered by a particular program.

We typically focus on the "performance" of those people or groups who are targeted by the program: the clients, customers, beneficiaries, and so forth.

The "targets" are the ones being served, depicted here as though they were customers at a restaurant.  Sometimes the intended "target" of a program is an environmental condition, e.g., water quality.  But individuals and groups are usually behind these conditions, and programming usually aims to change their knowledge, skills, and attitudes so that these conditions can improve.  The performance of the people being served has traditionally been of the utmost importance.  Programs are usually all about creating certain effects in terms of changing individual and social behaviors. Our baseline studies explore existing (usually undesirable) performance levels, and programs are designed to bring about potential (usually desirable) performance levels. [Sneak peek: any time any targeted individual or group moves from existing to potential levels of performance, the program should, I would hope, be able to credit itself for such a development...well, as long as it's documented!]
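To make that sneak peek concrete: once movement from existing to potential performance levels is documented, quantifying the effect is simple arithmetic. A minimal sketch, with invented numbers:

```python
# Sketch: quantifying documented movement from existing (baseline)
# performance levels to potential (desired) levels. Numbers invented.

targets_served = 200       # people the program actually reached
documented_movers = 130    # those with evidence of reaching the desired level

attainment_rate = documented_movers / targets_served
print(f"Documented outcome attainment: {attainment_rate:.0%}")  # 65%

# Only documented movement counts; undocumented success remains,
# from an evaluation perspective, faith-based.
```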

Notice I seated "information users" at the table.  This is hardly ever done.  The folks who expect progress reports, the people who might like a report but don't even know how much they'd like one, even the program people buried deep within the program itself: all of these people stand to gain something from periodic updates, insightful stories, early warnings, and so forth.  They have information needs that can be addressed or ignored; if they're addressed, then they can be addressed well (or not).  Whose responsibility is it to identify their information needs?  Who's going to find out how well these needs are met?  Who's going to make sure that their needs are honored by the very same program that will otherwise be all about serving the targeted needs of those being served?

Now's a good time to introduce another performer:

Ah ha!  The strategy itself as "performer."  Some strategies are better than others, even before they are competently or incompetently executed.  Now, we're also talking about the performance of those who design the strategies.  And since the dramatic inter-relatedness of all these performers must be acknowledged once and for all, let's add the people who do the heavy lifting, without whom not a single resource could be converted into programs that touch the lives of those needing or wanting to change things around: the direct, indirect, and collaborating service providers!

OK, now we have the whole cast of performers on the stage.  I've left the best for last.  The folks who do the heavy lifting are the stars.  They're usually the unsung heroes. Typically over-worked and under-resourced, their "efforts" are the most critical link in the entire chain.  How much they do and how well they do it will determine the outcome of the program: a program's effectiveness directly hinges upon the efficiencies and equities associated with its providers' efforts.

There is usually a constellation, a network of direct and indirect service providers, depicted as wait staff at a restaurant in the above illustration.  Perhaps the hardest part of doing the heavy lifting is simply working together or collaborating.  I'm going to throw in a slide on Collaboration for your consideration.

If you want more information on this topic, go to A. Himmelman (2002).  Himmelman offers a way to look at "working together" (or not) like no other I've seen.  Consider what is entailed by his operationalization of collaboration:

Exchanging information, altering activities, sharing resources, and enhancing the capacity of another for mutual benefit and to achieve a common purpose. 

[Please do not apply this definition to our elected politicians.]

So, we've taken a closer look at what we mean by performance.  We can look at resources and strategies as performers; plus, we can look at those performers dramatizing the EFFORT and those performers personifying the EFFECT.

Let's turn to the second question.


For the longest time we felt comfortable describing only a small spectrum of the "activities" performed by programs.  We did this... We did that...  Then one day a new ad appeared on television that prompted us to shift our paradigms.

Where's the beef?


So we started talking about results and we got all outcomes oriented...until we ran out of money. We haven't really abandoned the search for results; you can still hear people calling for accountability.  It's just that there's no money to go out and find it.  So, some people have taken the law into their own hands, when it comes to assessing programs, and they've developed their own economical approach to assessment based on intuitive criteria and their own convictions.


I don't recommend that kind of approach.

I'm not sure where we're going to find the money to do this. Maybe there are pre-existing intangible resources out there to draw on. At any rate, here are some questions to consider related to "measurement" and linked to what we've discussed about "performance."

This first one, we're pretty good at already.  There's actually still a little money left to have at least some fun with this one; unfortunately, we've convinced ourselves that this is the only kind of measurement question to raise.



The problem here is that this is like standing in the Mississippi River Delta while attempting to address a question about the entire Mississippi River watershed!  There are some serious things to consider upstream!  We need tangible and intangible resources to go upstream and answer some additional questions.


OK, so we made it all the way up to Memphis, TN; now we're getting somewhere.  What if we're beginning to find out that the results are not pouring in the way we had expected?  Suppose we discover that not enough people being served in the program are, indeed, moving out of undesirable levels of performance and into desirable levels of performance.  If some "targets" are succeeding, and we know we're producing at least a certain amount of EFFECT, why can't they all succeed?  Why can't there be even more EFFECTS?  Maybe it has something to do with the way in which the services are being delivered?  Maybe it has something to do with the EFFORT?  How would we find that out?
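One way, sketched under my own assumptions: if we had even crude EFFORT measures (say, a service-quality rating per site) sitting next to EFFECT rates, we could at least see whether delivery and results move together. The data below is invented for illustration.

```python
# Sketch: do sites with stronger EFFORT (service quality) show stronger
# EFFECT (share of targets reaching desired performance)? Invented data.

sites = {
    # site: (service_quality_rating 0-10, effect_rate 0-1)
    "North":   (8.5, 0.62),
    "Central": (6.0, 0.41),
    "South":   (3.5, 0.22),
}

for name, (quality, effect) in sorted(sites.items(), key=lambda kv: -kv[1][0]):
    print(f"{name:8} effort={quality:4}  effect={effect:.0%}")

# If effect tracks effort across sites, look upstream at service delivery
# before concluding there's something wrong with the people being served.
```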

Yes, we already talked about this.  Is anybody assessing the performance of the service providers?  Why didn't we think of that?  Even if we had thought of that, how would we have financed it?  There's hardly enough money to provide the basic services!  How are we supposed to do both: provide services and measure ourselves while we're providing those services?  There's got to be another reason why we're not racking up the results.  There's something wrong with the people we're serving.  They're lucky we're doing anything at all for them.  I knew this program would fail.


No, wait a minute!


Call everybody into a meeting. Let's look at why we're doing this in the first place.  What called us into social change?  We're compassionate people. Our organization is committed to a set of values established many years ago, and we were entrusted by our benefactors to do whatever it takes to make a difference.  Let's go back to our Mission Statement, our Vision, our Values, etc.


We're already gathering information to satisfy our external reporting requirements, so let's take a look at what we have; who knows, maybe we'll find a way to collect a little more information, you know, since we're already doing it.  Now that we have a better idea what kind of information we actually need, maybe from now on we can add a question or two to our survey or do a short focus group over Skype.

These are some questions related to the performance of measurement and the measurement of performance.  Do you have any answers?  For more information click here.

Thursday, August 11, 2011

More on Collaboration, No, Not Moron Collaboration

We need a lot more on collaboration: how to understand it as a resource, as a strategy, as part of a strategy, as a measure of how much EFFORT is expended, as a measure of how well that EFFORT is expended, and as a targeted, internal, organizational outcome that is likely the most crucial and instrumental vehicle toward effectiveness.

We don’t need any more illustrations of moron collaboration; I couldn’t resist.


A helpful resource for defining (and therefore operationalizing, and eventually formulating evaluation criteria and measures for) collaboration is Arthur Himmelman's COLLABORATION FOR A CHANGE (revised 2002). At the root of collaboration is the ability and willingness to "work together," but there are a lot of ways to work together (or not), and Himmelman distinguishes among four of them, in ascending order by magnitude and complexity: networking, coordination, cooperation, and collaboration.

Our current fascination with so-called "results" (effects) has inclined us to overlook and underestimate the many critical "means" by which effects are produced. We neglect to focus on and evaluate the many "efforts" required to produce these juicy results (as long as they're performed well enough, right?), and the most critical effort, I believe, typically falls under the ill-defined or completely empty notion of collaboration. We can do better. I believe the presence of collaboration can be evaluated; and logically, I believe the absence of collaboration can be evaluated.

Look at Congress. They don't appear to be collaborating, do they? Maybe it's unfair to use Himmelman's definition of collaboration to evaluate Congress, "exchanging information, altering activities, sharing resources, and enhancing the capacity of other partners for mutual benefit and to achieve a common purpose," but why wouldn't we want this dimension (collaboration) of any effort to be treated comprehensively and deliberately by collecting and making use of evaluative information, so that it, along with other desired internal outcomes, might have a better chance of producing desired external outcomes?
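Because Himmelman's four ways of working together are ordered and cumulative (each level adds a behavior to the one below it), they lend themselves to a simple model. A sketch of my own: the level definitions paraphrase Himmelman (2002), while the code structure and the rubric are illustrative assumptions.

```python
from enum import IntEnum
from typing import Optional

# Himmelman's continuum, ascending by magnitude and complexity.
# Definitions paraphrased from Himmelman (2002); structure is mine.

class WorkingTogether(IntEnum):
    NETWORKING = 1     # exchanging information for mutual benefit
    COORDINATION = 2   # + altering activities
    COOPERATION = 3    # + sharing resources
    COLLABORATION = 4  # + enhancing the capacity of another

def highest_level(exchanges_info: bool, alters_activities: bool,
                  shares_resources: bool, enhances_capacity: bool
                  ) -> Optional[WorkingTogether]:
    """Each level presupposes the ones below it."""
    level = 0
    for step in (exchanges_info, alters_activities,
                 shares_resources, enhances_capacity):
        if not step:
            break
        level += 1
    return WorkingTogether(level) if level else None

# Congress, by this rubric (an editorial judgment, not a measurement):
print(highest_level(exchanges_info=True, alters_activities=False,
                    shares_resources=False, enhances_capacity=False))
# WorkingTogether.NETWORKING
```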

(I had already begun this post, when I noticed a discussion on LinkedIn's Performance Measurement group. If you can navigate to that discussion, you might find it of interest.)
 

Tuesday, August 9, 2011

Culture should not eat strategy; they should dine together, shouldn't they?


It’s a lot easier to talk about accountability and transparency than it is to put them into practice in meaningful and effective ways.  The very decision to put accountability and transparency into practice can, in fact, be an important step in transforming the culture that some say eats strategy for lunch.  Who can afford that kind of lunch!  The expression “culture eats strategy for lunch” is, unfortunately, a realistic reminder never to underestimate the corrosive power of people unwilling or unable to work together well enough to survive; there is nothing more dangerous to the effective execution of a strategy than the partial or complete absence of collaboration.  I prefer to believe that the lack of collaboration is due not so much to an unwillingness to work together as to an inability to work together efficiently and equitably, guided by a single unifying strategic vision with meaningful measures.

A tool that I have used, the Performance Blueprint, which I’ll be writing more about over the next few days, helps all of the performers associated with a given strategy see where they fit, whether they are part of the required EFFORT, the desired EFFECT, or both.


Like a strategy map, the Performance Blueprint helps performers locate themselves.  For a given strategy, there is a constellation of performers and a corresponding constellation of immediate and not-so-immediate measures to gauge the attainment or approximation of desired progress and/or outcomes.  In this big-picture view of the Blueprint, it helps to understand the questions behind its moving parts and the distinction between effort and effect, which is where we will pick up again.