5 Simple Rules for Evaluating Collective Impact
By Mark Cabaj
This is an excerpt from an article originally published in a special issue of The Philanthropist focused on Collective Impact.
The astonishing uptake of “Collective Impact” is the result of a perfect storm. In the face of stalled progress on issues such as high school achievement, safe communities, and economic well-being, a growing number of community leaders, policy makers, funders, and everyday people have been expressing doubt that “more of the same” will enable us to effectively address the challenges we face. In the meantime, social innovators have been relentlessly experimenting with an impressive diversity of what we can now call “Collective Impact” prototypes and learning a great deal about what they look like, what they can and cannot do, where they struggle, and where they thrive.
Then along came John Kania and Mark Kramer (FSG), who described the core ideas and practices of the first generation of Collective Impact experiments in a 2011 article for the Stanford Social Innovation Review. It was this skilfully communicated idea, presented by very credible messengers to a critical mass of hungry early adopters, that seems to have created a “tipping point” in our field and an impressive interest in this approach to addressing complex issues. In this article, I describe five simple rules for evaluating Collective Impact efforts.
Rule #1: Use Evaluation to Enable - Rather than Limit - Strategic Learning
For evaluation to play a productive role in a Collective Impact initiative, it must be conceived and carried out in a way that enables - rather than limits - the participants’ ability to learn from their efforts and make shifts to their strategy. This requires them to embrace three inter-related ideas about complexity, adaptive leadership, and a developmental approach to evaluation. If they do not, traditional evaluation ideas and practices will be the “tail that wags the dog” and end up weakening the work of Collective Impact.
While most Collective Impact participants would agree that they are wrestling with complex problems, often their response is to continue operating as if they were trying to solve simple problems, just on steroids. The only way to move the needle on community issues is to embrace an adaptive approach to wrestling with complexity. This means replacing the paradigm of pre-determined solutions and “plan the work and work the plan” stewardship with a new style of leadership that encourages bold thinking, tough conversations, and experimentation; planning that is iterative and dynamic; and management organized around a process of learning-by-doing.
Embracing a complexity lens and an adaptive approach to tackling tough community issues has significant implications for evaluating Collective Impact efforts. It means making Collective Impact partners - not external funders - the primary audience of evaluation. It requires finding ways to provide participants with real-time - as opposed to delayed and episodic - feedback on their efforts and on the shifting context so that they can determine whether their approach is roughly right or if they need to change direction.
Participants must eschew simplistic judgements of success and failure and instead seek to track progress towards ambitious goals, uncover new insights about the nature of the problem they seek to solve, and figure out what does and does not work in addressing it. They must give up on fixed evaluation designs in favour of ones that are flexible enough to co-evolve with their fast-moving context and strategy. In short, they need to turn traditional evaluation upside down and employ what is called Developmental Evaluation by some and Strategic Learning by others.
Rule #2: Employ Multiple Designs for Multiple Users
With so many diverse players, so many different levels of work, and so many moving parts, it is very difficult to design a one-size-fits-all evaluation model for a Collective Impact effort. More often than not, Collective Impact efforts seem to require a score of discrete evaluation projects, each worthy of its own customized design. Even straightforward developmental projects require a diverse and flexible evaluation strategy. For example, in a long-standing partnership between a half-dozen schools, service agencies, and funders to improve the resiliency of vulnerable kids in the inner core of a major Canadian city, it was determined that three broad “streams” of assessment were required:
- School principals and service providers wanted evaluative data in the spring to help them improve their service plans for the upcoming school year;
- The troika of funders required evaluative data to “make the case” for continued funding, with each funder requiring different types of data at different times of the year; and
- The partnership’s leadership team wanted a variety of questions answered to help them adapt the partnership to be more effective and ready the group to expand the collaboration to more schools.
In order to be useful, this Collective Impact group required what Michael Quinn Patton, one of the world’s most influential evaluators, calls a “patch evaluation design”: multiple (sometimes overlapping) evaluation processes employing a variety of methods (e.g., social return on investment, citizen surveys), whose results are packaged and communicated to suit diverse users who need unique data at different times.
The idea of multiple evaluation consumers and designs will not be a hit with everyone. However, the benefit of crafting flexible evaluation designs is that they are more likely to provide Collective Impact decision-makers with the relevant, useable, and timely evaluative feedback they need to do their work properly.
Rule #3: Shared Measurement If Necessary, But Not Necessarily Shared Measurement
The proponents of Collective Impact place a strong emphasis on developing and using shared measurement systems to inform the work. In their first article on Collective Impact, Kania and Kramer (2011) state, "Developing a shared measurement system is essential to collective impact. Agreement on a common agenda is illusory without agreement on the ways success will be measured and reported. Collecting data and measuring results consistently on a short list of indicators at the community level and across all participating organizations not only ensures that all efforts remain aligned, it also enables the participants to hold each other accountable and learn from each other’s successes and failures."
I could not agree more. In fact, I will add another reason that shared measurement is important for collective action. The process of settling on key outcomes and measures can sharpen a Collective Impact group’s thinking about what they are trying to accomplish. The case for robust measurement processes in Collective Impact efforts is overwhelming. While the case for shared measurement is strong and the practice increasingly robust, there are at least five things for Collective Impact practitioners to keep in mind while crafting a common data infrastructure:
- Shared Measurement Is Important, but Not Essential - The key players in the community-wide effort in Tillamook County, Oregon, to reduce teen pregnancy admit that they had “significant measurement problems,” but this did not prevent them from reducing teen pregnancy in the region by 75% in ten years.
- Shared Measurement Can Limit Strategic Thinking - Groups that pre-determine the indicators to be measured are inherently limiting the scope of their observations. Collective Impact participants should focus on strategies with the greatest potential for impact, not ones that offer greater prospects for shared measurement.
- Shared Measurement Requires “Systems Change” - In order to solve the “downstream problem” of fragmented measurement activities, local Collective Impact groups need to go “upstream” to work with policy makers and funders who create that fragmentation in the first place. For shared measurement to work, policy makers and funders must work together with local leaders to align their measurement expectations and processes.
- Shared Measurement Is Time-Consuming and Expensive - While it is true that innovations in web-based technology have dramatically reduced the cost of operating shared measurement systems, it can still take a long time and a surprisingly large investment to develop, maintain, and adapt such systems.
- Shared Measurement Can Get in the Way of Action - Collective Impact initiatives should avoid trying to design large and perfect measurement systems up front, opting instead for “simple and roughly right” versions that drive - rather than distract from - strategic thinking and action.
All in all, it is important that we not oversell the benefits, underestimate the costs, or ignore the perverse consequences of creating shared measurement systems. When developed and used carefully, they can be important ingredients in a community’s efforts to move the needle on a complex issue. Poorly managed, they can simply get in the way.
Rule #4: Seek Out Intended and Unintended Outcomes
All Collective Impact activities generate anticipated and unanticipated outcomes. Participants and evaluators need to try to capture both kinds of effects if they are serious about creating innovation and moving the needle on complex issues. Unanticipated outcomes can be good, bad, or somewhere in-between.
Unfortunately, conventional evaluation thinking and methods have multiple blind spots when it comes to complex change efforts. Logic models encourage strategists to focus too narrowly on the hoped-for results of a strategy, ignoring the diverse ripple effects. Happily, it is possible for Collective Impact participants and evaluators to adopt a wide-angle lens on outcomes. This begins with asking better questions: rather than ask “Did we achieve what we set out to achieve?” Collective Impact participants and their evaluators should ask, “What have been ALL the effects of our activities? Which of these did we seek and which are unanticipated? What is working (and not), for whom and why? What does this mean for our strategy?” Simply framing outcomes in a broader way will encourage people to cast a wider net in capturing the effects of their efforts.
In the end, however, the greatest difficulty in capturing unanticipated outcomes lies more in the reluctance of Collective Impact participants to seek them out than in the limitations of methodology or the skills of evaluators. Many Collective Impact participants are so conditioned by results-based accountability and management-by-objectives that they can’t see the “forest of results” because their eyes are focused on “the few choice trees” that they planted. It may well take a very long time to create a culture where people are deeply curious about all the effects of their work, so let’s push for having unanticipated outcomes as part of any Collective Impact conversation wherever and whenever we can and see how far we can get.
Rule #5: Seek Out Contribution - Not Attribution - to Community Changes
One of the most difficult challenges for the evaluators of any intervention - a project, a strategy, a policy - is to determine the extent to which the changes that emerge in a community are attributable to the activities of the would-be change makers or to other, non-intervention factors.
The question of attribution is a major dilemma for participants and evaluators of Collective Impact initiatives. Collective Impact participants need to sort out the “real value” of their change efforts and the implications for their strategy and actions, yet determining attribution is the most difficult challenge in an evaluation of any kind.
The concept and methodology of Contribution Analysis offer an alternative approach, one which acknowledges that multiple factors are likely behind an observed change or changes and therefore focuses on understanding the contribution of the Collective Impact effort’s activities to the change. Despite the obvious benefits of the approach, the methodology is still not widely employed nor well developed in the field of community change. This must change. If Collective Impact stakeholders are serious about understanding the real results of their activities and using evidence - not intuition - to determine what does and does not work, they will make contribution analysis a central part of their evaluation strategy.
Evaluation is an intrinsic component of the Collective Impact framework: it enables the rapid feedback loop that is so critical to adjusting strategies, surfacing innovations, and supporting the continuous communication among partners. Through shared measurement, one of Collective Impact’s five key conditions, evaluation also ultimately provides a way to assess the overall efficacy of these complex initiatives over the longer term.