Tamarack
Learning Centre

About the Learning Centre

The Learning Centre, established in 2003, is designed to create a fluid, creative system for documenting community-building activity and delivering this learning to organizations. The centre has a threefold purpose: to broadly disseminate knowledge gathered through research and practical experience; to help communities increase their power through learning; and to generate knowledge about community engagement so as to advance the field. Learn more about the Learning Centre here.

Resource Library - Explore Tamarack's community engagement resources - including research, articles and related links.

Be sure to sign up to receive Engage!, Tamarack's free monthly e-magazine, which helps you stay current on the latest developments in the field of community engagement.

Subscribe here or read one of our latest issues below.

Engage!
Beautiful thinking for November from Tamarack Institute

In this Issue

What is the Future of Collective Impact?
[By: Sylvia Cheuy]

The first-ever Collective Impact Summit attracted 300 learners from a diversity of sectors and eight different countries to spend five days learning together. This is a testament to the incredible interest that Collective Impact has gained in just three short years. But where is all of this headed? What is the future of Collective Impact? Watch and see the insights this question sparked among the Summit's keynote presenters.

Collective Impact was identified as "the #2 buzzword in the field in 2011," and there are now a multitude of self-identified Collective Impact initiatives underway across North America and beyond. At the Collective Impact Summit, Mark Cabaj suggested that we are now entering the time of Collective Impact 3.0. He outlined the evolution of Collective Impact this way:

  • Collective Impact 1.0 was the time when many groups were experimenting independently with a new, collaborative way of tackling tough community issues using the principles of Collective Impact but without a common language to describe this way of working. 

  • Collective Impact 2.0 began with the publication of John Kania and Mark Kramer's first article on Collective Impact in the Stanford Social Innovation Review in 2011. Entitled Collective Impact, the article was a "skilfully communicated idea, presented by very credible messengers to a critical mass of hungry early adopters, that seems to have created a “tipping point” in our field and an impressive interest in this approach to addressing complex issues."

  • Collective Impact 3.0 - which is now - is a time when practitioners must help to deepen this emerging field of practice.

The rapid uptake of Collective Impact is generating many questions and plenty of debate. There is some concern that the popularity of Collective Impact is resulting in some organizations and collaboratives adopting the language of Collective Impact but not actually applying it to their work. This is a legitimate concern. But for those working on complex community issues who find the promise of Collective Impact compelling, the two central questions we need to focus on are:

  • Can Collective Impact be effective? and
  • What are the practices, capacities and enabling ecology required to support Collective Impact?

We need to deepen our shared understanding of the field of Collective Impact. To do this, we as Collective Impact practitioners need to focus our efforts on:

  • Co-developing robust practices (principles, methods and techniques) of Collective Impact;

  • Co-building the capacity we need to support these practices (e.g. skills, mental models, spirit); and,

  • Co-creating the ecology required to support this work (e.g. learning networks, policies, resources and culture).

By sharing effective case studies, ideas and resources via learning networks, we will be actively engaged in creating a positive future for this very promising field of practice.

5 Simple Rules for Evaluating Collective Impact
[By: Mark Cabaj]

This is an excerpt from an article originally published in a special issue of The Philanthropist focused on Collective Impact.

The astonishing uptake of “Collective Impact” is the result of a perfect storm. In the face of stalled progress on issues such as high school achievement, safe communities, and economic well-being, a growing number of community leaders, policy makers, funders, and everyday people have been expressing doubt that “more of the same” will enable us to effectively address the challenges we face. In the meantime, social innovators have been relentlessly experimenting with an impressive diversity of what we can now call “Collective Impact” prototypes and learning a great deal about what they look like, what they can and cannot do, where they struggle, and where they thrive.

Then along came John Kania and Mark Kramer (FSG), who described the core ideas and practices of the first generation of Collective Impact experiments in a 2011 article for the Stanford Social Innovation Review. It was this skilfully communicated idea, presented by very credible messengers to a critical mass of hungry early adopters that seems to have created a “tipping point” in our field and an impressive interest in this approach to addressing complex issues. In this article, I describe five simple rules for evaluating Collective Impact efforts.

Rule #1: Use Evaluation to Enable - Rather than Limit - Strategic Learning

For evaluation to play a productive role in a Collective Impact initiative, it must be conceived and carried out in a way that enables - rather than limits - the participants’ ability to learn from their efforts and make shifts to their strategy. This requires them to embrace three inter-related ideas about complexity, adaptive leadership, and a developmental approach to evaluation. If they do not, traditional evaluation ideas and practices will be the “tail that wags the dog” and end up weakening the work of Collective Impact.

While most Collective Impact participants would agree that they are wrestling with complex problems, often their response is to continue to operate as if trying to solve simple issues on steroids. The only way to move the needle on community issues is to embrace an adaptive approach to wrestling with complexity. This means replacing the paradigm of pre-determined solutions and “plan the work and work the plan” stewardship with a new style of leadership that encourages bold thinking, tough conversations and experimentation, planning that is iterative and dynamic, and management organized around a process of learning-by-doing.

Embracing a complexity lens and an adaptive approach to tackling tough community issues has significant implications for evaluating Collective Impact efforts. It means making Collective Impact partners - not external funders - the primary audience of evaluation. It requires finding ways to provide participants with real-time - as opposed to delayed and episodic - feedback on their efforts and on the shifting context so that they can determine whether their approach is roughly right or if they need to change direction.

Participants are required to eschew simplistic judgements of success and failure and instead seek to track progress towards ambitious goals, uncover new insights about the nature of the problem they seek to solve, and figure out what does and does not work in addressing it. They must give up on fixed evaluation designs in favour of ones that are flexible enough to co-evolve with their fast-moving context and strategy. In short, they need to turn traditional evaluation upside down and employ what is called Developmental Evaluation by some and Strategic Learning by others.

Rule #2: Employ Multiple Designs for Multiple Users

With so many diverse players, so many different levels of work, and so many moving parts, it is very difficult to design a one-size-fits-all evaluation model for a Collective Impact effort. More often than not, Collective Impact efforts seem to require a score of discrete evaluation projects, each worthy of its own customized design. Even straightforward developmental projects require a diverse and flexible evaluation strategy. For example, in a long-time partnership between a half-dozen schools, service agencies, and funders to improve the resiliency of vulnerable kids in the inner core of a major Canadian city, it was determined that three broad “streams” of assessment were required:

  • School principals and service providers wanted evaluative data in the spring to help them improve their service plans for the upcoming school year;
  • The troika of funders required evaluative data to “make the case” for continued funding, with each funder requiring different types of data at different times of the year; and
  • The partnership’s leadership team wanted a variety of questions answered to help them adapt the partnership to be more effective and ready the group to expand the collaboration to more schools.

In order to be useful, this Collective Impact group required what Michael Quinn Patton, one of the world’s most influential evaluators, calls a “patch evaluation design”: multiple (sometimes overlapping) evaluation processes employing a variety of methods (e.g., social return on investment, citizen surveys), whose results are packaged and communicated to suit diverse users who need unique data at different times.

The idea of multiple evaluation consumers and designs will not be a hit with everyone. However, the benefit of crafting flexible evaluation designs is that they are more likely to provide Collective Impact decision-makers with the relevant, useable, and timely evaluative feedback they need to do their work properly.

Rule #3: Shared Measurement If Necessary, But Not Necessarily Shared Measurement

The proponents of Collective Impact place a strong emphasis on developing and using shared measurement systems to inform the work. In their first article on Collective Impact, Kania and Kramer (2011) state, "Developing a shared measurement system is essential to collective impact. Agreement on a common agenda is illusory without agreement on the ways success will be measured and reported. Collecting data and measuring results consistently on a short list of indicators at the community level and across all participating organizations not only ensures that all efforts remain aligned, it also enables the participants to hold each other accountable and learn from each other’s successes and failures."

I could not agree more. In fact, I will add another reason that shared measurement is important for collective action. The process of settling on key outcomes and measures can sharpen a Collective Impact group’s thinking about what they are trying to accomplish. The case for robust measurement processes in Collective Impact efforts is overwhelming. While the case for shared measurement is strong and the practice increasingly robust, there are at least five things for Collective Impact practitioners to keep in mind while crafting a common data infrastructure:

  • Shared Measurement Is Critical but Not Essential - The key players in the community-wide effort in Tillamook County, Oregon, to reduce teen pregnancy admit that they had “significant measurement problems,” but this did not prevent them from reducing teen pregnancy in the region by 75% in ten years.
  • Shared Measurement Can Limit Strategic Thinking - Groups that pre-determine the indicators to be measured are inherently limiting the scope of their observations. Collective Impact participants should focus on strategies with the highest opportunities for impact, not ones that offer greater prospects for shared measurement.
  • Shared Measurement Requires “Systems Change” - In order to solve the “downstream problem” of fragmented measurement activities, local Collective Impact groups need to go “upstream” to work with the policy makers and funders who create that fragmentation in the first place. For shared measurement to work, policy makers and funders must work together with local leaders to align their measurement expectations and processes.
  • Shared Measurement Is Time Consuming and Expensive - While it is true that innovations in web-based technology have dramatically reduced the cost of operating shared measurement systems, it can still take a long time and a surprisingly large investment to develop, maintain, and adapt such systems.
  • Shared Measurement Can Get in the Way of Action - Collective Impact initiatives should avoid trying to design large and perfect measurement systems up front, opting instead for “simple and roughly right” versions that drive - rather than distract from - strategic thinking and action.

All in all, it is important that we not oversell the benefits, underestimate the costs, or ignore the perverse consequences of creating shared measurement systems. When developed and used carefully, they can be important ingredients to a community’s efforts to move the needle on a complex issue. Poorly managed, they can simply get in the way.

Rule #4: Seek Out Intended & Unintended Outcomes

All Collective Impact activities generate anticipated and unanticipated outcomes. Participants and evaluators need to try to capture both kinds of effects if they are serious about creating innovation and moving the needle on complex issues. Unanticipated outcomes can be good, bad, or somewhere in-between.

Unfortunately, conventional evaluation thinking and methods have multiple blind spots when it comes to complex change efforts. Logic models encourage strategists to focus too narrowly on the hoped-for results of a strategy, ignoring the diverse ripple effects. Happily, it is possible for Collective Impact participants and evaluators to adopt a wide-angle lens on outcomes. This begins with asking better questions: rather than ask “Did we achieve what we set out to achieve?” Collective Impact participants and their evaluators should ask, “What have been ALL the effects of our activities? Which of these did we seek and which are unanticipated? What is working (and not), for whom and why? What does this mean for our strategy?” Simply framing outcomes in a broader way will encourage people to cast a wider net in capturing the effects of their efforts.

In the end, however, the greatest difficulty in capturing unanticipated outcomes lies more in the reluctance of Collective Impact participants to seek them out than in the limitations of methodology or the skills of evaluators. Many Collective Impact participants are so conditioned by results-based accountability and management-by-objectives that they can’t see the “forest of results” because their eyes are focused on “the few choice trees” that they planted. It may well take a very long time to create a culture where people are deeply curious about all the effects of their work, so let’s push for having unanticipated outcomes as part of any Collective Impact conversation wherever and whenever we can and see how far we can get.

Rule #5: Seek Out Contribution - Not Attribution - to Community Changes

One of the most difficult challenges for the evaluators of any intervention - a project, a strategy, a policy - is to determine the extent to which the changes that emerge in a community are attributable to the activities of the would-be change makers or to some other non-intervention factors.

The question of attribution is a major dilemma for participants and evaluators of Collective Impact initiatives. Collective Impact participants need to sort out the “real value” of their change efforts and the implications for their strategy and actions, yet determining attribution is the most difficult challenge in an evaluation of any kind.

The concept and methodology of Contribution Analysis offers an alternative approach, which acknowledges that multiple factors are likely behind an observed change or changes and therefore focuses on understanding the contribution of the Collective Impact effort's activities to the change. Despite the obvious benefits of the approach, the methodology is still not widely employed nor well developed in the field of community change. This must change. If Collective Impact stakeholders are serious about understanding the real results of their activities and using evidence - not intuition - to determine what does and does not work, they will make contribution analysis a central part of their evaluation strategy.

Evaluation is an intrinsic component of the Collective Impact framework that enables the rapid feedback loop that is so critical to adjusting strategies, divining innovations, and supporting the continuous communication among partners. As one of Collective Impact’s five key conditions, evaluation also ultimately provides a way to assess the overall efficacy of these complex initiatives in the longer term.

Mobilizing Community Impact in the Keystone State
[By: Jay Connor]

Pennsylvania is known as the “keystone” state. Some say the term is derived from the tie-breaking role Pennsylvania played in approving the Declaration of Independence; others ascribe the nickname to its geographical position as the linchpin of the thirteen original colonies. A keystone is the stone, usually at the top of an arch, upon which all the other stones depend to align and distribute the weight of the entire structure. Whatever the source of the name, it seems appropriate that a major breakthrough in my work with communities seeking transformative outcomes developed in Erie, PA, through that community’s quest for a keystone outcome.

Erie, PA is in the northwest corner of the state. The surrounding county has a population of about 280,000, fairly evenly divided between urban and rural. Until a Sunday morning in early 2008, most in Erie, like many other rust-belt communities, felt that they were simply suffering some rough times adjusting to a changed economic landscape. But that morning everything changed. The headline in the Erie Times read: “Erie has the highest poverty rates in the state of Pennsylvania!” That statement galvanized the community to action. That was not how they saw themselves nor was it the future they had envisioned for their children.

Erie sought solutions from other communities, including the Hamilton Roundtable on Poverty Reduction and its work to realize its community aspiration: “making Hamilton the best place to raise a child.” Erie instinctively understood that an aspiration that the whole community could “own” and act upon was a way to meaningfully change their fortunes. In order to change a community’s outcomes, you have to change its behaviours. This meant engaging not just the content experts but also the context experts - the community as a whole. Calling itself Erie Together, the initiative declared the aspiration to "make the Erie region a community where everyone can learn, work and thrive." Community ownership in Erie Together was fostered by its description: “not a social service agency; not a social service program; it IS a county-wide civic movement to prevent and reduce poverty and elevate prosperity.”
Between 2008 and 2011, the community made great strides in process, structure and engagement. Monthly meetings of the Action Teams averaged 125 community members per meeting, and progress was recorded in a number of areas including kindergarten-readiness, career engagement, sustainable incomes and workforce training. But perhaps even more importantly, Erie Together was successful in mobilizing significant community participation, with nearly 350 people participating in its November 2011 Forum. They also demonstrated the ability to align the community’s work in order to engage complete systems. For example, they successfully got all twelve School Districts and dozens of early childhood providers to agree on a single definition of readiness for kindergarten.

In preparation for the November 2011 Forum, Erie Together's four Action Teams met to assess ways for their work to move even more rapidly. They saw that the work on all their issues was important, but that working on everything simultaneously was stripping them of resources and speed. They wanted to change the life opportunities for this generation as well as the next. Was there a better way to sequence their work in order to have more than marginal impact? In essence, was there a keystone outcome?

As earlier in the process, the folks in Erie began tackling this challenge by doing some research. They discovered three things that would affect the trajectory of their community from that point forward. First, one of the outcomes they had already targeted - reading at or above grade level by the end of 3rd Grade - has been shown to have a significant positive impact on a whole range of other outcomes they cared deeply about. Specifically:

  • Children who succeed at reading by the end of third grade are 4x more likely to graduate from high school.
  • Children who succeed at reading by the end of third grade are 6x less likely to be involved with the criminal justice system.
  • Children who succeed at reading by the end of third grade can expect to earn twice the future salary of their peers.
  • Children who succeed at reading by the end of third grade can expect better health outcomes.
  • Children who succeed at reading by the end of third grade are much less likely to be teen parents.
  • Children who are reading at advanced or proficient levels are significantly more likely to pursue STEM and post-secondary education.

Second, in addition to these clustered outcomes that had resonance across the Erie Together Action Teams, a further benefit to the community and its efforts is that achieving 3rd Grade reading success is much less costly than the present price of failure, which included remediation, retention and higher referrals to Special Education. A rough calculus put these savings at greater than $5 million per year in the elementary schools alone. Thus, Erie saw that it could in part fund its work going forward by strategically targeting the keystone outcome of 3rd Grade reading ability.

The third, and final, area of research centred on the question: “If dramatically improving 3rd Grade reading outcomes is to be our keystone outcome, what have others experienced and how could we move the best practices forward in Erie?” They could find four longitudinal, move-the-needle examples to answer these questions:

  • The NAEP results since the late 1980s showed nearly flat results in third grade reading performance across the US;
  • The Strive Cincinnati results showed a 16% improvement over the ten-year period from 2004 to 2013;
  • A relatively new effort, Grade-Level Reading, was targeting a 10% increase in advanced and proficient readers by 2020; and,
  • Learning Ovations, based on nearly $10 million in research funded by NIH and IES, has been able to show a 45% improvement - with all children averaging 5th grade levels by the end of 3rd grade - over a three-year period, 2009 to 2012.

Erie began implementing Learning Ovations in partnership with the United Way, Erie County School Districts and the US Department of Education in the fall of 2013. By aligning their collective efforts on the common agenda of 3rd Grade reading outcomes, the Work Teams of Erie Together began the journey to replicate the results of seven randomized control trials reported in 23 peer-reviewed journal articles. In three years, Erie Together has achieved the following:

  • The proportion of 3rd Grade students in Erie below the basic reading level decreased by 27 percentage points, from 33% to 6%;
  • The proportion of 3rd Grade students reading at the basic level decreased by 14 percentage points, from 33% to 19%; and
  • The proportion of 3rd Grade students reading at the proficient and advanced levels increased by 41 percentage points, from 34% to 75%.

The story of Erie Together’s success illustrates the powerful impact that is possible when a community comes together, agrees to work across sectors, and sets a common goal to improve its shared future.

Learning Highlights from the Collective Impact Summit
[By: Lisa Attygalle]

Over 300 learners from eight countries converged in Toronto for the first-ever Collective Impact Summit. Joining them were thought-provoking keynote presenters whose ideas and thinking continue to shape the practice of Collective Impact. Each morning of the Summit, a summary of the prior day’s learning was shared, which included:

  • Keynote highlights
  • Learning Wall highlights
  • Social Media highlights

Sharing back this collective knowledge was a key piece in the learning journey. Articulating what was collectively learned served as a building block for the next day’s new learning.

For those of you who weren’t able to join us at the Summit in person and are eager to vicariously experience the rich learning that was co-created and shared, you can review a summary of the daily highlights.


Only 5 More Months till Spring: Recipe for Enchiladas
[By: Paul Born]

Our recipe last month was such a hit that we are bringing forth another one this month. I chose the enchilada recipe for two reasons. First, I loved the story of throwing a winter party with friends that accompanies this recipe. Second, this month I start cooking again as a volunteer for the Out of the Cold Program. We make nearly 250 of these enchiladas every month to feed people who are cold, lonely and hungry. Well, maybe there is a third reason too: I think that this is the best enchilada recipe anywhere.


Updates from Tamarack's Learning Communities
  • Heart Work for #SocialGood - Jo Cavanagh likens her work as CEO of Family Life, a community-owned social enterprise in Victoria, Australia, to heart work. Reflecting on her experience at the Collective Impact Summit in Toronto, she highlights the importance of involving context experts - the intended beneficiaries of a Collective Impact effort - as central to this work.

  • On Creativity and Collective Impact - Mark Holmgren shares reflections from a dinner that he co-hosted with Elayne Greeley at the Collective Impact Summit exploring the intersection of Collective Impact, creativity and personal change. He believes this exploration matters because "all too often practitioners of Collective Impact discount the creativity required to do this work and also tend to discount their own creativity."

  • The Next Innovation - James Hughes highlights many successful initiatives demonstrating clinical and cost effectiveness in mental health services and wonders why this proven knowledge and evidence-based practice is not more widely available to the millions of Canadians living with mental illness.

  • Snapping Forward, Snapping Back - Liz Weaver reflects on the term “snap back,” coined by Brenda Zimmerman at the Collective Impact Summit to describe the forces in our external systems that often have us "snapping back" from experimental approaches like Collective Impact to old ways of working if we are not diligent about thinking differently and adopting new mindsets about our work.
  • 7 Habits of Highly Connected People - Cormac Russell shares seven lessons his grandmother taught him about how to be a good neighbourhood connector and why it is an important skill to cultivate.

  • Networks vs Community - Derek Alton shares an article from one of his favourite blogs which explores the differences between networks and communities and offers suggestions for how to deepen the experience of community in your own life.
Tamarack Events
About Engage!

Engage! e-magazine is published by Tamarack - An Institute for Community Engagement, to bring you inspiration, ideas, and resources to envision and create vibrant communities. We would love your ideas to help us improve our format. Please email us with your comments.
