Shared measurement is when charities and social enterprises that are working towards similar goals reach a common understanding of what to measure and develop the tools to do so. The Blueprint for shared measurement explores successful shared measurement approaches and the associated benefits and challenges.
This review focuses on shared measurement approaches that allow organisations to define and measure their shared outcomes. It reviews existing literature in the UK and US, and analyses 20 shared measurement projects. From this it identifies a number of factors that are key to developing successful shared measurement. These success factors look at the pre-conditions necessary for shared measurement, as well as key factors in developing, designing, scaling and sustaining shared approaches.
We would like to thank all those who supported and contributed to this research:
We are also grateful to Inspiring Impact’s funders for making this research possible: Big Lottery Fund; the Office for Civil Society; City Bridge Trust; Diana, Princess of Wales Memorial Fund; and Deutsche Bank.
Charities work to solve complex social issues—which often cannot be solved by one initiative alone, and require systemic, collaborative responses. Shared measurement can promote a systemic approach to understanding the issues we aim to tackle and help us learn what works best to solve social problems. It involves organisations working on similar issues reaching a common understanding of what to measure and developing tools that can be used by charities, social enterprises and funders working towards similar goals.
Shared measurement aims to make it easier for organisations to learn from each other, save them the costs of developing their own tools, and build an evidence base of what works. It is an essential component in improving standards of impact measurement, allowing more consistency and comparability to improve the effectiveness of the charitable sector and change more lives for the better.
This report reviews existing literature in the UK and US, and analyses 20 shared measurement projects. From this it identifies a number of factors that are key to developing successful shared measurement. These success factors look at the pre-conditions necessary for shared measurement, as well as key factors in developing, designing, scaling and sustaining shared approaches. These factors are summarised in the diagram below:
There is a way to go before shared measurement becomes common practice in the charitable sector. Charities, funders and sector bodies all have a role to play:
Charities and social enterprises, and their funders and investors, are increasingly interested in impact measurement to ensure they are using their resources to make the greatest difference to people’s lives. With dwindling funds and an increased demand for evidence from all types of funders, impact measurement has become a priority for the sector.

NPC’s 2012 impact survey found that three quarters of all UK charities say they measure the impact of at least a small proportion of their work. However, the survey also showed that there are many barriers preventing impact measurement from being used to its full potential. Organisations report that not knowing how to decide on outcomes or where to find tools are some of the biggest barriers stopping them from measuring impact. They also cite a lack of funding for impact measurement and difficulty in analysing results. Another obstacle is different funders asking grantees for different types of information—over two thirds of funders ask their grantees for information tailored to them. Almost three quarters (72%) of charities want greater transparency and reporting of negative results.

We believe a solution to many of these challenges is shared measurement—where more than one organisation measures impact using the same approach.
Shared measurement is both the product and process of taking a shared approach to impact measurement. In terms of the product, shared measurement is any tool that can be used by more than one organisation to measure impact. The process of shared measurement entails understanding a sector’s shared outcomes, often mapping out its theory of change. It also involves the engagement and collaboration needed to result in a shared approach.

There are a number of important issues involved in developing a shared measurement approach. Strong leadership is central to getting organisations’ buy-in and sustaining the approach beyond initial development. Good collaboration is key to organisations working together to decide on shared outcomes and measures. It is also essential that a focus on impact is at the heart of the shared measurement approach and that high-quality methods are used.

The ultimate aim of shared measurement is to build information about what works in solving social problems. Having comparable and robust data on impact can help us work out the different sequences of interventions needed to tackle different social issues. There are a number of features that are necessary for shared measurement. These features describe both what shared measurement entails and what it enables.
Key features of shared measurement
Shared outcomes: organisations using shared measurement should have consensus on the shared outcomes that their sector achieves and measure these shared outcomes using the same tools.
Consistent methodologies: organisations using a shared approach should use the same tools and consistent methods when measuring. This means having consistent research designs, similar sample sizes, similar analysis and consistent reporting of results.
Focus on measuring outcomes and impact: shared measurement should focus on measuring the difference a particular activity or organisation makes to an issue or group of people.
Agreement around what is measured: the outcomes and indicators used in a shared measurement approach should be relevant and meaningful to all those using the approach. This means that beneficiaries, front-line staff, senior management, trustees and funders all agree what the shared approach measures.
Clarity around a sector’s impact: shared measurement should involve understanding how a sector works together to solve a particular social problem. This can mean mapping out a sector’s theory of change or impact network.
Ability to compare: shared measurement should allow organisations to compare their results to those of similar organisations. This helps organisations put their impact data in context and learn about what approaches are most effective.
Shared measurement as we understand it has been developed by many different types of organisations: academic institutions, funders, government, consultancies, social enterprises, sector bodies and individual charities. This report looks at these different approaches to understand the key steps and success factors in developing, designing and implementing shared measurement.

What is not included in this review?

This review focuses on shared measurement approaches that allow organisations to define and measure their shared outcomes. A number of tools and methods that have elements of shared measurement are not included in this review, as they do not feature this ability to share outcomes. We therefore exclude common tools including standardised scales; methods of analysis such as Social Return on Investment (SROI), cost-benefit analysis and quality-adjusted life years (QALYs); and online data management systems or platforms. Each of these, however, is relevant to different stages of shared measurement. Standardised scales may be used as common tools once a sector has agreed on shared outcomes, different methods of analysis could be used once shared data is collected, and online platforms are essential to allow ease of use as well as sharing of data.

Defining shared measurement
Shared measurement is an essential component in improving standards of impact measurement. More consistency and comparability is necessary if we are going to use impact measurement to learn about what works, improve practices and ultimately change more lives for the better. Shared measurement also reduces the duplication of effort in developing bespoke impact measurement tools for individual organisations.

But shared measurement can be a contentious issue among charities and social enterprises—those we spoke to throughout this research highlighted concerns shared by many front-line organisations. Some fear that shared measurement may force them to measure outcomes that are not relevant to their work. Some worry that shared measurement may lead to benchmarking and competition on the basis of results. But the organisations we spoke to also told us that it is possible to find shared outcomes in a sector that are meaningful to many different organisations. They spoke about how shared measurement was essential for them to learn how to make the greatest impact in their sectors. As Victoria Hill of Co-ordinated Action Against Domestic Abuse (CAADA) put it, ‘organisations need to understand what is normal for their service. For example, [CAADA’s domestic violence services] can understand what level of mental health disclosure is usual and where they stand in relation to other services.’

Our review of shared measurement highlights a number of important benefits.
This report looks at the history and background of shared measurement and defines what we understand as shared measurement. Through analysis of 20 different shared measurement approaches, we examine how shared measurement is developed and draw lessons from this for future initiatives.
These findings particularly examine:
We also look at the challenges of shared measurement and the benefits it has brought to organisations. We see this report as the first working paper of the Inspiring Impact shared measurement programme and welcome feedback and comments on what we have found.
This report will be useful to:
NPC has been developing and piloting measurement approaches for the last five years, and has piloted a number of shared measurement tools. Triangle Consulting Social Enterprise in the UK has also developed a number of shared tools—known as the Outcomes Stars. Few organisations, however, have reviewed shared measurement approaches to understand how they should ideally be developed. This report is the first that we know of to do this in a UK context.
Similar research has been carried out in the United States by a number of institutions. FSG, a US-based social impact consultancy, examined shared measurement approaches in the US to understand key steps in their development. The Urban Institute has also published research on developing shared outcomes frameworks. Grantmakers for Effective Organizations (GEO) also examined shared measurement approaches in its 2012 report on evaluation. This previous research in the US has looked at steps and success factors involved in shared measurement, as well as examining a number of examples.
FSG reviewed several examples of shared measurement to understand how they were developed, and identified three different types of shared measurement. These are:
FSG’s outline of different shared measurement approaches is similar to the key features of shared measurement outlined in this report. Shared measurement may not involve organisations collaborating to agree on shared outcomes, but it does involve the use of common tools. Comparative performance systems take a more streamlined approach in order to compare results, but again may not involve collaboration and ongoing learning. Adaptive learning systems involve shared measurement among organisations with an understanding of their impact networks and a facility to learn from the collection of shared data.
FSG looked at a number of factors that appear to be central to developing these shared measurement systems. Its research highlights the importance of issues such as leadership, engagement, use of technology and ongoing support.
Box 3: FSG’s elements of success in developing shared measurement
The Urban Institute has researched common outcomes and indicators for various types of community services. Its report on developing community-wide outcome indicators looks at how funders and service providers can work together to develop a common core set of indicators. Its research outlines four stages to developing common outcomes: planning, meeting, finalising outcomes and indicators, and implementing the approach. It highlights important issues including funder support and participation, stakeholder engagement, getting consensus on outcomes and providing support during the pilot phase.
GEO looked at shared measurement in its 2012 report on the essentials of evaluation. It reviewed several foundations that used a shared measurement approach in their evaluation strategies. The examples show the importance of reducing the measurement burden for grantees, engaging with grantees, providing support, enabling grantees to exchange information and engaging multiple funders in an approach.
NPC has been trialling a number of approaches to shared measurement in the past five years. Our work to develop shared measurement approaches includes NPC’s Well-being Measure, a framework for shared measurement in the NEETs sector, a framework for shared measurement for charities working to improve prisoners’ family ties and the development of a shared measurement framework for the local Mind network.
NPC’s Well-being Measure is an easy-to-use online tool that can be used by charities to measure change in soft outcomes for children and young people. It has been validated through more than three years of research and testing. Currently more than 50 charities use the online tool. This enables them to automatically analyse results and generate reports.
NPC’s work to develop shared measurement frameworks in the areas of prisoners’ family ties, NEETs and for the local Mind network used a theory of change approach to map out common outcomes for a group of charities. These projects brought together charities and funders to develop shared approaches to impact measurement across their area of work. NPC then developed different measurement options for agreed common outcomes and piloted tools in the field.
Several important findings emerge from these previous studies, forming a starting point for this research. A number of common themes seem to be important for shared measurement:
NPC’s review of shared measurement builds on the findings of these previous studies and explores these issues in the context of the UK charity sector.
Findings from previous research were incorporated into a framework to guide and structure the research in this report. We used this framework to assess whether similar themes from previous research were present in the shared measurement approaches we reviewed.
The process of developing shared measurement cuts across a number of complex issues affecting the charity sector. Many of the themes we see in developing shared measurement are broader issues—for example, good collaboration, good leadership and high-quality impact measurement. Our findings address the steps that shared measurement approaches have taken to address these issues.
The approaches reviewed have been led by many different types of organisations. Some approaches have been driven by funders, some by charities or social enterprises. Some have been led by sector bodies while in other instances, consultants and think tanks have taken the lead. We have looked at a number of these initiatives to understand the success factors in developing shared measurement. Our review of 20 different shared measurement approaches highlights:
The following factors emerged from our interviews with developers as key to developing a successful shared measurement approach. These factors fall into four different stages, which are discussed in the following section:
Figure 1: Success factors for developing shared measurement
We also included in our review two approaches that have been established for a number of years. Because of this, it was not possible to interview the original developers. These approaches are:
A full list of these shared measurement examples as well as other examples can be found in the appendix.
Our research identified a number of common themes that underlie the development of a shared measurement approach. These themes are often conditions in place before development begins. The most common theme across our research was the development of shared measurement in response to a gap in evidence for a particular sector.
‘Without the appropriate tools, the network could not evidence the difference it was making.’
Child Bereavement Network
The majority of organisations we explored had come to a shared measurement approach due to a lack of appropriate evidence in their sector. This was either a lack of evidence connecting activities to long-term outcomes, or a lack of sensitive measurement tools. Shared measurement was chosen as the solution to this lack of evidence. Joy MacKeith from Triangle told us that one driver for the development of its homelessness star (see case study on page 31) was the London Housing Foundation’s view that interest in outcomes was growing, and that the homelessness sector needed to up its game in response. ‘They recognised that outcomes were coming onto the scene and wanted to equip the sector to respond and take the initiative.’
Similarly, Bethia McNeil from the Young Foundation explained that a crucial incentive in the development of the Catalyst framework of outcomes for young people was the lack of evidence connecting youth work with harder cost-saving outcomes, such as moving into employment or reducing reoffending. ‘There was a disconnect between long-term outcomes and what young people and youth organisations valued and felt they achieved.’
The lack of any appropriate or sensitive tool to measure improvements in children affected by bereavement was the main incentive in the development of NCB’s Child Bereavement Network (see case study on page 32), according to Alison Penny: ‘most of the tools were developed for children with mental health difficulties, but who had not been bereaved. Even though these symptoms might be similar, bereaved children have very different needs, which these tools were not picking up on. Without the appropriate tools, the network could not evaluate the difference it was making.’ The development of new questionnaires tailored to bereaved children is intended to bridge this gap and provide the network with the evidence it needs to show the difference it is making and understand what works best.
‘Our funders recognised that what we were doing was new and so were very flexible and open about how it went.’
Initial support from funders is crucial to allow a shared measurement approach to get off the ground. All the approaches we reviewed had received committed funding for a number of years to develop and pilot their tools. The majority had funding from a number of different sources, which increased the projects’ stability.
For example, Bond had committed funding from the Department for International Development, as well as a number of NGOs, before it started to develop its Improve It framework (see case study on page 33). NCB had committed funding from several sources—it received grants from two grant-making trusts as well as funding from the Department of Health. Similarly, CAADA’s funding for the Insights tool lasted over a number of years (see case study on page 34). Victoria Hill from CAADA told us, ‘our funders recognised that what we were doing was new and so were very flexible and open about how it went.’
‘Having a small group of very engaged people with strategic freedom was key to success.’
The majority of approaches we reviewed had engaged a number of committed individuals or organisations before developing shared tools. In some cases, identifying and engaging organisations and individuals was a formal stage before developing tools. But for some organisations we spoke to, this stage had happened organically. For example, Joy MacKeith from Triangle told us how a number of champions and advocates were central to getting the Outcomes Stars off the ground. ‘We didn’t identify a steering group at the outset of the process, but the tools wouldn’t have happened without a number of key committed organisations.’ An organic approach to engaging organisations was important to the Outcomes Stars’ development—‘Many of these champions and advocates approached us to develop versions of the stars. This means that they really believe in the potential of the tools.’ This approach was similar to the development of Bond’s Improve-it framework. As Rob Lloyd, former head of effectiveness at Bond told us, ‘we decided to go with where the energy was; we worked with a core group before trialling the approach with other organisations.’
Our review highlighted a number of factors key to successfully setting up and developing a shared measurement approach. These factors help to ensure the approach is meaningful to the sector, and also sustainable. Taking a bottom-up approach and engaging front-line staff, senior management and funders in the development phase is crucial. Having an independent leader to drive the process also seems to be a strong success factor. This leader needs to be able to balance the demands of charities and funders, and promote the approach once it has been developed.
‘We worked primarily with the front-line workers and managers of services.’
Most of the shared measurement approaches we reviewed ensured that design and development were led by those working in the sector. While lead organisations’ methods differed, in general they allowed outcomes to be fed into a tool through a bottom-up process—allowing practitioners and front-line organisations to specify the outcomes important to them, rather than imposing pre-selected outcomes on the sector. Most then supplemented these through reviews of the academic literature, or consultation with funders and commissioners. But allowing those working in the field to specify outcomes important to them first was key.
Triangle took a decidedly bottom-up approach when developing the Outcomes Stars. Joy MacKeith told us that in the early development stages, ‘staff felt that most tools were not related to their ‘real work’ and did not accurately represent their achievements. To develop the star, we worked primarily with front-line workers and managers of services—and in later iterations, service-users themselves.’
A balance must be struck between taking a bottom-up approach and maintaining some coherency around outcomes. Substance faced the challenge of balancing consultation and coherence whilst developing Views (see case study on page 35), as director of external relations Neil Watson told us: initially ‘we got an overload of disparate information, which would have been impossible to include in a shared outcome framework, so instead we developed a longlist of potential outcomes and consulted with grantees on what to prioritise.’
‘We had a very strong advisory group made up of funders, sector bodies and front-line organisations.’
Including a diverse range of stakeholders in development of a shared measurement approach is key. From practitioners using the tool with service-users, to funders using it to monitor the impact of a grant, involving a wide group was a consistent theme in the approaches we reviewed. The Young Foundation ensured a mix of stakeholders, including young people, was involved throughout the design of its outcomes framework. Bethia McNeill told us, ‘we had a very strong advisory group made up of funders, sector bodies and front-line organisations. We held several focus groups with young people, commissioners, front-line organisations and investors.’

The Child Bereavement Network also engaged a number of relevant stakeholders working in the area, according to Alison Penny: ‘we took a very adaptive iterative process to developing the tool, we had feedback from all groups and revised the tool in iterations. We set up six focus groups with practitioners, one with funders, two with parents and four with young people.’

It is worth noting that many of the developers we spoke to cautioned against the risks of too much consultation—for example, losing a grasp over priority outcomes by trying to include too many things, or trying to make the shared measure a solution to every measurement need of the organisations involved. Therefore while it is important to include a diverse range of stakeholders, developers need to be mindful of the optimum number to engage.
‘Someone needs to hold that independent perspective, a bigger-picture agenda.’
In the majority of approaches we reviewed, the development of shared measurement was led by an organisation perceived as independent by the sector. This meant it was better able to balance the demands of charities and funders, and promote use of the approach following its development. In most cases, development was led by a sector body, an academic institution, a think tank or consultancy, or a national charity on behalf of a local network of charities. In successful cases, the independent leader tends to have a sense of ownership of the approach, and continues to promote its use. As Joy MacKeith from Triangle found, ‘it’s critical to have an organisation in place to provide ongoing support and development of the tool.’ In Bond’s case, its independence was important, as Rob Lloyd states: ‘being a sector body was important. It opened doors. You need trust and reach.’

An independent body is also important to bring funders on board and encourage them to use shared measurement without biasing their attitude to funding. Ideally, therefore, the developer of shared measurement should not be a user of the final tool: a developer that also used the tool, or funded on the basis of it, could not impartially balance the demands of the sector and promote the tool’s use. ‘Someone needs to hold that independent perspective, a bigger picture agenda,’ sums up Joy MacKeith from Triangle. ‘Saying no is part of the process. What’s best for an individual is not always best for the sector.’
A number of themes were identified as key to designing a successful shared measurement tool. The outcomes measured by the tool need to be meaningful to all stakeholders: front-line staff, senior management, investors and funders. There needs to be a level of flexibility in the framework—where organisations may choose from similar sets of outcomes, but do not necessarily measure identical outcomes. To ensure the tool collects high-quality data, robust methods need to be used. It is also important that the tool itself is accessible and easy to use, both for front-line staff collecting data and beneficiaries responding to questions in the tool.
‘We applied a standard tool to enable organisations to measure their progress.’
It is essential that the tools used in shared measurement are robust and stand up to external scrutiny. The approaches we reviewed followed best practice when developing their tools where feasible. The Child Bereavement Network, for example, is extensively testing questionnaires it has developed to measure the impact of child bereavement services, to ensure they measure the outcomes intended, and that they are sufficiently sensitive to monitor change.
Community Evaluation Northern Ireland (CENI) established baselines and used distance-travelled techniques to develop its Measuring Change project, which involved designing a common outcomes framework for a BIG Lottery Fund community grants programme. Brendan McDonnell, CENI’s director, explained the pilot, which involved three steps. First, CENI reviewed commonly occurring project outcomes and rationalised theories of change from different programmes, allowing it to develop a common outcomes framework. CENI then held sessions with project stakeholders to estimate their baseline and progress against this framework using a distance-travelled technique. Finally, CENI analysed quantitative and qualitative data to estimate individual projects’ progress, and aggregated this to indicate programme impact. Brendan McDonnell explains, ‘we first got the funder to articulate their theory of change, to produce a common outcomes framework for the programme. We then applied a standard tool to enable projects to estimate their progress against these outcomes.’
‘There’s no point in developing robust tools if you don’t bring along organisations.’
The majority of outcome areas included in any shared measurement approach need to be relevant to all stakeholders. This includes beneficiaries, front-line practitioners, charity staff and funders. Involving a diverse range of stakeholders and making sure that outcomes reflect on-the-ground experiences of front-line practitioners will go some way to achieving this. However, before any outcome is finalised, developers need to ensure consensus among different stakeholders that outcomes are meaningful.
Rob Lloyd from Bond spoke to us about the importance of getting buy-in while developing the Improve-It toolkit. ‘It was necessary to do a lot of consultation. People need to feel comfortable with where you’re going. There’s no point in developing robust tools if you don’t bring along organisations. That’s why buy-in is so important.’
Home-Start UK’s MESH (Monitoring and Evaluation System Home-Start) tool for local Home-Starts was developed partly in response to Home-Start’s experience of using tools that did not reflect the outcomes its universal access support service was achieving for all its service users. Elizabeth Young, director of research, evaluation and policy at Home-Start UK, told us how the charity developed its system based on outcomes fed in from local Home-Starts through consultation. This ensured that the outcomes measured were meaningful to families and front-line practitioners, as well as wider audiences.
‘It is essential that a tool is aligned with the case management process, so it is not a burden on the practitioner.’
Creating a simple and easy-to-use tool is essential, and is often the most difficult aspect to get right. Many of the organisations we interviewed spoke about the challenge of ensuring the shared measurement approach was complex enough to be relevant to the many different activities of different organisations, but also simple and straightforward to use. There are two issues to address to make a tool accessible.
First, the tool must be easy for front-line staff to use, and integrated with their day-to-day work. A key priority for CAADA’s Insights tool was ensuring it fitted with front-line practitioners’ existing case management process so that it didn’t become an additional burden.
Second, the tool must be accessible to service-users. The approaches we reviewed avoided jargon and used simple language where possible. Phrasing questions in the language of the service user is essential—as Alison Penny from the Child Bereavement Network told us, ‘we tried to use simple terms and plain language, we also used images on the children’s questionnaire to improve its accessibility.’
‘Citizens Advice offers a list of suggested outcomes, and bureaux are allowed to choose their own.’
A degree of flexibility in the shared measurement tool emerged as a consistent theme in the approaches we reviewed. Striking the balance between standardisation and flexibility is a challenge, and most of the approaches we reviewed dealt with it by allowing organisations flexibility in choosing outcomes, but limiting the measures in place once outcomes had been chosen.
For example, the Young Foundation was not very prescriptive in its use of outcomes in the young people’s framework—Bethia McNeill told us that ‘users can select clusters of outcomes that may be relevant. What we have is a common set of steps, rather than totally identical outcomes and methods.’ Similarly, Tamsin Shuker from Citizens Advice told us how allowing for flexibility in their system is crucial given the diversity of local Citizens Advice Bureaux (see case study on page 36). ‘Each bureau has different funders, and hence different requirements. What Citizens Advice offers is a list of suggested outcomes and bureaux are allowed to choose their own. In total, 700 outcome codes are offered, though in practice rarely more than 15 are applied at any time.’ Bond also spoke about the risk of being inflexible. The Bond approach identified shared outcome areas, and broke these into smaller thematic groups before looking at indicators to map onto outcomes. As Rob Lloyd said, ‘you can’t be too prescriptive, we’ve gone for having baskets of outcomes and indicators first.’
Many of our examples are at the stage where they are attempting both to scale up and to remain sustainable. Our research identified a number of common themes that interviewees referred to when thinking about these issues. Incorporating technology seems to be key to scaling up, whether through a bespoke software system or an online platform. All developers spoke about the need to constantly refine the tool and respond to feedback from users. Sustainability is also a concern. Some approaches have begun to charge a minimal fee for the tool to cover costs, while others are considering pricing strategies. Long-term funding is key to allow time and flexibility to refine and pilot the tools.
‘A software solution is essential, this is the point where all organisations struggle.’
It is essential to incorporate measurement into a technology platform, for ease of use and access, but also to allow organisations to easily compare results. The more mature shared measurement approaches we reviewed have all developed some sort of software or online platform to enable users to access the tool. Triangle has developed a tailored software system for organisations to use the Outcomes Stars online. Joy MacKeith sees a software solution as ‘essential. This is the point where all organisations struggle—we never had plans to develop software for the tools, but did because IT was a real barrier.’
Many tools we reviewed have not yet reached the point of incorporating the use of technology—but most agreed that this was the next step they need to take to achieve scale.
Box 6: Online platforms for shared measurement
The technology needed to enable shared measurement is still a developing area. There are a number of notable online platforms that have been developed to allow charities to better manage their data, including data on outcomes and impact. These platforms could also be used to enable shared measurement.
The availability of appropriate technology is key to making shared tools more user-friendly, and data more accessible. This area is being explored by Substance as part of its research on data, tools and systems for Inspiring Impact.
‘There is a need for continuous refinement of the system as the environment is constantly changing.’
The need to continually refine tools and respond to feedback from the sector emerged as a consistent theme in our review. The Child Bereavement Network is already planning a refinement stage even though it is still piloting its tool: ‘Refining the tool is essential, this is one of its key objectives,’ says Alison Penny. Citizens Advice also spoke about the importance of ‘a flexible evolving system.’ In its tool, Tamsin Shuker explains, ‘revision of outcomes takes place every six months. There is a need for continuous refinement of the system as the environment is constantly changing.’
‘You need a business model, you will always need ongoing refinement, and therefore investment.’
Committed funding for a number of years to support the scaling up and ongoing refinement of an approach is essential. Most of the approaches we reviewed had received funding for a period of three to five years to support a tool’s development and pilot. The Child Bereavement Network’s tool has three-year funding from a grant-making trust, as well as funding for a five-year PhD. CAADA received funding from a number of grant-making trusts to fund the initial development of Insights, and hopes to fund the expansion of the tool in the same way.
That said, a number of tools do not have long-term funding—something each developer saw as a barrier to scaling the tools. In most cases, the approaches need additional funding to develop online platforms or tailored software. Without this initial investment, developers will struggle to scale up the tools. Without this scale, the tools cannot generate earned income to remain sustainable. Committed funding is needed to bridge this gap from concept to sustainability.
Financial sustainability of the tools is likely to involve a pricing strategy to cover costs. Many of the approaches we reviewed have already developed a marketing and pricing strategy to ensure their tools’ growth. To generate income for its Value of Infrastructure Programmes (VIP) tool, which measures shared outcomes for infrastructure bodies, NCVO charges a basic fee for organisations to be trained in its use.
Triangle is scaling its Outcomes Stars, which are free to download and use to ensure widespread uptake; it charges a fee for use of the online software. This is necessary to cover the ongoing cost of maintaining the tool and funding any development. ‘You need a business model,’ says Joy MacKeith. ‘Of course there is a difficulty between making the tools free and the need for sustainability, but you will always need ongoing refinement, and therefore investment.’
Our analysis explores a number of factors key to successful shared measurement. But there are also a number of challenges to overcome. Our interviewees spoke about the importance of addressing a number of difficult issues when developing their approaches.
Shared measurement has significant potential to bring about more efficient, consistent and collaborative measurement among charities and social enterprises. Our research identifies a number of key stages and success factors in developing shared measurement. Balance is a theme that runs throughout these stages and success factors. Developers must strike a balance between actively recruiting organisations to be involved and following the sector’s lead; between taking a bottom-up approach and ensuring funders agree on what is measured; between engaging a diverse range of stakeholders and avoiding over-consultation; between keeping tools shared and standardised while allowing enough flexibility; and between scaling a tool widely and charging for its use to cover costs. Tensions like these are inevitable in a complex process like shared measurement. The potential benefits make it worth working to find this balance, and many of the organisations profiled here show that it can be done successfully, providing inspiration for others.
Triangle’s widely-used Outcomes Star is one of the sector’s best-known shared measurement systems. Almost 20% of the homelessness sector uses the Homelessness Star, and there are 15 other versions, carefully adapted for different client groups and services including older people, people with mental health problems, and vulnerable families.

A decade ago, Triangle was commissioned by UK-based homelessness charity St Mungo’s to develop an outcomes measurement system. Extensive consultation took place with front-line staff and service users, overseen by a working group of managers. ‘We had a lot of freedom and flexibility to innovate rather than working to a rigid brief,’ says Joy MacKeith, co-director and co-founder of Triangle. After piloting at St Mungo’s, the tool was tested and modified in other agencies, supported by the London Housing Foundation (LHF). It became clear that the tool was applicable across the sector, and in 2006 Triangle created the first version of the Outcomes Star, published by LHF as a sector-wide tool.

Over time, the Star’s potential in other sectors became apparent: in 2007 the Mental Health Providers Forum commissioned Triangle to develop the Recovery Star. A bottom-up approach was crucial to development: ‘People mostly use the Star because they think it makes sense for them and is helpful, rather than because it has been imposed from above. A lot of the time they use it because it positively helps their work with service users—the outcomes information is a bonus.’

After four years Triangle developed the Star Online, a web-based system for completing and analysing Star data. Interestingly, Joy says, ‘We hadn’t planned to develop software, but did so because IT was a real barrier.’

Strong leadership played an important role in creating and managing a suite of sector-wide tools. ‘Someone needs to hold that independent perspective, a bigger picture agenda,’ says Joy.
‘Organisations often want to tweak the tool to fit their needs, but you lose consistency, comparability and risk undermining the quality of the tool, so saying no is part of the process. What’s best for an individual organisation is not always best for the sector.’

Partnerships have been key to the Star’s success, with versions funded by the Department of Health, Big Lottery Fund, and Nesta, among others. Triangle has established partnerships to make the Star available in Australia, New Zealand, Denmark, Canada, Finland and Italy, and is working with academics to research the Star’s psychometric properties. Benchmarking and comparing results, however, remains a contentious issue: ‘you need to have confidence in your data to benchmark,’ says Joy. ‘You should never treat the data as the final answer on service effectiveness. We always say that it helps people to ask better questions.’
The paper version of the tool is available for organisations to download and try for free. ‘This has really helped with widespread adoption,’ says Joy. But to ensure financial sustainability, Triangle is charging fees for training, the online tool, and the incorporation of the tool into an organisation’s own IT and paperwork.
The Childhood Bereavement Network is the hub for organisations and individuals working with bereaved children, young people and their families across the UK. Part of the National Children’s Bureau, it is currently piloting standardised pre- and post-questionnaires for its members to use with the children, young people and parents they are supporting, to help them measure impact in a more meaningful way.

The initial impetus for the framework came from research carried out by Dr Liz Rolls at the University of Gloucestershire, looking at evaluation activity in the child bereavement sector. This research highlighted that although significant time and resources had been spent on evaluation in the sector, there was a lack of tools measuring appropriate and relevant outcomes. It recommended collaboration in the field to develop a common ‘core’ outcomes framework across the sector, and to determine ways to measure these outcomes. CBN secured initial funding from the Department of Health to develop the framework and propose measurement tools.

CBN coordinator Alison Penny aimed to include as broad a group of stakeholders as possible in developing a theory of change framework. All 250+ CBN members were invited to take part, as well as groups of children, young people and their parents. ‘Maintaining diversity of participants was crucial,’ says Alison. ‘We set up six focus groups with practitioners, one with funders, two with parents and four with young people.’

The focus groups helped identify the outcomes different stakeholders considered critical for child bereavement services to be working towards. Alison notes that ‘grouping the outcomes into sub-groups proved tricky. They are interconnected and many contribute to one another.’ Once the outcomes were identified, CBN realised it needed to develop its own tools to measure indicators of change in areas specific to child bereavement services.
The development process was iterative and took just over a year, including two consultation rounds with focus groups and several drafts. CBN ensured that questions were asked in plain English: ‘We tried to use simple terms and plain language,’ says Alison, ‘and we used images on the children’s questionnaire to improve its accessibility.’ The final questionnaires will be tested for reliability and validity to ensure questions are robust and sufficiently sensitive. CBN will invite organisations from its membership to participate in this pilot in early 2013.
Once the pilot is complete, training to use the questionnaires will be rolled out. Participating organisations will be provided with an information pack, opportunity to participate in workshops, and support in using the questionnaires and reporting on data. Alison is considering how best to encourage organisations to anonymise and share their data centrally. ‘This will help us to say something meaningful about the difference that the sector as a whole is making to the lives of bereaved children and young people.’ Beyond 2013, the medium-term outlook is promising. Funding from the True Colours Trust and a PhD scholarship will support the framework until at least 2017.
Capturing the impact of international development work is a challenge that Bond tries to address with its Improve It Framework. The UK membership body for international development NGOs began discussing how to develop better ways to measure its members’ impact, and the impact of the sector as a whole, in 2006. But it was the arrival of a new chief executive, Nick Roseveare, that really drove the process forward. Frustrated by many NGOs’ inability to provide evidence for their work, in 2008 Nick set up Bond’s effectiveness programme. Central to this is the Improve It Framework, a common set of outcomes and measurement tools for organisations to understand and measure their impact.

Rob Lloyd, former head of effectiveness at Bond, recalls how ‘the sector was concerned that implementing new measurement systems could lose the complex reality of its work.’ The initiative didn’t take off until the Department for International Development (DfID) began prioritising evaluation, providing a clear incentive for NGOs to engage. Bond received funding from DfID and several NGOs to develop a prototype. Rob highlights the importance of good relationships with funders: ‘They are especially important in helping to disseminate the framework and ensuring measurement actually happens.’

A major challenge in developing the framework was the diversity of organisations in the international development sector. Through discussion with NGOs, eight thematic areas of work were identified, along with five strategies of engagement or ways of working. Bond was careful not to break the sector down into too many meaningless fragments: ‘This would have weakened comparability, and ultimately the value of the tool,’ says Rob. Each of the eight themes required a different set of outcomes and indicators.
Comparison was possible for some, but Rob warns that ‘careful interpretation of results is crucial for it to be meaningful.’

Bond initially developed the full framework for two sectors, testing and scaling it up before expansion to the others. Engagement with the organisations that would use it was a priority: ‘you need to design it for people who use it on a day-to-day basis.’ The development process featured months of consultation, with practitioners from different sectors sharing their experience of evaluation. ‘There’s no point in developing robust tools if you don’t bring along organisations,’ says Rob. ‘That’s the problem with most common frameworks. That’s why buy-in is so important.’

While participation in developing the framework was open to all Bond members, it was initially difficult to recruit. Bond’s approach was to ‘go with where the energy was. It took time to win people over, but in the end, uptake was good. Being a sector body was important. It opened doors. You need trust and reach.’ At the time, Bond had a network of over 350 members. At one point, 25% of all members were involved. Having a core of committed members supported by a wider network helped to avoid overload.