Shehnaaz Latif, Lead Consultant, NCVO Charities Evaluation Services, shares her tips on how organisations can work with funders to maximise the impact of their work.
When thinking about collecting data, I have found it helpful to facilitate discussions about the types of data that could be collected and how they might be used, before deciding what MUST be collected. Often the types of data we are keen to collect tell us more about who is being reached, how far they are engaged in a service or activity, and what they think of those services or activities. This ‘output’ data is usually easy to collect and measure, and shows how engaged users are.
However, this does not reveal the extent to which the work has actually made a difference. When thinking about how to assess the impact of your work, the five types of data can help structure your thinking. Outcomes data is crucial for shining a light on whether or not funders’ money has made a difference, by showing the short-term benefits that someone gains from accessing the support. In some situations organisations can go further and look at the long-term difference to an individual’s life. This is the hardest data to collect, and many organisations do not need to (or should not) collect it; they can focus instead on the other types of data outlined.
Funders often wonder whether they should prescribe indicators against which evidence should be collected by grantees or whether this should be left to those who do the work and often know best how to assess it. In my experience, it’s best to be collaborative in the approach to setting indicators.
For example, when collecting outcomes data:
- negotiate around outcomes (‘here’s our theory of change; let’s compare it with your theory of change as a funder’).
- agree on the priority outcomes against which data should ideally be collected.
- discuss the nature or standard of evidence that you will find acceptable.
Selecting indicators is a ‘political’ activity because you’re deciding the criteria upon which the work will be judged. Some of us are more convinced by statistics and others prefer stories. Some prefer objective measures and some value the subjective view. Ideally, a mix of all types of evidence builds the best picture that will address different preferences.
The best ways to share results
The results of good impact practice are only useful if they are shared, read and learnt from. In a recent conversation with a group of funders, we discussed the best way to present evaluation results and debated the use of traditional (often long) reports. There are alternatives to written reports: for example, holding learning events where groups speak about what they have done, the results, and the lessons from their work. We don’t always have to share good news; indeed, there is growing interest in ‘failure reporting’.
A while ago I challenged a client to write up her evaluation results as an executive summary first, before seeing whether a full report was needed. She found she could convey the key findings in a four-page summary, and there was no need for a longer report.
Whether we present results as a summary, a report, an event, or all three, the priority is that learning is channelled into improving the decision-making for future funding and programme design.
It is my hope that funders will boldly share with others not only their impact results but also their impact practice story, perhaps through formal networks (eg, ACF, London Funders) or informally at roundtables, meetups (one of which we facilitated at NCVO in mid-July), and more widely through blogs. This will enable others in the funder universe to learn from one another’s experience.
To find out how you’re doing on impact practice check out Inspiring Impact’s free online self-assessment Measuring up! for funders. Some simple principles and drivers of good impact practice, developed by funders for funders, are also available on the Inspiring Impact website.