24 March 2015
Last week, I took part in a workshop discussing “Impact Assessment for Data training”, with a number of different practitioners working on some aspect of improving data literacy in their communities.
From a personal perspective, it was fascinating to see that the priorities of the people there were largely similar: everyone was working, in some way, on improving data skills among groups of people who were well placed to then use those skills to push for social change. In some countries, the target group was university students; in others, the focus was activists, or young people, but the overall aim was the same: to empower them with the skills they need to better achieve their own goals.
An interesting exercise asked us to define what we actually meant by ‘impact assessment’. We realised that for many of us, it was a way to work out whether the interventions we were leading were having a positive or negative effect on the communities we’re working with. Most of the time, though, these impact assessments act as feedback to donors on the activity in question - so I do wonder what it would take, and how often it actually happens, for an impact assessment carried out internally to reveal anything but a positive outcome.
It’s been talked about many times before, but in the environment in which we work, failure is not well received. An admission that a project didn’t quite work is seen as something to be hidden or disguised, not something to be assessed and written about publicly. That said, more and more events I’ve been to over the past year or so have featured some sort of ‘failure fest’ - a space for people to talk about what worked and what didn’t, in an open and honest way. Normally, these are confidential, with stories told on the condition that they go no further. It’s a good first step, but only a first step: sharing these stories publicly, without fear of negative consequences, would be an excellent next one.
A perhaps more revealing exercise asked us to define the elements of successful impact assessments that we had seen or carried out ourselves. The admission: many of us simply weren’t carrying out impact assessments at all, despite knowing that assessing what we’re doing is a crucial part of improving and iterating upon our projects and programmes.
The reasons for this are, in some ways, very simple: we don’t have the resources - whether that’s time within the project to conduct an impact assessment, or enough people to do it alongside all of the other activities we have to carry out. We find ourselves with other, more pressing priorities: ones that reach more people, or that seem to have a more positive short-term effect.
The solutions to this problem are less so. One suggestion from a workshop participant: asking donors to commit a certain proportion of any funding to impact assessment. Including it as a key component from the very inception of the programme would solve those resource issues - but then we’re left with the design of the impact assessment itself to think about.
Another potential obstacle to assessing the impact of many of the projects we do is their longevity, together with the very different geographic and cultural contexts in which we work. We imagined that the ideal situation for a proper impact assessment would be a long-term project, of 2-3 years, carried out with the same community on a regular basis. In reality, however, this kind of situation very rarely happens. Typically, we work with different people in different countries and contexts, usually over a few months at most. Attributing any change that might happen in those communities to the trainings we’ve done seems a little premature, to say the least, and the huge differences between those communities make transferring learnings between them fairly difficult (though not impossible).
One example is the Fellowships programme that I’m leading at the moment with School of Data. We ran a 6-month programme last year with 12 fellows in 11 countries, and this year it will be a 9-month programme, with 7 fellows in 6 countries. We’re picking fellows based on the strength of their applications, not on their geographic location, which means we may well not work in the same countries as last year. We also have very few resources to continue working with the 2014 fellows to assess their ongoing work, meaning that any long-term impact they might create in their own communities likely won’t get “officially” recorded.
Recognising the need to step back from sometimes frantic project delivery, to stop and think about what we’re doing and why, is a crucial part of making any of these efforts genuinely useful to the communities we’re working in. It’s part of the reason I stepped back from full-time work in NGOs after almost 5 years of project-based work, the reason I love reading about other people’s learnings, and the reason I’m taking the time to document my own learnings as I go along as a consultant in this space.
I appreciate greatly that many NGOs seem to be moving towards research work and documentation - MySociety are holding their first conference this week, on the Impact of Civic Tech; Publish What You Fund recently advertised for a Research and Monitoring Manager; and the upcoming International Open Data Con will feature an Open Data Research Symposium, to name just a few great developments.
Seemingly, donors are indeed willing to fund activities that take a more thoughtful perspective on the work we’re doing in this space; I can only encourage people to push for this within their own organisations too, to take the time to think about why we’re doing what we’re doing, and to be as frank as we can in sharing what’s going well, and what’s going wrong.
Thanks to Dirk Slater for some fabulous facilitation of the workshop, and to all the participants for sharing their learnings in a frank and honest way!