Within international development and elsewhere, organisations are moving towards “evidence-based” or “data-driven” decision making as a step towards more responsible and effective development programming. Commitments to these kinds of processes are generally celebrated as an acknowledgement that development needs to take into account what has come before, and react in an iterative and progressive way.

However, I can’t help feeling that lauding ‘evidence-based decision making’ without questioning how it is being implemented is perhaps a little naive, and needs to be more nuanced in order to be truly effective. Essentially, there are two main problems that I see.

Firstly, the assumption that we humans are rational beings who, upon being given new information, will act logically as a result.

Secondly, that “evidence”, or “data”, are essentially “truth”, and that making decisions based upon these is unquestionably a route towards better decision making.

To unpack my first point: there are innumerable examples of how human beings have been provided with facts, and not reacted in a logical way as a result. If we were rational beings, nobody would be smoking anymore, because we would recognise that smoking has irreversibly harmful effects on our health. We would always, if given the choice, take the train rather than fly, because we would understand that our increasing carbon footprint is already causing millions of people to become “climate change refugees” and be forced out of their homes. Policy makers would do everything they could to reduce our carbon footprint, and recognise that climate change is a potentially catastrophic issue for billions around the world.

But we know that they don’t. And why not?

Because all of the evidence in the world does not change the fact that everything is connected, and no single decision is an isolated event.

So a single policy maker might desperately want to respond in a responsible and logical way to evidence relating to climate change; however, she can’t, because of internal organisational pressures and strategies.

Or let’s say that a programme manager receives evidence that a project is developing in an unforeseen, and perhaps even slightly dangerous, way; does she cut the funding immediately and end the project? What would that look like to those higher up in the organisation, who might interpret it as a lack of proper management on her part? Or does she receive the evidence and decide that the best course of action would be to continue the project, because the cost and impact of cancelling it would, in her mind, be much greater?

In both of these cases, individuals are acting against their best judgement, because of external pressures. In an ideal world, this would happen rarely if at all. In the real world, this happens all the time.

To go on to my second point: no data is neutral. Evidence, data and even “facts” carry within them a multitude of biases, and to accept them without questioning is at best an injustice and, at worst, causes harm to those involved.

Especially within the cultural minefield of international development, biases within ‘evaluation’ or ‘evidence’ drawn from projects are rife. Some simply can’t be helped, and require those using the evidence to do what they can to mitigate them. Others are the result of poor programme management.

An example (drawn from a combination of real-life anecdotes): a project is designed to provide microcredit to women living in rural areas. The donor organisation, in partnership with a local bank and a local NGO, sets up the system, which lends only to women in order to support gender empowerment in the community. A number of women sign up, and the project seems to be running smoothly.

A year after the start of the project, people from the donor organisation come to visit one of the villages. They bring with them a woman from the local NGO who has been working closely with the community, to act as an interpreter both linguistically and culturally. Together, they visit different houses, and at the end of the day, call a community meeting. The women, both individually and in the group, are asked how the project is going, and they say only positive things about the effect that having access to microcredit has had upon their and their families’ lives. The donor representatives go away satisfied, having documented the positive impact. Their report is then passed on to others in their organisation as evidence of a successful project, and based on the donor’s new push for “evidence-based decision making”, more microcredit projects are set up in other rural areas.

However, it turns out that there has actually been a sharp increase in domestic violence in the villages which have ‘benefited’ from the microcredit scheme. As only women are granted access to the credit, their husbands feel threatened, and within the strongly patriarchal society they express this fear and worry by forcibly taking the money from their wives. The women feel ashamed of this violence, and are afraid that speaking out will only cause them more problems at home; the worst thing that could happen would be that their new source of income suddenly disappears.

So they don’t say anything when asked, and hide their bruises; they want, and need, the programme to continue. The only way the local NGO finds out is through one individual who chatted with an unmarried member of the community, who isn’t experiencing the violence herself but has heard rumours of it. For it to be taken seriously by either the local NGO or the donor, however, it needs to be said on record, and this is not going to happen. The project will continue and grow, and be touted as a huge success in the donor’s Annual Report.

This is just one example of how “evidence” can be biased; but what about “data”? It can be harder to spot biases within numbers in a spreadsheet, but they are certainly there. Think about, for example, the questions asked in surveys: how are they structured? Do they lead the participant to a certain answer, perhaps to please the person asking, or to increase their chances of getting further financial support? Especially when the person collecting the data comes from an entirely different cultural perspective, the chances of their own biases or assumptions being inserted into that data are considerable.

All this to say: when we’re celebrating that organisations have adopted evidence-based decision making processes, let’s not forget that this is only the first of many steps required to actually use data and evidence to have a positive impact on people’s lives.

Here are some further steps that come to mind when engaging with “evidence-based” decision making (in all cases, ‘evidence’ and ‘data’ are interchangeable):

  • Think about the data you’re given. Think about where it came from, why it was collected, and what biases it might include as a result. (E.g. was it collected as part of an internal “impact assessment”, which is inherently designed to demonstrate positive impact? Or was it done by a funder, as part of a justification for a new strategy?)

  • Can you compare this data to data from similar projects? If there are significant differences, can you work out why?

  • Do you have access to the rawest form of the data possible? Wherever possible, don’t rely on someone else’s interpretation of a dataset; draw your own. Here, some basic data literacy skills are necessary; they are not hard to learn, though, and can make a world of difference to your work (and your impact on those around you). A minimal sketch of what this can look like follows this list.

  • Where you are pressured to accept someone else’s conclusions, can you follow the steps they took to reach them? If they have analysed the data well, they will have documented this process; you don’t need to understand exactly what they did, but being able to see the steps they took is a good indicator.

  • Can you speak to someone who was actually involved in the project? Can they verify that the project took place (at least, from their perspective) as documented in the resulting data and project report?
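To make the “draw your own interpretation” point above more concrete, here is a minimal sketch in Python using pandas. Everything in it is hypothetical: the file names, the column names (village, respondent_id, reported_benefit) and the idea of comparing two projects’ raw survey exports are stand-ins for whatever raw data you actually have access to.

```python
# A minimal sketch, assuming hypothetical CSV exports of raw survey responses.
# Column and file names are invented for illustration; substitute whatever your
# own raw dataset actually contains.
import pandas as pd

# Load the rawest data you can get, rather than a pre-aggregated summary.
project_a = pd.read_csv("project_a_survey.csv")   # hypothetical file
project_b = pd.read_csv("project_b_survey.csv")   # hypothetical file


def summarise(df: pd.DataFrame, label: str) -> pd.DataFrame:
    """Draw your own per-village summary instead of relying on someone else's."""
    summary = df.groupby("village").agg(
        respondents=("respondent_id", "count"),
        reported_benefit_rate=("reported_benefit", "mean"),
    )
    summary["project"] = label
    return summary


# Put the two projects side by side so differences stand out.
comparison = pd.concat([
    summarise(project_a, "Project A"),
    summarise(project_b, "Project B"),
])

# Large gaps between otherwise similar projects are a prompt to ask why,
# not proof of anything on their own.
print(comparison.sort_values("reported_benefit_rate"))
```

Even a comparison this simple puts you in a position to ask informed questions of the people presenting the evidence, rather than simply accepting their conclusions.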