Our journey towards equitable evaluations

By Mariana Xavier

Why do we do evaluations? This, in essence, was the question I was pondering ahead of the 15th International Evaluation Seminar organised by Itaú Social in São Paulo, Brazil, where I was asked to speak about evaluation in the service of social equity from the perspective of a philanthropic foundation. As I found myself reading about Equitable Evaluation¹ for the first time, I felt this was an invitation to reflect on our journey at Laudes Foundation, where I work as a programme manager: how we have conceptualised, implemented and used evaluations – and what we have learnt so far about how they can (and should) contribute towards equity.

In philanthropy, evaluations are often commissioned to understand results and to answer questions like: "Is this project or initiative worth supporting?" or "Is this aligned with our mission?" The idea is to learn from evaluations about what works (and what does not), preferably in a way that yields lessons applicable to other contexts or communities. But over the last few years at Laudes Foundation, we started asking ourselves whether the evaluations we were commissioning were actually helping to strengthen the projects and partner organisations we supported, and the way we supported them. Who was using these evaluations? What were they telling us, and what more could they tell us? We saw the potential of evaluations as a tool to better understand our programmes and our relationships with our partners, and to get a clearer view of the challenges being encountered at ground level.

We started to see that evaluations could have a different purpose. They could be more of a learning journey, an opportunity to really grapple with the issues that we and our partners deal with. We realised (after a couple of years of commissioning sometimes underused external evaluations) that as a foundation we should never be the only users (or even primary users) of an evaluation. The main reason why we should commission or conduct an evaluation is to generate learning that is useful for everyone involved, not only for ourselves as funders.

We now see an evaluation as a continuous learning process involving multiple voices and perspectives. Paraphrasing Jara Dean-Coffey, director of the Equitable Evaluation Initiative, it should be about co-creating knowledge and honouring the knowledge of other people. We have come a long way since we started thinking differently about evaluations – and we are still learning.

Avoidable traps and lessons learnt so far:

1. The purpose of evaluations  

When I first came to the foundation three-and-a-half years ago (then it was still C&A Foundation), our approach was to evaluate our grant making as much as possible. As a recently consolidated foundation it seemed important to evaluate whether we were on the right path and to identify results, even if preliminary. Often, it felt like we partly had the intention of validating our grant-making decisions, sometimes to ourselves or our own governance.

As we shifted our main purpose to maximising learning – for ourselves, our partners, other funders and civil society organisations – the entire process changed. One thing we learnt, for example, was that co-designing the evaluation Terms of Reference and plan with the implementing partner was vital. Nowadays, when commissioning an external evaluation, we think collectively and discuss with our partners which evaluative questions should be prioritised and what the focus of the evaluation should be. By doing this, we optimise the chances of producing an evaluation that is useful to our partners and to others in the field.

In terms of evaluation design, we are also considering more flexible and dynamic methods, not necessarily at the end of an initiative, but also during it, allowing for real-time learning and adapting. The results that are generated in this way can help us to rethink and redesign the initiative while it is still happening. It is a much more effective way of promoting a learning culture and environment. 

2. Numbers are limiting 

Quantitative indicators may give a glimpse of reality, but most of the time they tell us little about the quality or the meaning of the changes we are looking at. That is why we have started to develop evaluation rubrics as a new tool: they allow us to combine quantitative and qualitative evidence in a simplified way, and they can serve as a basis for conversations and joint analysis with implementing partners throughout an initiative, ensuring there is shared meaning and learning behind the evidence collected. Our goal is to avoid the oversimplification of complex realities and contexts (including the effects of ever-changing external contexts), to identify patterns of change throughout our grant making, and to learn from and with partners along the way. This is also a way of reducing the power imbalance that sometimes underlies the relationship between a funding organisation and the implementing partner of a programme.

We also came to see that, in order to contribute towards equity and social justice, it is fundamental for any evaluation to consider data on gender, race/ethnicity and other social categories, to gain insight into how these shape the roles, access to rights and opportunities, and power dynamics of those affected by the grant. If an initiative is gender blind and socially neutral, it may fail to address these dynamics by "treating everyone the same", causing inequalities to persist – or even be reinforced through our grant making.

At Laudes Foundation we have recently adopted an evaluative rubric on gender, equity and inclusion and have also incorporated this as a cross-cutting lens across other evaluative rubrics. The idea is to invite ourselves and our partners to always assess in what ways an initiative is contributing towards a socially inclusive or transformative environment by addressing power dynamics, norms, gender roles and other causes of inequality.

3. Promoting equity starts with ourselves

It may sound like a cliché, but if we want to see change in the world we must be open and ready to change ourselves first. In philanthropy, this means seeing ourselves as belonging to the system that needs to change – admitting that we too are part of the problem, not only the solution. Since we are part of the larger socioeconomic system that reproduces inequity and social injustice of all kinds, and our decision-making influences how an initiative rolls out, it makes sense that when we commission an external evaluation, our own practices and strategies are evaluated as well. The idea is not to evaluate the partner or the grant, but to evaluate the partnership – beginning with ourselves.

What does this mean for the future? We hope that our evaluations will give us a clearer picture of what our work is achieving, not only by looking at specific outputs and outcomes chosen at the design phase of an initiative, but looking at the partnership and how our strategies affect it. We should consider how our strategies impact different groups of people and whether an initiative is addressing underlying systemic drivers of inequity.

We ought to be more open to different perspectives and voices, and to explore creative, continuous ways of evaluating with partner organisations, seeking multicultural validity and participant ownership. As we move towards more equitable evaluative thinking and practice, it is important to acknowledge that this is a learning journey that takes us into unknown and sometimes uncertain territory. But it is a path we must take, and as we cover more ground it becomes clear that there can be no turning back.


¹ Equitable Evaluation means aligning our practices with an equity approach — and even more powerfully, using evaluation as a tool for advancing equity. It means considering these four aspects: i) Diversity of our teams (beyond ethnic and cultural); ii) Cultural appropriateness and validity of our methods; iii) Ability of our designs to reveal structural and systems-level drivers of inequity; iv) Degree to which those affected by what is being evaluated have the power to shape and own how evaluation happens. (https://www.equitableeval.org/ee-framework)


About the author

Mariana Xavier

Programme Manager at Laudes Foundation for the Labour Rights Programme.
