How we’re thinking about our impact

In the third post of a blog series on our strategy and focus, we set out how we’re thinking about our impact in WASH, and the best way to understand and manage it.

Categories: Blog

August 01, 2018

In the past year, we’ve been considering the best way to understand and manage the impact of our WASH funding.

We want to understand this for a few reasons:

  • It’s the reason we exist. In my old job, we occasionally used the phrase “measure what you treasure”, which, while trite, gets to an essential point: measuring impact is the only way we know if we’re achieving our mission as an organisation.
  • It helps us to improve. As a funder, we want to support enterprises to flourish as one solution for the billions of people without access to high-quality WASH services. Impact data allows us to understand what works, to learn from that, and adapt our funding approach.
  • It allows us to be accountable. Both to our Board of Trustees, and to the public as a UK charity regulated by the Charity Commission.

At the same time, our impact is indirect. As a foundation, we live vicariously through our grantees—we’re not delivering WASH services ourselves.

Instead, our main activities are: selecting organisations to support; transferring and monitoring funds; commissioning research and other forms of support for our portfolio companies; and talking to others in the sector to share what we do and to learn from them.

Of course, we can understand how well we’re carrying out those activities, and it’s important to make sure that we’re doing them as best we can.

But only looking at that fails to tell us much about whether we are achieving our mission of “financially sustainable WASH enterprises delivering affordable, high quality, and reliable services that have a transformational impact on households in Africa and Asia”.

It is the organisations and initiatives we support that are achieving those results; they aren’t ours to claim.

This presents a couple of predicaments for the Foundation when it comes to thinking about understanding our impact:

  • Firstly, how confidently can we say that some of the results an enterprise achieves are down to the funding that we’ve provided?
  • And secondly, how can we articulate our overall impact as an organisation when our grantees all measure things differently?

We’ve decided not to get hung up on the first question.

We’re not convinced the effort of untangling the causal relationship between our funding and the results achieved tells us (or the world) very much. And, where we can, we provide unrestricted funding to an enterprise against a business plan, and so the idea of ‘claiming’ a portion of results doesn’t really make sense.

The second question is trickier.

For our own purposes, having grantees and investees report back against a single set of indicators would make the most sense. This would enable us to better articulate what the Foundation in aggregate has helped achieve.

Standardised metrics could also in theory help us to learn more as a funder, by being able to compare across enterprises and geographies.

But we also need to think of the costs of that approach.

Imposing standardised metrics on our grantees would mean they would need to adapt their systems to meet our requirements. And the metrics we pick (however well researched) may not make sense in a Ghanaian or Cambodian context, nor may they provide enterprises with the kind of insights needed to improve their operations and services.

Our conclusion therefore is that our grantees’ data needs are more important than our own.

We think it is more important for the enterprises we fund to have appropriate and locally relevant metrics for their work—which would allow them to provide a better service to customers—than it is for us to be able to aggregate data.

We are lucky that we can take that decision. This is not an option for colleagues in bilateral and multilateral institutions who need to report back to ministers, aid watchdogs and the press about the numbers of “lives touched”.

That’s not to say that we don’t want to structure the data we collect at all.

Instead, we have tweaked the way we agree milestones for a grant or investment to ensure that there is at least one metric that links closely to the following aspects of our mission:

  • Scale of the operations, number of customers, market penetration.
  • Quality of the service or product.
  • Affordability of the service or product for underserved customers.
  • Financial sustainability of the enterprise.

The goals set in these milestones, and how they are measured, are up to grantees, and should reflect their priorities and internal measurement systems. We encourage enterprises to add milestones on other important aspects of their business, so we get a rounded view of the initiative and the complex environment in which they are operating.

We’re testing this approach out at the moment, and our experience so far has been positive. These four areas have relevance to all the enterprises in our portfolio, albeit in different ways.

We now report on these four metrics at every board meeting, using a dashboard and traffic-light rating system to give a top-line sense of how the portfolio overall is doing and to identify areas where we could learn more.

No doubt our approach will continue to evolve. But we think this is a good place to start: it balances our need to collect data that is relevant to our mission with ensuring that the enterprises we support continue to use the metrics most relevant to their day-to-day work.