Effective outcomes measurement starts with purpose, not data

When I work with changemakers on outcomes measurement, there’s often an urge to jump straight into the data talk: what to collect, how to collect it, and how to analyse it. But for data to support social impact, it needs to sit within a system that’s built around purpose and people.

A service or initiative should be designed with outcomes in mind that are meaningful to those it aims to benefit. When delivered, the evidence might show significant change in some areas and little or none in others. That’s a valuable opportunity to improve the design so it works better, and to consider other parts of the system that might be influencing the results. This kind of reflective practice shifts teams away from activity-based work and encourages curiosity and innovation. It also opens space for participants, the people experiencing the change or benefiting from it, to have a say in shaping the service.

Think of this as a system that connects purpose, design, delivery, measurement and improvement. Here are some thoughts about what that means in practice: 

Theory of change – map your logic

Why does your organisation exist and what change do you aim to drive? Capture that logic in a theory of change. Name the problem, the people, the route from activity to outcome, and the partners you need on the journey. Make it a living thing, use it often, share it, talk about it. Fix whatever feels clunky to make it meaningful to you and your stakeholders.

Programme – design with intent

How does your wider purpose translate into the design of specific services or interventions? A programme is more than a list of activities. Clarify the change you seek, the people you serve, and why your approach should work. How do you know it is the most effective way to do it? Bring people along and learn from those with lived experience. Remember that community contributions to design are work; make sure to compensate that work fairly.

Outputs – but don’t just count 

When tracking outputs, don’t just do headcounts. Confirm the right people are coming through the door, and if they aren’t, work out what needs to happen for them to come. Do track volume and reach, but avoid treating counts as proof of success. Go back to your purpose to define which numbers mean success. Some initiatives need to scale deep, not wide.

Outcomes – the powerhouse of measurement

Outcomes are powerful. Big systems change is sexy and inspiring, but outcomes are closer to your actions. They capture the changes you helped drive directly. Because they are connected with your intervention design, they are invaluable for understanding what is landing well and what needs to be improved. And then do it again, and again. Over time, data stacks up and patterns emerge, which you can later explore in more depth with evaluation and impact analysis.
Outcomes can be short-term (shifts in confidence, knowledge, attitudes, practice). These you can usually attribute with some confidence to the work you did alongside participants. Then there are medium-term outcomes, the effect those changes have on whānau (families), workplaces, communities and so on. Even though there may be additional factors at play, you still know whether you had a role to play because people can tell you directly. You don’t have to make too many assumptions. Medium-term outcomes test staying power. They answer “does the change stick when the programme ends?”

Impact – system‑level shifts

Impact is the wider change that happens when many forces pull in the same direction. No single organisation controls it, yet each can contribute work and evidence to collective progress.
Even though this layer of change is beyond your direct control, there are practical things you can do that bring an impact and systems lens. Map the organisations doing work close to yours, and find opportunities for collaboration. Consider what other forces are shaping the problem you are addressing, including areas you don’t directly influence, and keep them in mind for better design.

Good evidence is relational 

Robust evidence doesn’t come from numbers, but from trust and relationships that must be honoured. Treat every data point as a gift. Feed insights back in ways that improve services and add value for those who shared their stories and experiences. People are never the object of change; they are partners, and their data should be treated as such.

Finally, the insights you will get are only as good as the quality of your data. Here are some questions I keep handy for quality control. 

What are we measuring?
Are we measuring what matters to our organisation? Is the data meaningfully connected with our purpose? If you don’t have in-house expertise, bring someone along to help you design data collection tools that are fit for purpose. 

Why it matters:
If the data isn’t aligned with your purpose, it’s just noise. Focus on indicators that reflect your goals and the outcomes you care about, not just what’s easy to count. This helps ensure your measurement adds value, not just work.


Why are we measuring?
Are we measuring to learn and improve our mahi (work) or only to comply with requirements?
What consequence does that intention have?

Why it matters:
If measurement is done only to harvest success stories, you miss out on the opportunity to learn and improve. Many funders value a system that learns more than they value a glossy report.


How are we measuring?
Do we have the right tools? What is clunky or time consuming? Are we asking the right questions to the right people? Is the data fit for purpose?

Why it matters:
The wrong tools can drain energy and skew results. Choose methods that suit your work and your team’s capacity. Simple, relevant tools often work better than complex ones you can’t maintain.


Is the data meaningful for our people?
Who is collecting the data? Do they see value in the data they collect? Who are we collecting data from? Do they see how the data they gift benefits them?

Why it matters:
If staff and participants don’t see the point, data quality and trust suffer. What would you do if someone told you, “Hey, can you fill in this survey? It’s just something management is asking for”? The responses would probably not carry much meaning. Make sure everyone involved knows why the data matters and how it’s used, and that they see the benefit for their own work. Respectful, reciprocal processes build stronger relationships and better insights.


Does it tell our story?
Are we sharing data in a way that brings people on board? What does that mean for different stakeholders?

Why it matters:
Data should help people understand your work and why it matters. It should bring them along for your learning journey. Include the successes and the complexity. Make sure you are positioning communities and participants as partners, not objects of change.

Summing up

Effective measurement starts and ends with purpose. When learning is the driver for measuring, data improves services and brings value to the organisation and the people it serves. Funders and other stakeholders can be brought along on the learning journey and see progress, improvement, and authentic change, rather than glossy success stories stripped of the messy bits that make them real.
