
Outcome Mapping Practitioner Guide

Best-Fit of Monitoring Tools: Monitoring Capacity Development with OM instruments

Most development programs aim at developing the capacities of local (national) organisations, networks or institutions. However, many programs are planned with a ‘traditional’ result-chain model. Choosing the best-adapted monitoring tool is essential for learning, managing and adapting project strategies. We adapted the concept of progress markers, which proved to be very useful.

Author: Daniel Roduner

Published: Monday 22 September 2014

Integrating progress markers for various dimensions of capacity development of local partners (Ministries, National Extension Service, Departmental Water Services, Communities and NGOs) was an eye-opener for all involved parties. A rather theoretical concept (capacity development) became tangible, and progress became measurable.

Introducing the concept of progress markers was only possible towards the end of the program lifetime, for various reasons: the program was conceived in a traditional result-chain manner; interest in new instruments was rather low; and there was a fear of the unknown (in this case, progress markers). However, the question of how to measure change in behaviour, i.e. the capacity to deliver services to the rural poor, remained unanswered. The tools at hand (indicators at impact level) did not answer the questions raised.

The building process. The advantages of building progress markers during the program implementation phase (instead of at the very beginning) include the working relationship and confidence already established between program staff and partners, and the possibility of appreciating progress already made (instead of only planning for the future). Building a baseline (i.e. what capacities did we have two years ago?) and a set of progress markers (what have we improved since, and what could realistically be improved within the next years?) was the first and probably most important step in the process. For the first time, program staff and partners could appreciate changes in ‘capacities’.

Personal interviews and group reflections helped assign ‘values’ to the different progress markers; as these lie on a qualitative scale, we opted for a 1-10 value chart (1 being the lowest grade, 10 the highest), a system based on school grades in Bolivia and therefore easy to adopt.
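A minimal sketch of how such scores could be aggregated. The marker names, baseline values and interview scores below are hypothetical illustrations, not data from the Bolivian program; the idea is simply to average each marker's 1-10 interview grades and compare them against the baseline built two years back.

```python
# Hypothetical baseline grades (what capacities did we have two years ago?)
baseline = {"joint planning": 3, "service delivery": 2, "M&E routines": 2}

# Hypothetical 1-10 grades given by three interviewees per progress marker
# (1 = lowest, 10 = highest, following the Bolivian school-grade convention).
interview_scores = {
    "joint planning": [6, 7, 5],
    "service delivery": [4, 5, 4],
    "M&E routines": [5, 6, 7],
}

def average_scores(scores):
    """Average each marker's interview grades, rounded to one decimal."""
    return {marker: round(sum(vals) / len(vals), 1) for marker, vals in scores.items()}

current = average_scores(interview_scores)
# Progress since the baseline, per marker.
progress = {marker: round(current[marker] - baseline[marker], 1) for marker in baseline}
print(progress)
```

In practice the group reflection, not the arithmetic, carries the weight; the averages only give the discussion a shared starting point.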

The use of the information. The approach of ‘capacity development’ came alive and became tangible, so program partners and staff were able to look at progress and plan for future collaboration. We were able to discuss reasons, triggers and barriers for certain changes. These included: political will, staff rotation, and (in)appropriate support strategies (such as on-the-job training, field visits, regional and international exchange visits, and training-of-trainers).

The pressure to show results on capacity development made the main challenge seem minor by comparison: convincing program staff to use an unknown tool. Program partners were much more interested and easily motivated to use a new tool. Using a ‘spiderweb’, and therefore a visualising tool, was very helpful for creating a learning environment.
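The geometry behind a spiderweb (radar) chart is simple enough to sketch: each progress marker gets its own axis, axes are spaced evenly around a circle, and the 1-10 grade sets how far out along the axis the point sits. The function below is a hypothetical illustration of that layout, independent of any particular charting library.

```python
import math

def spiderweb_points(scores, radius_max=10):
    """Place each marker's grade on evenly spaced axes of a spider (radar) chart.

    Returns a list of (marker, x, y) tuples: the axes are spread evenly
    around a unit circle, and each 1-10 grade is normalised to a radius
    in [0, 1] along its axis.
    """
    n = len(scores)
    points = []
    for i, (marker, value) in enumerate(scores.items()):
        angle = 2 * math.pi * i / n      # evenly spaced axes around the circle
        r = value / radius_max           # normalise the 1-10 grade to [0, 1]
        points.append((marker, r * math.cos(angle), r * math.sin(angle)))
    return points
```

Connecting the points of the baseline and the current assessment as two polygons makes the progress on each marker visible at a glance, which is what made the tool work in group reflections.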

Our suggestions for other users. Don’t stick to any pre-defined tool; look for the best tool, instrument or method for your needs. The best-fit approach proved useful. Building progress markers and a baseline, and reflecting on changes, proves most valuable when done with all involved partners and program staff.

This nugget was applied in: Sustainable Agriculture Development Program, Bolivia
