
Outcome Mapping Practitioner Guide

Building Progress Markers into a Logical Model framework

Using Progress Markers with the logic model to build a toolbox for project monitoring.

Author: Kaia Ambrose

Published: Thursday 14 August 2014

After developing four maternal health projects using the logframe and a corresponding Performance Measurement Framework, program managers felt they were going to be data-poor on outcomes.  Most of the outcomes were linked to indicators that are more realistically measured at baseline and endline, e.g. an increase in exclusive breastfeeding.  However, the logic model did not give program managers or project staff a sense of how different actors were contributing to exclusive breastfeeding, or of the journey a mother takes to get there and the support and challenges she faces along the way.  When the teams set up their monitoring systems, they focused exclusively on output data (how many participants at a training workshop, how many seeds distributed, how many mother-to-mother groups set up).  Output data are useful for a dashboard of project activities, but not for telling how improved nutrition practices are happening, why or why not they are happening, and therefore how project delivery needs to improve and adapt to account for opportunities and barriers.  Enter Outcome Mapping, and especially the concepts of Boundary Partners and Progress Markers.

Step 1: The project team did an actor analysis, narrowing it down to the Boundary Partners they would track because those actors had a role to play in exclusive breastfeeding (among other pieces of the maternal and child nutrition puzzle).  Take, for example, the logic model outcome “improved nutritional practices among vulnerable women and men in target districts in Malawi”, whose indicators relate to WHO standards of infant and child nutrition.  What the project needs to know is how different actors have a role to play in nutrition, the opportunities and barriers they create with regard to child nutrition, and the pathways they could take to support it.

Step 2: After Boundary Partners were identified, a set of Progress Markers was developed for each one.  While this was an important exercise for further unpacking and understanding the project, and even touching on its theory of change, the more important exercise was using the Progress Markers to set up monitoring tools.  Three tools were developed:

  • A survey that incorporated some Progress Markers along with some of the ‘bigger’, more technical logic model indicators, to be delivered to a small sample of the Boundary Partners every six months.
  • Observation journals based on Progress Markers, for field staff to use (and to be reflected upon as a team every six months).
  • An interview guide, called “Rolling Profiles”, targeting the same person every six months to track change over time and cross-check against the Progress Markers.


Challenges:

  • The Progress Marker observation journals and the Rolling Profiles both rely on qualitative data, which was a new, and frankly unknown and even frightening, concept to many of the teams. Having been accustomed to output-level checklists, attendance sheets and quantitative tracking tools for many years, they found qualitative monitoring a mystery – both in the collection process and in the analysis.
  • Small sample sizes for monitoring, given the qualitative nature of the tools and the human resources available.
  • Sense-making and interpretation: the projects still need a purposeful, facilitated space, with a clear agenda, to interpret the data (and cross-check the data coming from the different tools).


Lessons learned:

  • Coaching on M&E is a vital aspect of Program Manager (and M&E Advisor) work; yet this becomes difficult across oceans with only occasional field visits. We have tried to remedy this with one-on-one Skype calls, yearly learning workshops, and monthly ‘M&E newsletters’ (with tips, tricks, words of encouragement, etc.).
  • OM won’t work if you haven’t thought through data flow: how will the data get from the actual collection – whether it’s an interview or observation – to a database, and what will that database look like? What will happen with it then? How will it be aggregated across time? How will it be interpreted? How will it be used? Setting up databases is vital to the process.
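To make the data-flow point concrete, here is a minimal sketch of what such a database could look like, using SQLite from the Python standard library. The table and field names are hypothetical illustrations, not the schema the projects actually used; the ‘expect/like/love’ levels refer to the common Outcome Mapping grading of Progress Markers.

```python
# Hypothetical sketch of a progress-marker monitoring database.
# Table and field names are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")  # a file path in a real deployment
conn.execute("""
    CREATE TABLE observations (
        id INTEGER PRIMARY KEY,
        boundary_partner TEXT NOT NULL,   -- e.g. 'mother-to-mother group'
        progress_marker TEXT NOT NULL,    -- the marker being observed
        level TEXT CHECK (level IN ('expect', 'like', 'love')),
        period TEXT NOT NULL,             -- six-month cycle, e.g. '2014-H1'
        source TEXT,                      -- 'journal', 'survey' or 'profile'
        note TEXT                         -- the qualitative observation itself
    )
""")

# Record one observation coming in from a field staff journal.
conn.execute(
    "INSERT INTO observations "
    "(boundary_partner, progress_marker, level, period, source, note) "
    "VALUES (?, ?, ?, ?, ?, ?)",
    ("mother-to-mother group", "members share breastfeeding experiences",
     "expect", "2014-H1", "journal", "Group discussed early initiation."),
)

# Aggregate across time: observations per marker per six-month period,
# as an input to the facilitated sense-making sessions.
rows = conn.execute(
    "SELECT period, progress_marker, COUNT(*) FROM observations "
    "GROUP BY period, progress_marker"
).fetchall()
```

The aggregation query is where the six-monthly reflection sessions would start: counts per marker per period give the team a quick overview, while the `note` column preserves the qualitative detail for interpretation.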

This nugget was applied in: CARE Canada Maternal, Newborn and Child Health programming in Malawi, Tanzania, Zimbabwe and Ethiopia.
