Monitoring and Evaluation Manuals


Monitoring is the systematic and routine collection of information from projects and programs, for the following main purposes:

  • To learn from experiences to improve practices and activities in the future;
  • To have internal and external accountability of the resources used and the results obtained;
  • To make informed decisions on the future of the initiative.

Evaluation is the assessment, as systematic and objective as possible, of a completed project or program (or a completed phase of an ongoing project or program). Evaluations appraise data and information to inform strategic decisions and thereby improve the project or program in the future. The evaluation process should help OROKOM draw conclusions about the relevance, effectiveness, efficiency, and impact of the intervention.



Types of data used in Monitoring & Evaluation Actions:

  • Quantitative data: Quantitative data is statistical and typically structured in nature, meaning it is more rigid and defined. It is measured in numbers and values, which makes it well suited to data analysis.
  • Qualitative data: Qualitative data is non-statistical and typically unstructured or semi-structured in nature.

Qualitative data is not necessarily measured in hard numbers used to develop graphs and charts. Instead, it is categorized based on properties, attributes, labels, and other identifiers.

Data Collection and Analysis Plan:

OROKOM expands the data collection and analysis plan by describing in detail how data and information will be defined, collected, organized, and analyzed. Typically, this plan consists of a detailed narrative that explains how each type of data will be collected, along with all the steps needed to ensure quality data and sound research practices. Key components of this plan include: the unit of analysis; the link between indicators, variables, and questionnaires; the sampling frame and methodology; timing and mode of data collection; research staff responsibilities, training, and supervision; fieldwork timing and logistics; checks for data quality; data entry and storage; hypothesized relationships among the variables; and data analysis methods. Special analyses, such as disaggregating data by gender, age, location, and socio-economic status, should also be described.

It is important to provide the rationale for the data collection and analysis methods. This includes the triangulation of methods (quantitative and/or qualitative) and sources to reduce bias and ensure data reliability and completeness. The plan should also be informed by the standards that guide good practice in project evaluation. There are many useful resources in the evaluation community that identify key principles for ethical, accountable, and quality evaluations.
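As a sketch of the disaggregation analyses mentioned above (the record fields and values here are hypothetical illustrations, not an OROKOM standard), counts can be broken down by a single attribute with Python's standard library:

```python
from collections import Counter

# Hypothetical survey records; field names and values are illustrative only.
records = [
    {"gender": "female", "age_group": "18-35", "location": "north"},
    {"gender": "male",   "age_group": "36-60", "location": "north"},
    {"gender": "female", "age_group": "18-35", "location": "south"},
]

def disaggregate(records, field):
    """Count records by one disaggregation field (e.g. gender or location)."""
    return Counter(r[field] for r in records)

print(disaggregate(records, "gender"))    # Counter({'female': 2, 'male': 1})
print(disaggregate(records, "location"))  # Counter({'north': 2, 'south': 1})
```

In practice each disaggregation variable named in the plan (gender, age, location, socio-economic status) would get its own breakdown, so that targets can be reported per group rather than only in aggregate.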


Major sources of data and information for project monitoring and evaluation include:


  • Secondary data. Useful information can be obtained from other research, such as surveys and other studies previously conducted or planned at a time consistent with the project’s M&E needs, in-depth assessments, and project reports. Secondary data sources include government planning departments, university or research centers, international agencies, other projects/programs working in the area, and financial institutions.
  • Sample surveys. A survey based on a random sample taken from the beneficiaries or target audience of the project is usually the best source of data on project outcomes and effects. Although surveys are laborious and costly, they provide more objective data than qualitative methods. Many donors expect baseline and endline surveys to be done if the project is large and alternative data are unavailable.
  • Project output data. Most projects collect data on their various activities, such as number of people served, and number of items distributed.
  • Qualitative studies. Qualitative methods that are widely used in project design and assessment are: participatory rapid appraisal, mapping, interviews, focus group discussions, and observation.
  • Checklists. A systematic review of specific project components can be useful in setting benchmark standards and establishing periodic measures of improvement.
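The sample-survey source above depends on drawing a simple random sample from the beneficiary or target-audience list. A minimal sketch, assuming a hypothetical register of 500 household IDs, using Python's standard library:

```python
import random

# Hypothetical beneficiary register; in practice this comes from project records.
beneficiaries = [f"HH-{i:04d}" for i in range(1, 501)]  # 500 households

random.seed(42)  # fixed seed so the draw is reproducible and auditable
sample = random.sample(beneficiaries, k=50)  # simple random sample of 50, no repeats

print(len(sample))  # 50
```

Fixing the random seed lets the same sample be re-drawn later, which supports the accountability purpose of monitoring; the sample size itself should come from the sampling methodology in the data collection and analysis plan.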



Log frame or Logical Framework

Before designing the M&E plan, OROKOM follows the log frame (logical framework), which provides the conceptual foundation on which the project’s M&E system is built. Basically, the log frame is a matrix that specifies what the project is intended to achieve (objectives) and how that achievement will be measured (indicators). It is essential to understand the differences between project inputs, outputs, outcomes, and impact.

The log frame informs M&E planning. Ultimately, it shapes the key questions that will guide the evaluation of project processes and impacts by clarifying the points below:

  • Goal: To what extent has the project contributed towards its longer-term goals (impact)? Why or why not? What unanticipated positive or negative consequences did the project have? Why did they arise?
  • Outcomes: What changes have occurred as a result of the outputs and to what extent are these likely to contribute towards the project purpose and desired impact? Has the project achieved the changes for which it can realistically be held accountable?
  • Outputs: What direct tangible products or services has the project delivered as a result of activities?
  • Activities: Have planned activities been completed on time and within the budget? What unplanned activities have been completed?
  • Inputs: Are the resources being used efficiently?



  • Any project/ program designed by OROKOM should have an M&E plan.
  • M&E plan should be created during the planning phase of a project or a program.
  • OROKOM’s standard M&E plan contains specific result- and performance (activity)-based indicators with baselines and targets, means of verification, frequency of data collection, responsible staff, and type of reporting.
  • In general, OROKOM prefers M&E plans with a robust set of indicators that measure program progress and impact of the program activities. While it is not necessary to have indicators for every program activity, the indicators should measure the major program activities that will contribute to the advancement of the strategic objectives as laid out in the grant agreement.
  • OROKOM recognizes that it may sometimes be difficult for Project/Program Managers to design an M&E plan using the OROKOM standard M&E plan; nevertheless, Project/Program Managers are encouraged to develop an M&E plan that is as comprehensive, ambitious, and creative as possible.
  • OROKOM encourages Project/Program Managers to provide success stories and anecdotal or other qualitative evidence of program impact in the program progress reports, and to show how well the program is meeting the targets set in the M&E plan.



Standard M&E Plan

Below is the M&E plan template followed by OROKOM in its work:


Result statement | Performance indicator(s) | Baseline | Target | Means of Verification | Frequency | Responsible | Reporting


Please note that some parts of the above format may change from project to project, based on discussion with the donor.
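Purely as an illustration of how the template columns fit together (the example values below are invented, not OROKOM data), each row of the template can be held as a small record, with progress computed against the baseline and target:

```python
from dataclasses import dataclass

@dataclass
class IndicatorRow:
    """One row of the M&E plan template above; field names mirror its columns."""
    result_statement: str
    indicator: str
    baseline: float
    target: float
    means_of_verification: str
    frequency: str
    responsible: str
    reporting: str

    def progress(self, current: float) -> float:
        """Share of the baseline-to-target distance achieved so far."""
        return (current - self.baseline) / (self.target - self.baseline)

# Hypothetical example row for illustration only.
row = IndicatorRow(
    result_statement="Improved access to safe water",
    indicator="# of households with access to safe water",
    baseline=200, target=600,
    means_of_verification="Household survey",
    frequency="Quarterly", responsible="M&E Officer", reporting="Quarterly report",
)
print(f"{row.progress(400):.0%}")  # 400 is halfway from 200 to 600 -> 50%
```

Keeping baseline and target together with the indicator, as the template does, is what makes this progress calculation possible in progress reports.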