In a continuing effort to keep pace with developments in the humanitarian sector, and to collect and disseminate the most essential lessons learnt, the major international actors and stakeholders have increasingly felt the need to improve the practical value of evaluation findings, inter-agency cohesion, and the overall accountability of the evaluation and monitoring process. Methodologies and approaches have been strengthened, though much remains to be achieved, especially in the field of usable benchmarks, standards and indicators.

A more widespread use of project cycle management and logical framework analysis has also helped establish that evaluations are a key element in a continuous learning process, feeding into decision-making and policy. However, unlike the relatively mechanical task of monitoring specific activities, or financial audits based on well-established facts and figures, evaluations cannot guarantee complete objectivity. To evaluate is to compare, and the evaluator usually does so on the basis of his or her own background, professional experience and judgement.

Hence the importance of making the evaluation process as standardised, transparent and focused as possible, using well-defined criteria and results-oriented approaches. Prolog has worked to improve the systematic use of qualitative and quantitative tools for collecting and analysing evaluation findings, and has developed a number of corresponding matrices and question frames. The aim is to further strengthen:

* the linkages between evaluation objectives and the corresponding criteria or indicators;

* the practical utility of questionnaires in the field;

* the clear rating of key conclusions, and their linkage to corresponding recommendations.

The approach is based on the OECD/DAC criteria for evaluation and closely follows the continuous lesson-learning process initiated by ALNAP, of which Prolog is an observer member.