Utilisation Focused Evaluation

From Fmfi

The main emphasis of the FMFI project is to conduct research in cooperation with a range of partners, to find innovative solutions to connectivity problems in deep rural settings, to document these, and to disseminate the results so that other practitioners can benefit from the lessons learnt. The focus of the monitoring and evaluation activities in the project is therefore on utilisation. Michael Quinn Patton’s utilisation-focused evaluation (U-FE) framework has direct relevance in terms of intended use by intended users of the lessons learnt in the FMFI projects. Extracts of the framework are given here to be applied in the FMFI project.

Utilization-Focused Evaluation (U-FE) begins with the premise that evaluations should be judged by their utility and actual use; therefore, evaluators should facilitate the evaluation process and design any evaluation with careful consideration of how everything that is done, from beginning to end, will affect use. Use concerns how real people in the real world apply evaluation findings and experience the evaluation process. Therefore, the focus in utilization-focused evaluation is on intended use by intended users. Since no evaluation can be value-free, utilization-focused evaluation answers the question of whose values will frame the evaluation by working with clearly identified, primary intended users who have responsibility to apply evaluation findings and implement recommendations.

Utilization-focused evaluation does not advocate any particular evaluation content, model, method, theory, or even use. Rather, it is a process for helping primary intended users select the most appropriate content, model, methods, theory, and uses for their particular situation. Situational responsiveness guides the interactive process between evaluator and primary intended users. A utilization-focused evaluation can include any evaluative purpose (formative, summative, developmental), any kind of data (quantitative, qualitative, mixed), any kind of design (e.g., naturalistic, experimental), and any kind of focus (processes, outcomes, impacts, costs, and cost-benefit, among many possibilities). Utilization-focused evaluation is a process for making decisions about these issues in collaboration with an identified group of primary users focusing on their intended uses of evaluation.(1)

The U-FE checklist is used as a framework for discussing the FMFI projects. The primary U-FE tasks and their associated facilitation challenges are considered, as well as the underlying premises. The 12 parts of the checklist are given, with brief comments on the initial status of each part and an indication of where more work needs to be done for the FMFI project to conform to a full U-FE framework.

Program/Organizational Readiness Assessment

The commitment of the primary evaluation client (IDRC) to doing useful evaluation is in place, as is the allocation of time and resources to this activity. The IDRC and the CSIR have assessed various stakeholder constituencies to select primary intended users of the evaluation, but some work remains to assess what is needed to extend this readiness to actual use by intended users. This is the main facilitation challenge and will be taken further in Parts 10, 11 and 12.

Evaluator Readiness and Capability Assessment

The evaluation responsibility lies with the Meraka Institute of the CSIR, which has the resources and expertise in place to match its commitment to evaluation with the challenges of the situation, to the point of making itself accountable for facilitating the use of the results by intended users.

Identification of Primary Intended Users

According to the Outcome Mapping (OM) methodology, the identification of Boundary Partners is an integral part of all the projects in the FMFI fold. The identification of primary intended users within the projects, and of those external to the projects, is built into the project design. These match the general requirements for primary intended users (PIUs). As the emphasis in OM is on influence and changed behaviour, direct CSIR engagement with the project partners has been in place from the outset, both to get to grips with the evolving project-level utilisation and to use the emerging results to maintain an ongoing dialogue at the application level to gather evidence for policy influence. Through advocating openness, promoting increasing interest, opening up networking possibilities and building the capacity of partners, the CSIR has cultivated and sustained credibility with its partners and facilitated a process in which intended users are willing participants.

Situational Analysis

The CSIR has a substantial track record in the management of evaluation and is familiar with facilitating the use of research results: dealing with the potential barriers that may be encountered, the political context in which use is advocated, how to engage with a broad range of stakeholders with different capabilities, and ultimately how to respond to unanticipated situations while staying focused on intended use by intended users.

Identification of Primary Intended Uses

At the project level the CSIR assisted primary intended users in dealing with formative evaluation and in making appropriate changes to facilitate programme improvement. In some of the projects, monitoring and evaluation efforts revealed summative information, leading to major decisions influencing use beyond the lifetime of the project and contributing to the overall knowledge in the field. Ongoing discussion of evaluation was an integral component of the relationship with project partners and PIUs, with considerable learning taking place from process uses.

Focusing the Evaluation

Priority questions are found in various places in the original proposal; they are guided by the five primary indicators and lead to six expected uses. The appropriateness of the indicators in relation to the expected uses will be considered, as well as the extent to which buy-in from intended users was facilitated.

Evaluation Design

The evaluation design must address a complex situation spanning two levels of implementation, in which the overall programme objectives, indicators and expected outputs need to be matched with project-level intentions. Here some work needs to be done on both the design and the facilitation process with the PIUs. Appropriate questions have to be formulated for both levels for the results to be meaningful in terms of intended use by intended users.


Simulation of Use

The evaluation leans towards qualitative aspects, with some quantitative elements. Research questions centred on “What?” and “How?”, with the question of “How much?” to be revealed in the detail of each project. The expectation of outcomes was therefore in terms of researching what works and what does not work. There was no firm baseline from which to propose extremes that could lead to decisions on what should be pursued aggressively or abandoned before the start. Evaluation costs were built into the project proposal and cascaded down to the project partners, creating the resources for collecting data and debating its significance and potential use during the project and at its end. The simulation of use consisted of maintaining the dialogue on the main elements of each project, providing a framework that would focus attention on achieving changes in behaviour, leading to evidence of what could be used and by whom.

Data Collection

In the Outcome Mapping framework, the Intentional Design stage helps a program clarify and reach consensus on the macro-level changes it would like to support and to plan the strategies it will use. The second stage, Outcome and Performance Monitoring, provides a framework for ongoing monitoring of the program's actions in support of its boundary partners' progress towards the achievement of outcomes. Three different types of journals (Outcome, Strategy and Performance Journals) are used to track the details of the implementation in terms of defined progress markers, in order to reflect on and improve performance and to collect meaningful data on the results. The framework emphasises the need for continuous active engagement with primary intended users in collecting data, debating its significance, revealing important interim findings, sharing the learning from the process and focusing on how the results will be used.

Data Analysis

Data analysis is integral to the process and actively involves the primary intended users in interpreting findings and generating recommendations on how the results will be used. Through the ongoing dialogue, time was spent on organising data so that it is understandable and relevant, and on distinguishing between findings, interpretations, judgements and recommendations. The emphasis is on primary intended users increasing their sense of ownership through their involvement in the process. This issue will be taken further in the final section of the document.

Facilitation of Use

The facilitation of use is built into the project design, but attention needs to be paid to potential uses and users beyond the current programme scope, and to how this facilitation will be done. The overall use facilitation strategy is dealt with at the end of the document.

Meta-evaluation

The follow-up on users and uses will depend on how the CSIR, IDRC and the project partners stay involved in facilitating use beyond the formal reporting requirements.


1. Patton, Michael Quinn. Utilization-Focused Evaluation (U-FE) Checklist. January 2002.