From conception to implementation and scale-up: the role of evaluation throughout the program lifecycle

A blog by Anu Rangarajan, Student Liaison Officer for the Centre for Evaluation.
Image: a male nurse delivering health education to clinic attendees, Uganda

Between 2015 and 2021, billions of dollars went towards supporting maternal, newborn and child health every year. Various interventions were tested and implemented to reduce maternal and child mortality and morbidity in low-resource settings. Despite this vast effort, close to 287,000 women died in 2020 alone during and following pregnancy and childbirth. And, the same year, almost 14,000 children under the age of five years died every day. Most of these maternal and child deaths were from preventable and treatable causes.

Every year and with every major health intervention, the story is the same. Billions of dollars are spent on water and sanitation, yet a large share of the world's population lacks access to these services; there is increased global focus and spending on nutrition, yet millions of children are affected by stunting or wasting; billions of dollars are spent on malaria control and elimination, yet millions are infected by malaria each year; and the list goes on. Why?

To find an answer, social scientists have spent the past two decades conducting impact evaluations to identify interventions that work. Howard White, Director of Evaluation and Evidence Synthesis at the Global Development Network, describes an evidence revolution in four waves: outcomes monitoring in the 1990s, a shift to impact evaluations during the 2000s and 2010s, and a focus on systematic reviews and knowledge brokering as the third and fourth waves, respectively. Unsurprisingly, researchers frequently find that the program under evaluation failed to show impacts.

Impact evaluations, however, don't explain why so many programs fail, or what can be done to improve a program's chances of success. Many social programs, particularly in low-resource settings, attempt to tackle complex, deep-rooted problems in contexts with limited financial resources, overburdened staff and low skill levels, and where attitudes and cultural beliefs make facilitating change particularly challenging. Solutions are neither easy nor straightforward; they require rigorous testing and iteration to identify promising strategies that can be effective and eventually implemented at scale. Evaluation approaches that focus on impact alone neglect these stages of program development and rollout.

Evaluation can and should play an important role across the entire "life cycle" of a project or program: it can inform decisions that help establish strong program designs, rigorously assess implementation, and, where a program proves effective, inform scale-up. For instance, developmental evaluations are appropriate for interventions being designed in complex situations, and rapid-cycle evaluations can be used to test programmatic elements during the design phase, or to test alternative approaches to service delivery when implementation is not going according to plan. Similarly, implementation efforts are most effective when they are informed by the factors that may emerge to facilitate or challenge them (also known as determinants). Additionally, a number of frameworks are available to assess which interventions or concepts should be considered for scale-up and how to bring measurement into the different stages of scale-up. Deliberately building these evaluation elements into a project's life cycle can increase the chances of a successful program.

Having worked in the evaluation field for over thirty years, and having conducted rigorous impact evaluations of a range of social programs, many of which showed little or no impact, I have come to appreciate the wide range of evaluation methodologies that exist but are, unfortunately, little used in program design and implementation. Some of these methodologies come from different disciplines or areas of specialization and are not readily accessible to evaluation teams. To bridge this gap, I worked closely with leading experts and eminent scholars on program evaluation to produce an edited Handbook of Program Design and Implementation Evaluation (published by Oxford University Press), which brings these methodologies, applicable across the life cycle of a program, under one roof. They include how to conduct developmental evaluations, perform rapid-cycle evaluations, employ implementation science concepts, measure cost-effectiveness, scale up promising interventions, and evaluate change in systems. Incorporating such approaches at the appropriate stages of a program's life cycle can improve its chances of success; doing so consistently across social programs should improve their impact overall. This much-needed handbook, to be released later this summer, will serve as a valuable resource for social researchers, faculty and students, program practitioners, policy analysts, and funders of social programs and evaluations.

If you would like to get in touch with Anu, please email: Anuradha.Rangarajan1@student.lshtm.ac.uk
