Process evaluation

Randomised controlled trials (RCTs) are considered the most rigorous way to evaluate intervention effectiveness. However, it is not enough for an evaluation to report solely on effectiveness. Evaluations should also provide information on the planning, delivery, and uptake of the intervention, the causal pathways through which the intervention is expected to act, and the contextual factors affecting the implementation and outcomes of the intervention – the process that takes place.

Key Resources For Learning About Process Evaluation

There is growing recognition of the need to combine evaluation of outcomes with evaluation of process in evaluations of complex interventions. The UK Medical Research Council (MRC) made this recommendation in their updated guidance on the evaluation of complex interventions:

Developing and evaluating complex interventions: the new Medical Research Council guidance
Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ. 2008 Sep 29;337:a1655.

They state that including a process evaluation is important to explain discrepancies between expected and observed outcomes, to understand how context influences outcomes, and to provide insights to aid implementation in other contexts.

The UK Medical Research Council (MRC) guidance on process evaluation is a helpful resource to aid the conduct of process evaluations. It offers a comprehensive review of process evaluation theory and a practical guide to the planning, design, conduct, reporting, and appraisal of process evaluations of complex interventions. It was produced through an extensive review of the literature, examination of case studies, and consultations with stakeholders.

Process evaluation of complex interventions: UK Medical Research Council (MRC) guidance
Moore G, Audrey S, Barker M, Bond L, Bonell C, Hardeman W, Moore L, O'Cathain A, Tinati T, Wight D, Baird J. Process evaluation of complex interventions: UK Medical Research Council (MRC) guidance. 2014.

The following book is another useful resource. It provides a review of the history and evolution of process evaluation as described in the health education, health promotion, and health behaviour literature. It also identifies and defines components of process evaluation and provides guidance on their design. The examples included in the book vary and are organized into four categories: community, worksite, school, and national and state. The book may be most useful to those with substantial prior knowledge of program evaluation and research.


Linnan, L. and A. Steckler (2002). Process Evaluation for Public Health Interventions and Research. San Francisco, CA: Jossey-Bass.

In a publication by Oakley et al., the authors use an example from the RIPPLE study (a pupil-led sex education intervention implemented in secondary schools in England) to argue that including a process evaluation would improve the science of many randomised controlled trials, and they outline a framework for using process evaluation as an integral element of RCTs.


Oakley A, Strange V, Bonell C, Allen E, Stephenson J & the RIPPLE Study Team. Process evaluation in randomised controlled trials of complex interventions. British Medical Journal 2006; 332: 413-416.

A publication by Bonell et al. examines the issue of generalisability, noting the lack of a framework to guide the assessment and reporting of the generalisability of trials to inform policy and practice decisions. The authors suggest that documenting the delivery of an intervention, by embedding an evaluation of process in trials, could aid generalisability assessment.


Bonell C, Oakley A, Hargreaves J, Strange V, Rees R. Assessment of generalisability in trials of health interventions: suggested framework and systematic review. British Medical Journal 2006; 333: 346-349.

Implementation

What was the intervention? Can success or failure be attributed to the intervention itself or to its implementation?

Process evaluations attempt to document how an intervention is implemented and what is actually delivered, compared with what was intended to be delivered. Implementation can be examined in terms of fidelity, reach, and dose delivered, as well as any unanticipated additional activities or adaptations that had to be made to the intervention in a given context, such as changes to its content.
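To make these measures concrete, the sketch below shows one way fidelity, reach, and dose delivered might be computed from simple delivery records. It is a minimal illustration only: the records, targets, and field names are hypothetical and not drawn from any study cited here.

```python
# Minimal sketch of common implementation measures, computed from
# hypothetical delivery records. All names and numbers are illustrative.

planned_components = {"session_1", "session_2", "session_3", "leaflet"}
delivered_components = {"session_1", "session_2", "leaflet"}  # session_3 dropped

target_population = 400      # people the intervention aimed to reach
participants_reached = 310   # people who received at least one component

sessions_planned = 12        # total sessions intended across all sites
sessions_delivered = 9       # total sessions actually run

# Fidelity: proportion of planned components delivered as intended.
fidelity = len(planned_components & delivered_components) / len(planned_components)

# Reach: proportion of the target population that received the intervention.
reach = participants_reached / target_population

# Dose delivered: amount of the intended intervention actually provided.
dose_delivered = sessions_delivered / sessions_planned

print(f"Fidelity: {fidelity:.0%}, Reach: {reach:.0%}, Dose delivered: {dose_delivered:.0%}")
```

A real process evaluation would also record the qualitative side of implementation, such as why a component was dropped or adapted, which simple ratios like these cannot capture.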

In the following article, the authors reviewed 162 outcome studies of primary and early secondary prevention programs published between 1980 and 1994 and examined the extent to which fidelity was verified and promoted, and dose documented, in these evaluations. They also discuss the extent to which inconsistencies in fidelity and dosage may compromise the internal validity of evaluations of preventive interventions, and the potential effectiveness of these programs.


Dane, A. V., & Schneider, B. H. (1998). Program integrity in primary and early secondary prevention: are implementation effects out of control? Clinical Psychology Review, 18(1), 23-45.

In the following article, Sharon Mihalic from the Blueprints for Violence Prevention Initiative, an initiative to identify and implement model violence prevention and control programs in the State of Colorado, discusses the importance of implementation fidelity in preserving the behaviour change mechanisms that make the original models effective.


Mihalic, S. (2004). The importance of implementation fidelity. Emotional and Behavioral Disorders in Youth, 4(4): 83-105.

A different publication, by Durlak and DuPre, examines 500 studies to assess the degree to which the level of implementation affects the outcomes obtained in promotion and prevention programs, and to identify factors that affect the implementation process. They conclude that the collection of implementation data is an essential feature of program evaluations, and that more information is needed on which factors influence implementation, and how, in different community settings.


Durlak, J. A., & DuPre, E. P. (2008). Implementation matters: a review of research on the influence of implementation on program outcomes and the factors affecting implementation. American Journal of Community Psychology, 41, 327-350.

The issue of the appropriate balance between maintaining strict implementation fidelity and enabling local adaptation is addressed in a different way in a paper by Penny Hawe and colleagues. They argue that in the case of complex interventions, fidelity should be judged not on whether the precise form of delivery is faithful to what was intended, but on fidelity of function: whether what is delivered locally enables the mechanisms of causation described in the intervention's theory of change.


Hawe, P., Shiell, A., & Riley, T. (2004). Complex interventions: how "out of control" can a randomised controlled trial be? British Medical Journal, 328, 1561-1563.

The following publication argues that applied research on the implementation of interventions, the focus of the field of 'Implementation Science', should be founded on theory that provides a basis for understanding, designing, predicting, and evaluating dynamic implementation processes. The purpose of the paper is to contribute a theoretical framework that characterises and explains implementation processes in terms of the social processes that lead from inception to practice. It does so by building on Normalization Process Theory and integrating it with other social and cognitive theories from sociology and psychology.


May, C. (2013) Towards a general theory of implementation. Implementation Science, 8, 1, 18.

Mixed-method designs are increasingly used in implementation research to develop an understanding of, and to overcome, barriers to implementation. The following literature review examines 22 studies in mental health services research, published between 2005 and 2009, that applied mixed methods to implementation research, and reports on how these methods were used and the reasons for their use.


Palinkas LA, Aarons GA, Horwitz S, Chamberlain P, Hurlburt M, Landsverk J. Mixed method designs in implementation research. Administration and Policy in Mental Health and Mental Health Services Research. 2011 Jan 1;38(1):44-53.

Mechanisms of Impact

Did the intervention have its intended effects? Can success or failure be attributed to the intended mechanism of change?

It can be helpful to distinguish a focus on 'mechanisms' as the way change occurs once an intervention has been initiated, from 'implementation' as the initial delivery of the intervention. The study of mechanisms includes a study of participant responses to the intervention, understanding how change is happening, and capturing the unintended consequences that may result from the intervention.

How the intervention is received by participants can be examined in terms of its acceptability and the dose received.

How change occurs and what shapes it is a fundamental question for many academic disciplines, each taking a different theory-driven approach to understanding it. The LSHTM Centre for Evaluation brings together researchers from a range of disciplinary backgrounds who seek to understand pathways of change through these approaches:

Theory of Change (ToC) and Diagrammatic Logic Models

The theory of change explains how an intervention is intended to produce the desired effect. It entails making hypotheses about the causal mechanisms by which the components and activities of an intervention will lead to outcomes. The logic model is a diagrammatic representation of the theory of change.
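As a minimal illustration of the shape such a model takes, the sketch below encodes a hypothetical logic model (a generic school-based health-education intervention, invented for this example) as an ordered structure linking inputs, activities, outputs, and outcomes. Each step corresponds to an arrow in the diagram and to a hypothesised causal link that a process evaluation can test.

```python
# Minimal sketch of a diagrammatic logic model encoded as data.
# The intervention, activities, and outcomes are hypothetical examples.

logic_model = {
    "inputs":              ["funding", "trained educators", "curriculum materials"],
    "activities":          ["deliver classroom sessions", "distribute leaflets"],
    "outputs":             ["12 sessions delivered", "400 pupils reached"],
    "short_term_outcomes": ["improved knowledge", "changed attitudes"],
    "long_term_outcomes":  ["reduced risk behaviour"],
}

# Each adjacent pair of stages stands for a hypothesised causal step
# that the evaluation compares against what is actually observed.
for stage, items in logic_model.items():
    print(f"{stage}: {', '.join(items)}")
```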

The UK Medical Research Council's (MRC) guidance on the development and evaluation of complex interventions notes that identifying and developing a theoretical understanding of the likely process of change is a key early task when developing a complex intervention or evaluating one that has already been developed.

Developing and evaluating complex interventions: the new Medical Research Council guidance
Craig P, Dieppe P, Macintyre S, Michie S, Nazareth I, Petticrew M. Developing and evaluating complex interventions: the new Medical Research Council guidance. BMJ. 2008 Sep 29;337:a1655.

The ToC approach has been successfully used to design, implement, and evaluate complex community initiatives, and more recently has been applied to complex health interventions, including at LSHTM. A ToC can improve the evaluation of interventions by combining evaluation of intervention effectiveness with a detailed process evaluation in one theoretical framework. The ToC approach to evaluation tests the hypothesised causal mechanisms against what is observed to have happened. This allows evaluators to identify which components and activities were strong and effective in achieving the outcomes, and which were weak or absent, limiting the intervention's effects.

The ToC and logic model are developed in collaboration with stakeholders and draw on various sources of information, including academic theories and evidence from a range of disciplines, program experience, and the insights of healthcare providers, service users, and carers.

Evaluations based on a ToC have sometimes been referred to as theory-based evaluations. In the following article, the author describes how theory-based evaluation allows an understanding of how programs work by examining how the hypothesised causal mechanisms operate in practice, as well as how successfully implementation is achieved.


Weiss CH. Theory-based evaluation: Past, present, and future. New Directions for Evaluation. 1997 Dec 1;1997(76):41-55.

In a review by Rogers et al., the authors highlight that this approach of basing an evaluation on a logic, or causal, model is not new and has been recommended by evaluators since the 1960s. They refer to it as 'program theory evaluation' and describe its historical development, the current variations in theory and practice, and the problems it poses.


Rogers, P.J., Petrosino, A., Huebner, T.A. and Hacsi, T.A. (2000) Program theory evaluation: Practice, promise, and problems, New Directions for Evaluation, 2000, 87, 5-13.

The following guide provides an orientation to the underlying principles and language of the logic model so it can be effectively used in program planning, implementation, and evaluation. It also offers a range of exercises and examples focused on the development of a logic model that reflects the underlying theory of change.


W.K. Kellogg Foundation (2004) Logic Model Development Guide. Battle Creek, MI: W.K. Kellogg Foundation.

A ToC can also be used to predict whether interventions might have harmful effects, both to mitigate and to evaluate these.


Bonell C, Jamal F, Melendez-Torres GJ, Cummins S. 'Dark logic': theorising the harmful consequences of public health interventions. Journal of Epidemiology and Community Health 2015;69(1):95-8.

Theory about the mechanisms and contextual factors that may affect them can be quantitatively tested through causal modelling. Mediation analysis is used to examine the hypothesised mechanisms, and analysis of moderation can be used to examine the contextual factors that may influence the effect of the intervention.


Hardeman, W., Sutton, S., Griffin, S., Johnston, M., White, A., Wareham, N.J. and Kinmonth, A.L. (2005) A causal modelling approach to the development of theory-based behaviour change programmes for trial evaluation. Health Education Research, 20, 6, 676-687.
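As an illustration of these two analyses, the sketch below shows one way they might be run in Python with statsmodels: a parametric mediation analysis for a hypothesised mechanism, and a treatment-by-context interaction term as a simple test of moderation. The data are simulated and all variable names are hypothetical; this is a minimal sketch, not the approach of the cited study.

```python
# Hedged sketch: testing a hypothesised mechanism (mediation) and a
# contextual moderator on simulated data. Variable names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.mediation import Mediation

rng = np.random.default_rng(0)
n = 500
treat = rng.integers(0, 2, n)                  # randomised intervention arm
context = rng.integers(0, 2, n)                # e.g. urban vs rural site
mediator = 0.5 * treat + rng.normal(size=n)    # hypothesised mechanism
outcome = 0.4 * mediator + 0.2 * treat * context + rng.normal(size=n)
df = pd.DataFrame({"treat": treat, "context": context,
                   "mediator": mediator, "outcome": outcome})

# Mediation: does the intervention act through the hypothesised mediator?
outcome_model = sm.OLS.from_formula("outcome ~ treat + mediator", df)
mediator_model = sm.OLS.from_formula("mediator ~ treat", df)
med = Mediation(outcome_model, mediator_model, "treat", "mediator").fit(n_rep=200)
print(med.summary())                           # indirect (ACME) and direct effects

# Moderation: does the treatment effect vary with context?
mod = smf.ols("outcome ~ treat * context", df).fit()
print(mod.summary().tables[1])                 # treat:context interaction term
```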

Examples

LSHTM is using Theory of Change in the implementation and evaluation of two major trials:


Breuer E, De Silva MJ, Fekadu A, Luitel NP, Murhar V, Nakku J, Petersen I, Lund C. Using workshops to develop theories of change in five low and middle income countries: lessons from the programme for improving mental health care (PRIME). International Journal of Mental Health Systems. 2014 Apr 30;8(1):15.


De Silva MJ, Breuer E, Lee L, Asher L, Chowdhary N, Lund C, Patel V. Theory of Change: a theory-driven approach to enhance the Medical Research Council's framework for complex interventions. Trials. 2014 Jul 5;15(1):267.

Realist Evaluation

This approach to evaluation understands change as the result of actions of social agents operating in a specific context, whereby the action leads to outcomes by triggering mechanisms. It emphasises that mechanisms are contingent on the context and that outcomes are produced by the interaction of context and mechanisms. Evaluations are therefore based on these Context-Mechanism-Outcome configurations to explain 'what works for whom in what circumstances and in what respects, and how?' This is especially useful in exploring complex social phenomena.
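Purely as an illustration, the sketch below records hypothesised Context-Mechanism-Outcome configurations as structured data so that they can later be compared against what is observed. The configurations themselves are invented examples, not drawn from any cited evaluation.

```python
# Minimal sketch of Context-Mechanism-Outcome (CMO) configurations as data.
# The configurations are hypothetical examples.
from dataclasses import dataclass

@dataclass
class CMOConfiguration:
    context: str    # circumstances in which the intervention operates
    mechanism: str  # response the intervention triggers in that context
    outcome: str    # pattern of outcomes produced by the interaction

hypothesised = [
    CMOConfiguration(
        context="clinic with supportive management",
        mechanism="staff feel ownership and adopt the new protocol",
        outcome="sustained improvement in care quality",
    ),
    CMOConfiguration(
        context="understaffed clinic",
        mechanism="protocol perceived as extra workload and resisted",
        outcome="little or no change in care quality",
    ),
]

for cmo in hypothesised:
    print(f"In {cmo.context}: {cmo.mechanism} -> {cmo.outcome}")
```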

Pawson and Tilley founded this approach to evaluation and set it out in the following book:


Pawson, R., & Tilley, N. (1997). Realistic Evaluation. London: Sage.

There is controversy in the field as to whether realist evaluation is an alternative or a complement to randomised controlled trials. The following publications are examples of the differing views:


Bonell C, Fletcher A, Morton M, Lorenc T, Moore L. 'Realist randomised controlled trials': a new approach to evaluating complex public health interventions. Social Science and Medicine 2012;75(12):2299-306.


Marchal B, Westhorp G, Wong G, Greenhalgh T, Kegels G, Pawson R. Realist RCTs of complex interventions – an oxymoron. Social Science and Medicine 2013; 94: 124-8.


Bonell C, Fletcher A, Morton M, Lorenc T, Moore L. Methods don't make assumptions, researchers do: a response to Marchal et al. Social Science and Medicine 2013; 94: 81-2.


Van Belle S, Wong G, Westhorp G, Pearson M, Emmel N, Manzano A, Marchal B. Can "realist" randomised controlled trials be genuinely realist? Trials. 2016;17(1):313.

Examples

Examples of such research at LSHTM include the current work done by Sara Van Belle on strategies, mechanisms and conditions to ensure public accountability at the local health system level in low and middle income countries and the work of Chris Bonell and colleagues on realist trials:


Bonell C, Warren E, Fletcher A, Viner R. Realist trials and the testing of context-mechanism-outcome configurations: a response to Van Belle et al. Trials 2016 Oct 1;17(1):478.


Van Belle, S., Marchal, B., Dubourg, D., Kegels, G. (2010) How to develop a theory-driven evaluation design? Lessons learned from an adolescent sexual and reproductive health programme in West Africa, BMC Public Health, 10, 741.


Marchal, B., Van Belle, S., Van Olmen, J., Hoeree, T., Kegels, G. (2012) Is Realist Evaluation keeping its promise? A literature review of methodological practice in Health Systems Research, Evaluation, 18, 192-212.

Context

Contextual factors shape the theory of change and affect the implementation, causal mechanisms, and outcomes of an intervention. Process evaluators should capture how context is affected by an intervention, as well as how contextual factors can change an intervention.

Evaluators capture how contextual factors affect implementation by considering which components of the intervention had to be adapted, or modified, to fit the context. Contextual factors may also affect how the target audience receives and reacts to the intervention, thereby influencing the causal mechanisms; hypotheses about these mechanisms should therefore be generated with consideration of how contextual factors might strengthen or weaken the intervention and thereby affect outcomes.

Therefore, contextual factors should be considered across all aspects of a process evaluation.

The following examples stress the importance of considering context in the implementation of interventions:


Bergstrom A, Skeen S, Duc DM, et al. (2015). Health system context and implementation of evidence-based practices – development and validation of the Context Assessment for Community Health (COACH) tool for low- and middle-income settings. Implementation Science. 10: 120.


Hansen, H.P. & Tjørnhøj-Thomsen, T. Meeting the Challenges of Intervention Research in Health Science: An Argument for a Multimethod Research Approach. Patient (2016) 9: 193. doi:10.1007/s40271-015-0153-9