How Evaluators Can Use a Complex Systems Lens to Get “Untrapped” From Limiting Beliefs and Assumptions

Year: 2018
DOI: 10.1177/1098214018756578

Abstract: Evaluators are becoming increasingly aware that, to provide maximum benefit to the programs they evaluate, they must address the systems within which the programs work. However, evaluation practice is often influenced by beliefs and assumptions that are rooted in an understanding of these systems as stable and predictable. We identify four traps that can limit the usefulness of program evaluations when the instability and uncertainty of systems are not acknowledged. We explore how evaluators can get “untrapped…

Cited by 9 publications (9 citation statements), published 2019–2023.
References 17 publications (21 reference statements).
“…Evaluation theory and methods have largely been developed for evaluating discrete programs and policies, not for initiatives to change systems (Shadish et al., 1991). Over the last two decades or so, scholars have challenged program evaluation, arguing that its underlying assumptions are ill suited to the complexity of problems and to the kinds of interventions necessary to address root causes and influence large-scale systems (e.g., Barnes et al., 2003; Gates, 2016; Hawe et al., 2009; Moore et al., 2019; Reynolds et al., 2012; Walton, 2016; Williams & Imam, 2007). For example, evaluations often presuppose a linear causal relationship between problems, interventions, and desired outcomes (Gates, 2016), as illustrated in logic models (Fujita, 2010), and privilege attribution of outcomes to individual interventions rather than examining the embeddedness and contributory roles of multiple initiatives in changing systems and conditions (Hawe et al., 2009; Stern et al., 2012).…”
Section: Introduction (mentioning)
Confidence: 99%
“…The ambition to develop novel methods that can deal with complexity and uncertainty has inspired a new wave of research and investigation. In the evaluation field, systems approaches are looked to for guidance on dealing with complexity, path dependence, actor diversity, emergence, uncertainty, and non-linearity (Patton, 2011; Ramalingam et al., 2014; Larson, 2018; Moore et al., 2019). Some approaches focus more on actor diversity and the social implications of complexity, while others focus more on non-linearity and uncertainty in systems. Examples of the first are the actor-oriented evaluation approach of Van Ongevalle et al. (2014), game-theory methods for evaluation (Hermans et al., 2014), and methods such as network analysis and agent-based modeling (Ramalingam et al., 2014; Moore et al., 2019). Examples of approaches for dealing with uncertainty and emergence in systems include outcome mapping (Earl et al., 2001), developmental evaluation (Patton, 2011; Lawrence et al., 2018), problem-driven iterative adaptation (Andrews et al., 2013), complexity-informed theories of change (Ramalingam et al., 2014), outcome harvesting (Wilson-Grau and Britt, 2012), and strategy testing (Ladner, 2015).…”
Section: Approaches to Plan, Monitor and Evaluate the Long-Term Sustainability of Development Interventions (mentioning)
Confidence: 99%
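The excerpt above names network analysis and agent-based modeling among the methods aimed at non-linearity and uncertainty. As a purely illustrative sketch, not code from any of the cited papers, the following minimal agent-based model (all function names and parameters are invented) shows the kind of emergent, non-linear outcome such methods are designed to surface: adoption of a program behavior spreads through peer contact, so the final outcome is not proportional to the initial "dose" of the intervention.

```python
import random

# Minimal, hypothetical agent-based model of program-behavior adoption
# ("complex contagion"): an intervention seeds some agents as adopters,
# and a non-adopter converts only after meeting at least `threshold`
# adopters among a handful of random peer contacts. Final adoption is
# therefore non-linear in the seeding rate: small seeds tend to stall,
# larger seeds tend to cascade. All parameters are invented.

def run_model(n_agents=200, seed_rate=0.05, threshold=2,
              contacts=4, influence=0.5, steps=50, seed=42):
    rng = random.Random(seed)
    adopted = [rng.random() < seed_rate for _ in range(n_agents)]
    for _ in range(steps):
        for i in range(n_agents):
            if adopted[i]:
                continue
            # A non-adopter meets a few random peers; enough exposure
            # to adopters converts them with probability `influence`.
            peers = rng.sample(range(n_agents), contacts)
            exposures = sum(adopted[p] for p in peers)
            if exposures >= threshold and rng.random() < influence:
                adopted[i] = True
    return sum(adopted) / n_agents

if __name__ == "__main__":
    for rate in (0.02, 0.05, 0.10):
        print(f"seed_rate={rate:.2f} -> final adoption {run_model(seed_rate=rate):.2f}")
```

Because conversion requires multiple exposures, doubling the seed typically does not double the outcome; the system either stalls or tips toward widespread adoption, the kind of path-dependent, emergent behavior that the cited authors argue linear evaluation assumptions miss.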
“…Evaluators implementing experimental and quasi-experimental research designs often include different methods that aim to deal with complexity. Qualitative approaches can be used to triangulate findings and to explain differences in the outcomes experienced by participants (Bamberger et al., 2010; Haymand, 2013; Pierre, 2004; Reichardt and Mark, 2004; Stern et al., 2012, cited in Moore et al., 2018). What is significant to note, however, is that such data tend to be used in establishment-oriented evaluation to explain the limitations and ‘contaminations’ of the study (Moore et al., 2018: 9–10). Data on complexity are very rarely used to improve the capacity for learning about the dynamic and unexpected interactions that happen within the systems in which the intervention is situated.…”
Section: Privileging Pre-determined Relations of Cause and Effect and… (mentioning)
Confidence: 99%