1999
DOI: 10.1177/001789699905800406
Evaluating community development for health: a survey of evaluation activity across the Lothians

Abstract: This research examines the experiences of project workers in one Scottish Health Board area in evaluating community development for health initiatives. A field study was undertaken, involving semi-structured telephone interviews with workers from 15 community health projects across the Lothians. Supplementary data were gathered by documentary analysis. A predominance of process evaluation was found, with a bias toward the use of qualitative methods. Impact evaluation was also undertaken. Lack of resources emerg…

Cited by 6 publications (3 citation statements) · References 2 publications
“…The increasing emphasis on evidence-based practice in public health (Coombes and Thorogood 2004) and greater accountability in the use of resources within the health sector worldwide add to pressure to demonstrate effectiveness (Nutbeam 1999). Funders have tended to favour evaluation research that focuses on measuring effectiveness in strictly quantitative terms, owing to their own need for accountability and to demonstrate 'value for money' (Smart 1999). Injuries lend themselves well to outcome evaluations relatively early in the programme history since they can be thought of as short 'incubation period' diseases (Thompson and Sacks 2001).…”
Section: Discussion
confidence: 99%
“…Perhaps the most appropriate indicator of a relevant health promotion evaluation method is that good evaluation reflects the values of the activities that are to be measured (MacDonald 1998, Smart 1999). Rolls (1999) argues that health promotion activities need to be evaluated in their own terms and not underpinned by different criteria and values than those on which they are based.…”
Section: The Dilemma of Which Methods for Which Task
confidence: 99%
“…Injury incidence is usually considered the most important outcome indicator in injury prevention programmes (Benson 1995, Menckel 1999). Unquestionably, for many, reducing injury frequency is 'hard evidence' of effectiveness, and the growing need to prove value-for-money has meant that funders favour evaluation research that measures effectiveness in quantitative terms (Smart 1999). However, the rather narrow definition of injury prevention success as injury incidence reduction is not uncontested.…”
Section: Assessed Elements
confidence: 99%