1997
DOI: 10.1016/s0734-3310(97)90036-7

Evaluation of academic librarians' instructional performance: Report of a national survey

Cited by 22 publications (10 citation statements)
References 5 publications
“…The American Library Association's manual (Shonrock, 1996) for evaluating library instruction is heavily weighted towards formative evaluation; twelve of the fourteen sections that present model questions for evaluation instruments are devoted to issues related to instructional programs rather than learning outcomes. Perhaps Ragains' (1997) report on the results of a national survey of evaluation practices should come as no surprise; nearly 75% of respondents report the use of "reaction data" (satisfaction surveys), while less than half that report testing student knowledge. This reliance primarily on reaction or attitudinal data has been criticized for failing to measure what students actually learn (Colborn & Cordell, 1999): "At most, it provides information about how the student perceives the librarian's presentation and how he or she feels about the library and/or librarian" (p. 125).…”
Section: Assessment Of Library Instruction
confidence: 99%
“…Similarly, the large body of works on library instruction has only a small subset that focuses on evaluation. Studies measuring how much learning has occurred in the one-time session, or how it has transformed the learner, are not easily found in the literature because “the evaluation of library instruction tends to focus upon attendees' perceptions of the librarian's performance.” 19 As such, librarians rarely have a way of knowing how much students have learned. The situation is even more difficult for assessing instruction to LEP students; there is hardly any material in the literature that deals with assessing the output of LEP students.…”
Section: Output/learning Outcomes
confidence: 99%
“…Previous surveys of the literature have found minimal evidence that evaluation in library instruction programs includes meaningful assessment of student learning (Ragains, 1997; Colborn, 1998; Warner, 2003). In his survey of 44 academic library instruction coordinators, Ragains (1997) concluded that the most frequently gathered assessment data is "reaction data" such as student or faculty satisfaction surveys.…”
Section: Assessment: Why Is It So Elusive?
confidence: 99%