Proceedings of the 33rd Annual ACM SIGUCCS Conference on User Services 2005
DOI: 10.1145/1099435.1099502
Tying benchmarks and metrics to evaluations and organizational performance

Abstract: Mission, vision, and objectives statements are standard items created for most information technology units. Alignment of these with both the overall University mission and individual staff performance goals is often weak or lacking. Building upon the work of Kohrman and Trinkle [1], objectives for Indiana State University's Instructional and Research Technology Services (IRTS) were written as facilitating activities and built to be SMART (Specific, Measurable, Aggressive but attainable, Rewarding, and Time-bo…

Cited by 2 publications (2 citation statements)
References 2 publications (1 reference statement)
“…In conducting a literature review on metrics in instructional technology, the author found few papers on the topic, but one paper in particular had a nice quote: "Setting benchmarks and metrics requires time and commitment, but in the end, the fruits of that labor are obvious and help each unit prepare their story of their contributions to the University [2]."…”
Section: Discussion
confidence: 99%
“…In this context, computational resources are often considered a useful node for AI governance. Since there is a strong correlation between the amount of compute used for training and the capabilities of the resulting model [26], detecting where large amounts of compute are being used may enable governments to develop early awareness of which actors are likely to be developing and deploying highly capable systems [27]. This could be possible if developers report their training resources or if regulators monitor training runs or even the semiconductor supply chain [28].…”
Section: Theoretical Proposals To Regulate Frontier AI Models
confidence: 99%