2018 ASEE Annual Conference & Exposition Proceedings
DOI: 10.18260/1-2--30630

Improving Instruction and Assessment via Bloom’s Taxonomy and Descriptive Rubrics

Abstract: She joined SJSU in 2015, and her research is focused on thermo-fluids problems in sustainable energy, particularly the effect of turbulence on a wide variety of technologies. She teaches courses in thermodynamics, fluid mechanics, and heat transfer, and she is interested in studying the intersection of pedagogy and assessment.

Cited by 3 publications (13 citation statements)
References 13 publications
“…We primarily considered research on accreditation topics in popular engineering education and educational psychology journals and conference proceedings spanning the last 15 years. The results of the literature review were parsed using an OBE-theory-based qualitative analysis of CQI systems to yield the summary below: PIs are used to assess SOs and are mostly generic, thereby lacking the required specificity for valid and reliable assessment and evaluation [20, 23, 25–26, 28, 30–31, 36, 39–41, 45, 48, 56–58]. Most rubrics are generic and vague, use simplistic language, and lack technical details to accurately assess several hundred complex engineering activities [20, 23, 26, 41, 56–58]. Generic rubrics are applied by independent raters to score past course portfolios [36, 40–41, 45…”
Section: SOs Data for Manual CQI Systems - A Qualitative Analysis
confidence: 99%
“…As per Adelman (2015), the language of PIs should be specific to accurately align with course content and student learning activity [20]. The PIs should be assessed in courses from all phases of a curriculum to achieve learning progression toward proficiency in engineering skills [20, 23, 26, 41, 52, 56–58, 76].…”
Section: 'Design Down' Mapping Model from Goals to Performance Indicators
confidence: 99%
“…Learning models are generally not understood and used comprehensively as the founding framework for CQI efforts [3], [16], [42], [49], [62], [67]. Language of Course Outcomes (COs) and associated PIs is deficient and lacks alignment with actual learning activities [3], [16], [28], [29], [42], [49], [62]–[64], [67]. PIs are mostly generic and lack the required specificity to achieve required validity and reliability in assessment and evaluation [16], [21], [23], [24], [26], [28], [29], [35], [40]–[42], [46], [49], [67]–[69]. Most rubrics are generic, simplistic and vague, and lack the necessary detail to accurately assess several hundred complex student learning activities of any engineering specialization [16], [21], [24], [42], [67]–[69].…”
Section: Research Framework
confidence: 99%
“…The motivation to use scoring rubrics in engineering education also stems from dissatisfaction with the traditional grading process, which has been criticized for its bias and unrealistic standards [2,9,15]. Rubrics are attractive since they can be adjusted to assess specific skills and describe precisely the expected outcomes [16]. In addition, they can convey the professor's expectations to students, make the assessment method more transparent [17], and facilitate providing feedback to students on the quality and quantity of student learning.…”
Section: Introduction
confidence: 99%