This paper proposes that the field of AIED is now mature enough to break away from being delivered mainly through computers and tablets, so that it can engage with students in new ways and help teachers to teach more effectively. Most of the intelligent systems that AIED has delivered so far have run on computers and other devices designed essentially for business or personal use, not specifically for education. The future holds the promise of technologies designed specifically for learning and teaching, created by combining the power of AIED with advances in robotics and the increasing use of sensor devices that monitor our surroundings and actions. The paper assumes that "schools" (i.e., places where children gather to learn) will still exist in some shape or form in 25 years and that teachers will continue to oversee and promote learning among students. It proposes that educational cobots will assist teachers in the classrooms of tomorrow and provides examples from current work in robotics. It also envisions smart classrooms that use sensors to support learning and illustrates how they might be used in new ways if AIED applications are embedded into them.
This research examines science-simulation software available for grade 6–12 science courses. The study, funded by the National Science Foundation, had two objectives: a literature synthesis and a product review. The literature synthesis examines research findings on grade 6–12 student learning gains and losses when using virtual laboratories and science-simulation software, drawn from a review of 79 relevant studies. Based on that literature, significant aspects of how such products influence student learning are identified, and tables summarize the research-based evidence about best practices in instructional design for virtual-lab and simulation products. Selected products were then reviewed as case studies to determine in what ways, and to what extent, they implement these research-identified best practices. The overall goal was to consider where the most progress is being made in effective virtual-lab and simulation products and what directions future development should take. The intent is to inform science educators, teachers, administrators, and policy makers who are using, buying, and evaluating middle and high school instructional materials.
Note: an additional pattern (Pattern I; see Appendix A for codes) was identified, but it simply indicated that no particular learning-design principles were reported in the study's results, only overall intervention outcomes.
a. Includes developing research questions, designing experiments, setting up projects, getting and analyzing data, noisy data and error analysis, synthesizing results, and scaffolding needs for successful inquiry.
b. Includes providing opportunities for scientific discussion and debate.
This article reports on the collaboration of six states to study how simulation‐based science assessments can become transformative components of multi‐level, balanced state science assessment systems. The project studied the psychometric quality, feasibility, and utility of simulation‐based science assessments designed to serve formative purposes during a unit and to provide summative evidence of end‐of‐unit proficiencies. The frameworks of evidence‐centered assessment design and model‐based learning shaped the specifications for the assessments. The simulations provided the three most common forms of accommodations in state testing programs: audio recording of text, screen magnification, and support for extended time. The SimScientists program at WestEd developed simulation‐based, curriculum‐embedded, and unit benchmark assessments for two middle school topics, Ecosystems and Force & Motion. These were field‐tested in three states. Data included student characteristics, responses to the assessments, cognitive labs, classroom observations, and teacher surveys and interviews. UCLA CRESST conducted an evaluation of the implementation. Feasibility and utility were examined through classroom observations, teacher surveys and interviews, and by the six‐state Design Panel. Evidence of technical quality included AAAS reviews of the items' alignment with standards and of the quality of the science, along with cognitive labs and assessment data. Student data were analyzed using multidimensional Item Response Theory (IRT) methods. IRT analyses demonstrated the high psychometric quality (reliability and validity) of the assessments and their discrimination between content knowledge and inquiry practices. Students performed better on the interactive, simulation‐based assessments than on the static, conventional items in the posttest.
Importantly, gaps between performance of the general population and English language learners and students with disabilities were considerably smaller on the simulation‐based assessments than on the posttests. The Design Panel participated in development of two models for integrating science simulations into a balanced state science assessment system. © 2012 Wiley Periodicals, Inc. J Res Sci Teach 49: 363–393, 2012
How can assessments measure complex science learning? Although traditional multiple-choice items can effectively measure declarative knowledge such as scientific facts or definitions, they are considered less well suited to providing evidence of science inquiry practices such as making observations or designing and conducting investigations. Thus, students who perform very proficiently in "science" as measured by static, conventional tests may have strong factual knowledge but little ability to apply this knowledge to conduct meaningful investigations. As technology has advanced, interactive, simulation-based assessments promise to capture information about these more complex science practice skills. In the current study, we test whether interactive assessments may be more effective than traditional, static assessments at discriminating student proficiency across three types of science practices: (a) identifying principles (e.g., recognizing principles), (b) using principles (e.g., applying knowledge to make predictions and generate explanations), and (c) conducting inquiry (e.g., designing experiments). We explore three modalities of assessment: static, most similar to traditional items, in which the system presents still images and does not respond to student actions; active, in which the system presents dynamic portrayals, such as animations, that students can observe and review; and interactive, in which the system depicts dynamic phenomena and responds to student actions. We use three analyses: a generalizability study, confirmatory factor analysis, and multidimensional item response theory, to evaluate how well each assessment modality can distinguish performance on these three types of science practices. The comparison of performance on static, active, and interactive items found that interactive assessments might be more effective than static assessments at discriminating student proficiencies for conducting inquiry.
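The IRT analyses named above rest on item response functions that relate a latent proficiency to the probability of answering an item correctly. As a loose, hypothetical illustration only (not the studies' actual multidimensional models, and with made-up item parameters), a minimal unidimensional two-parameter logistic (2PL) sketch in Python:

```python
import numpy as np

rng = np.random.default_rng(0)

def p_correct(theta, a, b):
    """2PL item response function: P(correct | ability theta,
    item discrimination a, item difficulty b)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Hypothetical item bank: discriminations (a) and difficulties (b).
a = np.array([1.5, 1.0, 2.0, 0.8, 1.2])
b = np.array([-1.0, 0.0, 0.5, 1.0, -0.5])

# Simulate one examinee with an assumed true ability of 0.7.
true_theta = 0.7
responses = (rng.random(a.size) < p_correct(true_theta, a, b)).astype(int)

def theta_mle(responses, a, b, grid=np.linspace(-4, 4, 801)):
    """Grid-search maximum-likelihood estimate of ability theta,
    treating the item parameters as known."""
    p = p_correct(grid[:, None], a, b)  # shape: (grid points, items)
    loglik = (responses * np.log(p) + (1 - responses) * np.log(1 - p)).sum(axis=1)
    return grid[np.argmax(loglik)]

print(theta_mle(responses, a, b))
```

In operational analyses like those reported here, the item parameters are themselves estimated, multiple correlated proficiency dimensions (e.g., content knowledge vs. inquiry practices) are modeled jointly, and an item's estimated discrimination indicates how sharply it separates students of differing proficiency.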