Does knowledge about which objects and settings tend to co-occur affect how people interpret an image? The effects of consistency on perception were investigated using manipulated photographs containing a foreground object that was either semantically consistent or inconsistent with its setting. In four experiments, participants reported the foreground object, the setting, or both after seeing each picture for 80 ms followed by a mask. In Experiment 1, objects were identified more accurately in a consistent than an inconsistent setting. In Experiment 2, backgrounds were identified more accurately when they contained a consistent rather than an inconsistent foreground object. In Experiment 3, objects were presented without backgrounds and backgrounds without objects; comparison with the other experiments indicated that objects were identified better in isolation than when presented with a background, but there was no difference in accuracy for backgrounds whether they appeared with a foreground object or not. Finally, in Experiment 4, consistency effects remained when both objects and backgrounds were reported. Semantic consistency information is available when a scene is glimpsed briefly and affects both object and background perception. Objects and their settings are processed interactively and not in isolation.
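The brief, masked presentation procedure used in these consistency experiments can be sketched in code. The following is an illustrative outline only (not the authors' experiment script); it assumes the PsychoPy library and hypothetical image files scene.png and mask.png, and approximates the 80 ms exposure with a simple wait.

```python
# Illustrative sketch of one masked brief-presentation trial (assumes PsychoPy;
# file names and durations are placeholders, not the original study's materials).
from psychopy import visual, core, event

win = visual.Window(size=(1024, 768), color='grey', units='pix')
scene = visual.ImageStim(win, image='scene.png')  # consistent or inconsistent object-in-setting photo
mask = visual.ImageStim(win, image='mask.png')    # pattern mask shown immediately after the scene

scene.draw()
win.flip()
core.wait(0.080)   # ~80 ms exposure; frame-locked drawing would give more precise timing

mask.draw()
win.flip()
core.wait(0.200)   # brief mask duration (placeholder value)

win.flip()                # clear the screen
keys = event.waitKeys()   # stand-in for collecting the object/setting report
win.close()
```

In practice, participants' typed or verbal reports of the object and the setting would be recorded and scored offline; the key press above is only a stand-in for that step.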
How does context influence the perception of objects in scenes? Objects appear in a given setting with surrounding objects. Do objects in scenes exert contextual influences on each other? Do these influences interact with background consistency? In three experiments, we investigated the role of object-to-object context on object and scene perception. Objects (Experiments 1 and 3) and backgrounds (Experiment 2) were reported more accurately when the objects and their settings were consistent than when they were inconsistent, regardless of the number of foreground objects. In Experiment 3, related objects (from the same setting) were reported more accurately than were unrelated objects (from different settings), independently of consistency with the background. Consistent with an interactive model of scene processing, both object-to-object context and object-background context affect object perception.
How can we help students develop an understanding of chemistry that integrates conceptual knowledge with the experimental and computational procedures needed to apply chemistry in authentic contexts? The current work describes ChemVLab+, a set of online chemistry activities that were developed using promising design principles from chemistry education and learning science research: setting instruction in authentic contexts, connecting concepts with science practices, linking multiple representations, and using formative assessment with feedback. A study with more than 1400 high school students found that students using the online activities demonstrated increased learning as evidenced by improved problem solving and inquiry over the course of the activities and by statistically significant improvements from pre- to posttest. Further, exploratory analyses suggest that students may learn most effectively from these materials when the activities are used after initial exposure to the content and when they work individually rather than in pairs.
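As a rough illustration of the pre- to posttest comparison described above, a paired t-test on per-student scores could look like the sketch below. The score arrays are placeholder values, not the study's data, and SciPy is an assumed dependency.

```python
# Minimal sketch of a pre/post gain analysis with a paired t-test (placeholder data).
import numpy as np
from scipy import stats

pre  = np.array([12, 15,  9, 14, 11, 13], dtype=float)   # pretest scores, one per student
post = np.array([16, 18, 13, 15, 14, 17], dtype=float)   # posttest scores, same students, same order

t, p = stats.ttest_rel(post, pre)           # paired t-test on the gains
gains = post - pre
d_z = gains.mean() / gains.std(ddof=1)      # within-subject effect size
print(f"t = {t:.2f}, p = {p:.4f}, d_z = {d_z:.2f}")
```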
How can assessments measure complex science learning? Although traditional, multiple-choice items can effectively measure declarative knowledge such as scientific facts or definitions, they are considered less well suited for providing evidence of science inquiry practices such as making observations or designing and conducting investigations. Thus, students who perform very proficiently in "science" as measured by static, conventional tests may have strong factual knowledge but little ability to apply this knowledge to conduct meaningful investigations. As technology has advanced, interactive, simulation-based assessments hold promise for capturing information about these more complex science practice skills. In the current study, we test whether interactive assessments may be more effective than traditional, static assessments at discriminating student proficiency across 3 types of science practices: (a) identifying principles (e.g., recognizing principles), (b) using principles (e.g., applying knowledge to make predictions and generate explanations), and (c) conducting inquiry (e.g., designing experiments). We explore 3 modalities of assessment: static, most similar to traditional items, in which the system presents still images and does not respond to student actions; active, in which the system presents dynamic portrayals, such as animations, that students can observe and review; and interactive, in which the system depicts dynamic phenomena and responds to student actions. We use 3 analyses (a generalizability study, confirmatory factor analysis, and multidimensional item response theory) to evaluate how well each assessment modality can distinguish performance on these 3 types of science practices. The comparison of performance on static, active, and interactive items found that interactive assessments might be more effective than static assessments at discriminating student proficiencies for conducting inquiry.
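Of the 3 analyses named above, the confirmatory factor analysis step is the easiest to sketch. The code below is illustrative only: it assumes the semopy package and simulates placeholder scores for three indicators per practice type rather than using the study's items or data.

```python
# Illustrative CFA sketch: do item scores load on three correlated practice factors
# (identifying, using, conducting inquiry)? Data are simulated placeholders; semopy is assumed.
import numpy as np
import pandas as pd
from semopy import Model, calc_stats

rng = np.random.default_rng(0)
n = 500
# Simulate three correlated proficiencies and three indicators for each.
latent = rng.multivariate_normal(mean=[0, 0, 0],
                                 cov=[[1.0, 0.5, 0.4],
                                      [0.5, 1.0, 0.6],
                                      [0.4, 0.6, 1.0]], size=n)
cols = {}
for f, name in enumerate(["ident", "use", "inq"]):
    for j in range(1, 4):
        cols[f"{name}{j}"] = latent[:, f] + rng.normal(scale=0.7, size=n)
data = pd.DataFrame(cols)

desc = """
identifying =~ ident1 + ident2 + ident3
using       =~ use1 + use2 + use3
inquiry     =~ inq1 + inq2 + inq3
"""
model = Model(desc)
model.fit(data)
print(model.inspect())      # factor loadings and factor covariances
print(calc_stats(model).T)  # fit indices such as CFI and RMSEA
```

A multidimensional item response theory model would ask a similar question (are the three practices statistically separable?) but at the level of individual item responses rather than indicator composites.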
At what stage does semantic priming affect accuracy in target search? In two experiments, participants viewed two streams of stimuli, each including a target word among distractors. Stimulus onset asynchronies (SOAs) between the targets (T1 and T2) ranged from 53 to 213 msec. A word semantically related to one or neither of the targets preceded each trial. In Experiment 1, participants were instructed to report both targets. Although more primed than unprimed targets were reported, there was no cost for unprimed words. A strong interaction between SOA and T1 versus T2 was found, but priming did not interact with either variable. In Experiment 2, only related targets were reported. Performance was similar to that for primed targets in Experiment 1. Semantic priming does not seem to modulate how attentional resources are initially allocated between targets, but instead affects a later stage of processing, the point at which a target word reaches lexical identification.
Online testing holds much promise for assessing students' complex science knowledge and inquiry skills. In the current study, we examined the comparative effectiveness of assessment tasks and test items presented in online modules that used either a static, active, or interactive modality. A total of 1,836 students from the classrooms of 22 middle school science teachers in 12 states participated in the study as part of normal classroom activities. Students took assessments in the three different modalities on three consecutive days. The assessments tested key concepts about ecosystems and students' ability to use inquiry skills in an ecosystems context. Our in-depth analyses focused on how the different modalities elicited specific content knowledge of ecosystems (e.g., producers, consumers, predator–prey relationships) and specific inquiry skills (e.g., designing and interpreting experiments). We also investigated student use of technology supports, such as replaying animations or inspecting graphs. The results showed that the interactive modality enabled the testing of more complex reasoning and that additional experience working in the online environment improved student performance for all the modalities, especially for the interactive modality. Each of the three modalities provided useful information about students' understanding of ecosystems and related inquiry skills as well as their misconceptions. The study begins to build a knowledge base of what types of science knowledge and skills may be effectively measured in three different modalities of online assessment.
Increasingly, student work is being conducted on computers and online, producing vast amounts of learning‐related data. The educational analytics fields have produced many insights about learning based solely on tutoring systems' automatically logged data, or “log data.” But log data leave out important contextual information about the learning experience. For example, a student working at a computer might be working independently with few outside influences. Alternatively, he or she might be in a lively classroom, with other students around, talking and offering suggestions. Tools that capture these other experiences have potential to augment and complement log data. However, the collection of rich, multimodal data streams and the increased complexity and heterogeneity in the resulting data pose many challenges to researchers. Here, we present two empirical studies that take advantage of multimodal data sources to enrich our understanding of student learning. We leverage and extend quantitative models of student learning to incorporate insights derived jointly from data collected in multiple modalities (log data, video, and high‐fidelity audio) and contexts (individual vs. collaborative classroom learning). We discuss the unique benefits of multimodal data and present methods that take advantage of such benefits while easing the burden on researchers' time and effort.
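The abstract does not name the specific student models that were extended; as one plausible example, the sketch below uses a Bayesian Knowledge Tracing style update and shows how a multimodal context flag (individual vs. collaborative work, as might be inferred from video or audio) could switch the model's parameters. The parameter values and the context-conditioning scheme are illustrative assumptions.

```python
# Sketch of a BKT-style update with context-dependent parameters (hypothetical values).

def bkt_update(p_know, correct, slip, guess, transit):
    """One step: revise P(skill known) from an observed response, then apply learning."""
    if correct:
        posterior = p_know * (1 - slip) / (p_know * (1 - slip) + (1 - p_know) * guess)
    else:
        posterior = p_know * slip / (p_know * slip + (1 - p_know) * (1 - guess))
    return posterior + (1 - posterior) * transit

# Context-specific parameters: collaborative work might show a different guess/slip/learning
# profile than individual work (values are placeholders, not estimates from the studies).
params = {
    "individual":    dict(slip=0.10, guess=0.20, transit=0.15),
    "collaborative": dict(slip=0.15, guess=0.30, transit=0.20),
}

p_know = 0.30  # prior probability the student already knows the skill
attempts = [(True, "individual"), (False, "collaborative"), (True, "collaborative")]
for correct, context in attempts:
    p_know = bkt_update(p_know, correct, **params[context])
    print(f"after {context} attempt (correct={correct}): P(known) = {p_know:.3f}")
```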