Today’s children have more opportunities than ever before to learn from interactive technology, yet experimental research assessing the efficacy of children’s learning from interactive media in comparison to traditional learning approaches is still quite scarce. Moreover, little work has examined the efficacy of using touch-screen devices for research purposes. The current study compared children’s rate of learning factual information about animals during face-to-face instruction from an adult female researcher versus analogous instruction from an interactive device. Eighty-six children ages 4 through 8 years (64% male) completed the learning task in either the Face-to-Face condition (n = 43) or the Interactive Media condition (n = 43). In the Learning Phase of the experiment, which was presented as a game, children were taught novel facts about animals without being told that their memory of the facts would be tested. The facts were taught to the children either by an adult female researcher (Face-to-Face condition) or by a pre-recorded female voice represented by a cartoon llama (Interactive Media condition). In the Testing Phase of the experiment that immediately followed, children’s memory for the taught facts was tested using a 4-option forced-choice paradigm. Children’s rate of learning was significantly above chance in both conditions, and a comparison of the rates of learning across the two conditions revealed no significant differences. Learning improved significantly from age 4 to age 8; however, even the preschool-aged children performed significantly above chance, and their performance did not differ between conditions. These results suggest that interactive media can be as effective as one-on-one instruction, at least under certain conditions. Moreover, these results offer support for the validity of using interactive technology to collect data for research purposes.
We discuss the implications of these results for children’s learning from interactive media, parental attitudes about interactive technology, and research methods.
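The "above chance" claim in the abstract rests on comparing observed accuracy against the 25% guessing baseline implied by the 4-option forced-choice test. A minimal sketch of that comparison, using an exact binomial upper-tail probability computed from the standard library (the 8-of-12 example score is a hypothetical illustration, not a figure from the study):

```python
from math import comb

def binom_sf(k, n, p):
    """Exact P(X >= k) for X ~ Binomial(n, p): probability of scoring
    at least k of n items correct by guessing alone."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical example: a child answers 8 of 12 test items correctly.
# With 4 answer options per item, chance performance is p = 0.25.
p_value = binom_sf(8, 12, 0.25)
print(f"P(>= 8/12 correct by guessing) = {p_value:.4f}")
```

A p-value well below .05 here would indicate performance unlikely to arise from guessing, which is the logic behind the abstract's above-chance comparisons.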
For believable character animation, skin deformation should communicate important deformation effects due to underlying muscle movement. Anatomical models that capture these effects are typically constructed from the inside out. Internal tissue is modeled by hand and a surface skin is attached to, or generated from, the internal structure. This paper presents an outside-in approach to anatomical modeling, in which we generate musculature from a predefined structure, which we conform to an artist-sculpted skin surface. Motivated by interactive applications, we attach the musculature to an existing control skeleton and apply a novel geometric deformation model to deform the skin surface to capture important muscle motion effects. Musculoskeletal structure can be stored as a template and applied to new character models. We illustrate the methodology, as integrated into a commercial character animation system, with examples driven by both keyframe animation and recorded motion data.
Determining how neurons transform synaptic input and encode information in action potential (AP) firing output is required for understanding dendritic integration, neural transforms and encoding. Limitations in the speed of imaging 3D volumes of brain encompassing complex dendritic arbors in vivo using conventional galvanometer mirror-based laser-scanning microscopy have hampered fully capturing fluorescent sensors of activity throughout an individual neuron's entire complement of synaptic inputs and somatic APs. To address this problem, we have developed a two-photon microscope that achieves high-speed scanning by employing inertia-free acousto-optic deflectors (AODs) for laser beam positioning, enabling random-access sampling of hundreds to thousands of points-of-interest restricted to a predetermined neuronal structure, avoiding wasted scanning of surrounding extracellular tissue. This system is capable of comprehensive imaging of the activity of single neurons within the intact and awake vertebrate brain. Here, we demonstrate imaging of tectal neurons within the brains of albino Xenopus laevis tadpoles labeled using single-cell electroporation for expression of a red space-filling fluorophore to determine dendritic arbor morphology, and either the calcium sensor jGCaMP7s or the glutamate sensor iGluSnFR as indicators of neural activity. Using discrete, point-of-interest scanning we achieve sampling rates of 3 Hz for saturation sampling of entire arbors at 2 µm resolution, 6 Hz for sequentially sampling 3 volumes encompassing the dendritic arbor and soma, and 200-250 Hz for scanning individual planes through the dendritic arbor. This system allows investigations of sensory-evoked information input-output relationships of neurons within the intact and awake brain.
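The speed advantage of random-access scanning comes from a simple time budget: the frame rate is set by the number of visited points times the per-point time (AOD settling plus photon-collection dwell), rather than by the full raster area. A minimal back-of-the-envelope sketch (the point count, access time, and dwell time below are illustrative assumptions, not parameters reported for this system):

```python
def random_access_frame_rate(n_points, access_us, dwell_us):
    """Achievable scan repetition rate (Hz) when only selected points
    of interest on the neuronal structure are visited.

    n_points  -- number of points of interest
    access_us -- AOD settling time between arbitrary points (microseconds)
    dwell_us  -- photon-collection time at each point (microseconds)
    """
    time_per_point_s = (access_us + dwell_us) * 1e-6
    return 1.0 / (n_points * time_per_point_s)

# Assumed example: 1,500 points at ~220 microseconds total per point.
rate_hz = random_access_frame_rate(1500, access_us=20, dwell_us=200)
print(f"{rate_hz:.1f} Hz")
```

Under these assumed numbers the budget lands near the few-hertz regime quoted for whole-arbor sampling, and it makes clear why restricting planes or point counts (as in the 200-250 Hz single-plane mode) raises the rate proportionally.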
Figure 1: Defining a nonlinear projection of a 3D scene. Left: Original scene from a default view showing sketched 3D features. Right: The new, nonlinear projection. Two curve constraints are used to bow the side walls, and a point constraint is used to warp the back wall.

Linear perspective is a good approximation to the format in which the human visual system conveys 3D scene information to the brain. Artists expressing 3D scenes, however, create nonlinear projections that balance their linear perspective view of a scene with elements of aesthetic style, layout and relative importance of scene objects. Manipulating the many parameters of a linear perspective camera to achieve a desired view is not easy. Controlling and combining multiple such cameras to specify a nonlinear projection is an even more cumbersome task. This paper presents a direct interface, where an artist manipulates in 2D the desired projection of a few features of the 3D scene. The features represent a rich set of constraints which define the overall projection of the 3D scene. Desirable properties of local linear perspective and global scene coherence drive a heuristic algorithm that attempts to interactively satisfy the given constraints as a weighted average of projections from a minimal set of linear perspective cameras. This paper shows that 2D feature constraints are a direct and effective approach to control both the 2D layout of scene objects and the conceptually complex, high dimensional parameter space of nonlinear scene projection.
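The core mechanism described above, a nonlinear projection expressed as a weighted average of several linear perspective projections, can be sketched in a few lines. This is an illustrative simplification, not the paper's algorithm: the camera matrices and uniform weights below are placeholders, whereas in the paper the cameras and per-point weights are derived from the artist's 2D feature constraints.

```python
import numpy as np

def project(P, X):
    """Project homogeneous 3D point X (4-vector) through a 4x4 linear
    perspective camera matrix P, returning 2D image coordinates."""
    x = P @ X
    return x[:2] / x[3]          # perspective divide

def blended_projection(cameras, weights, X):
    """Nonlinear projection of X as a weighted average of linear
    perspective projections. `cameras` is a list of 4x4 matrices,
    `weights` the (unnormalized) blend weights for this point."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                   # normalize
    pts = np.array([project(P, X) for P in cameras])  # one 2D point per camera
    return (w[:, None] * pts).sum(axis=0)

# Two toy perspective cameras differing only in focal length.
P1 = np.array([[1., 0, 0, 0], [0, 1., 0, 0], [0, 0, 1., 0], [0, 0, 1., 0]])
P2 = np.array([[2., 0, 0, 0], [0, 2., 0, 0], [0, 0, 1., 0], [0, 0, 1., 0]])
X = np.array([1., 2., 4., 1.])
print(blended_projection([P1, P2], [1, 1], X))
```

Varying the weights per scene point is what bends straight lines in the composite image, producing effects like the bowed walls in Figure 1 while each local neighborhood still resembles a linear perspective view.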