A head camera was used to examine the visual correlates of object name learning in toddlers as they played with novel objects and as their parents spontaneously named those objects. The toddlers' learning of the object names was tested after play, and the visual properties of the head-camera images during naming events associated with learned and unlearned object names were analyzed. Naming events associated with learning had a clear visual signature: the visual information itself was clean, and visual competition among objects was minimized. Moreover, for learned object names, the visual advantage of the named target over competitors was sustained both before and after the heard name. The findings are discussed in terms of the visual and cognitive processes that may depend on clean sensory input for learning, and in terms of the sensory-motor, cognitive, and social processes that may create these optimal visual moments for learning.
Human toddlers learn about objects through second-by-second, minute-by-minute sensory-motor interactions. In an effort to understand how toddlers' bodily actions structure the visual learning environment, mini video cameras were placed low on the foreheads of toddlers, and for comparison also on the foreheads of their parents, as they jointly played with toys. Analyses of the head-camera views indicate visual experiences with profoundly different dynamic structures. The toddler view often consists of a single dominating object that is close to the sensors and thus blocks the view of other objects, such that individual objects go in and out of view. The adult view, in contrast, is broad and stable, with all potential targets continually in view. These differences may arise for several developmentally relevant reasons, including the small visuo-motor workspace of the toddler (short arms) and the engagement of the whole body when actively handling objects.
Two experiments examined developmental changes in children's visual recognition of common objects between 18 and 24 months of age. Experiment 1 examined children's ability to recognize common category instances presented with three different kinds of information: (1) richly detailed and prototypical instances that presented local and global shape information along with color, texture, and surface-feature information; (2) the same rich and prototypical shapes but no color, texture, or surface-feature information; or (3) instances that presented only abstract and global representations of object shape in terms of geometric volumes. Significant developmental differences were observed only for the abstract shape representations in terms of geometric volumes, the kind of shape representation that has been hypothesized to underlie mature object recognition. Further, these differences were strongly linked, in individual children, to the number of object names in their productive vocabulary. Experiment 2 replicated these results and showed further that the less advanced children's object recognition was based on the piecemeal use of individual features and parts rather than overall shape. The results provide further evidence for significant and rapid developmental changes in object recognition during the same period in which children first learn object names. The implications of the results for theories of visual object recognition, the relation of object recognition to category learning, and underlying developmental processes are discussed.
Object recognition depends on the seen views of objects. These views depend in part on the perceivers' own actions as they select and show object views to themselves. The self-selection of object views through manual exploration of objects during infancy and childhood may be particularly informative about the human object recognition system and its development. Here, we report for the first time on the structure of object views generated by 12- to 36-month-old children (N = 54), and by adults (N = 17) for comparison, during manual and visual exploration of objects. Object views were recorded via a tiny video camera placed low on the participant's forehead. The findings indicate two viewing biases that grow rapidly in the first three years: a bias for planar views and a bias for views of objects in an upright position. These biases also strongly characterize adult viewing. We discuss the implications of these findings for a developmentally complete theory of object recognition.
The degree to which romantic partners' autonomic responses are coordinated, represented by their pattern of physiological synchrony, seems to capture important aspects of the reciprocal influence and co-regulation between spouses. In this study, we analyzed couples' cardiac synchrony as measured by heart rate (HR) and heart rate variability (HRV). A sample of 27 couples (N = 54) performed a structured interaction task in the lab in which they discussed positive and negative aspects of the relationship. During the interaction, their cardiac measures (HR and HRV) were recorded using the BIOPAC System. Additional assessment, prior to the lab interaction task, included self-report measures of empathy (Interpersonal Reactivity Index and Interpersonal Reactivity Index for Couples) and relationship satisfaction (Revised Dyadic Adjustment Scale). Synchrony computation was based on the windowed cross-correlation of both partners' HR and HRV time series. In order to control for random synchrony, surrogate controls were created using segment-wise shuffling. Our results confirmed the presence of cardiac synchrony during the couples' interactions when compared to surrogate testing. Specifically, we found evidence for negative (antiphase) synchrony of couples' HRV and positive (in-phase) synchrony of HR. Further, both HRV and HR synchronies were associated with several dimensions of the self-report data. This study suggests that cardiac synchrony, particularly the direction of the covariation in the partners' physiological time series, may have an important relational meaning in the context of marital interactions.
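The synchrony computation described in this abstract, windowed cross-correlation with segment-wise shuffled surrogates, can be sketched as follows. This is a minimal illustration in Python/NumPy; the window length, step, maximum lag, and segment size are illustrative assumptions, not the study's actual settings.

```python
import numpy as np

def windowed_xcorr(x, y, win=30, step=5, max_lag=10):
    """Signed peak cross-correlation between two time series,
    computed in sliding windows over a small range of lags."""
    peaks = []
    for start in range(0, len(x) - win + 1, step):
        xw = x[start:start + win]
        yw = y[start:start + win]
        best = 0.0
        for lag in range(-max_lag, max_lag + 1):
            if lag < 0:
                a, b = xw[:lag], yw[-lag:]
            elif lag > 0:
                a, b = xw[lag:], yw[:len(yw) - lag]
            else:
                a, b = xw, yw
            denom = np.std(a) * np.std(b)
            if denom > 0:
                r = np.mean((a - a.mean()) * (b - b.mean())) / denom
                if abs(r) > abs(best):
                    best = r
        peaks.append(best)
    return np.array(peaks)

def surrogate_synchrony(x, y, seg=30, n=100, rng=None):
    """Null distribution for chance synchrony: shuffle y in segments
    to break its temporal alignment with x while keeping local structure."""
    rng = np.random.default_rng(rng)
    segs = [y[i:i + seg] for i in range(0, len(y), seg)]
    out = []
    for _ in range(n):
        order = rng.permutation(len(segs))
        y_surr = np.concatenate([segs[i] for i in order])
        out.append(np.mean(windowed_xcorr(x, y_surr)))
    return np.array(out)
```

Observed synchrony is then compared against the surrogate distribution: a positive peak correlation indicates in-phase coupling, a negative one antiphase coupling.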
We measured turn-taking in terms of hand and head movements and asked whether the global rhythm of the participants' body activity relates to word learning. Six dyads composed of parents and toddlers (M = 18 months) interacted in a tabletop task while wearing motion-tracking sensors on their hands and heads. Parents were instructed to teach the labels of 10 novel objects, and the child was later tested on a name-comprehension task. Using dynamic time warping, we compared the motion data of all body-part pairs, within and between partners. For every dyad, we also computed an overall measure of the quality of the interaction that takes into account the state of the interaction when the parent uttered an object label and the overall smoothness of the turn-taking. The overall interaction quality measure was correlated with the total number of words learned. In particular, head movements were inversely related to the other partner's hand movements, and the degree of bodily coupling of parent and toddler predicted the words that children learned during the interaction. The implications of joint body dynamics for understanding joint coordination of activity in a social interaction, its scaffolding effect on the child's learning, and its use in the development of artificial systems are discussed.
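The pairwise comparison of body-part motion described here rests on dynamic time warping (DTW), which aligns two movement series that unfold at different speeds. A minimal sketch of the classic DTW recurrence, assuming 1-D movement magnitudes and absolute difference as the local cost (the study's actual features and cost function are not specified in the abstract):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping cost between two 1-D movement series.
    D[i, j] holds the minimal cumulative cost of aligning a[:i] with b[:j]."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible alignment steps.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Applied to every body-part pair within and between partners, the resulting DTW costs give a matrix of coupling strengths: the lower the cost, the more tightly the two movement streams track each other in time.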
The problem of supplier selection can be easily modeled as a multiple-criteria decision making (MCDM) problem: businesses express their preferences with respect to suppliers, which can then be ranked and selected. This approach has two major pitfalls: first, it does not consider a dynamic scenario in which suppliers and their ratings are constantly changing; second, it addresses the problem only from the point of view of a single business and cannot be easily applied when considering more than one business. To overcome these problems, we introduce a method for supplier selection that builds upon the dynamic MCDM framework of Campanella and Ribeiro [1] and, by means of a linear programming model, can be used in the case of multiple collaborating businesses planning their next batch of orders together.
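The two-stage structure described here, dynamic MCDM ratings feeding an allocation model, can be sketched as follows. All numbers are hypothetical, the feedback step is a simplified stand-in for the Campanella and Ribeiro framework, and the allocation uses a greedy fill, which is optimal for this simple single-demand, capacity-bounded case in place of a full linear program:

```python
import numpy as np

# Hypothetical data: rows = suppliers, columns = criteria, scores in [0, 1].
scores = np.array([[0.9, 0.6, 0.7],
                   [0.5, 0.8, 0.9],
                   [0.7, 0.7, 0.4]])
weights = np.array([0.5, 0.3, 0.2])        # criterion weights, sum to 1
rating = scores @ weights                  # static MCDM rating per supplier

# Dynamic step (illustrative): blend this round's rating with the previous
# round's, so that past performance feeds back into the current ranking.
prev_rating = np.array([0.6, 0.7, 0.8])
alpha = 0.7                                # weight on the current round
rating = alpha * rating + (1 - alpha) * prev_rating

# Allocate a joint order of 100 units across suppliers, respecting each
# supplier's capacity and favouring higher-rated suppliers first.
capacity = np.array([60, 50, 40])
demand = 100
allocation = np.zeros(len(capacity))
for i in np.argsort(-rating):              # best-rated supplier first
    allocation[i] = min(capacity[i], demand - allocation.sum())
    if allocation.sum() >= demand:
        break
```

With multiple collaborating businesses, the allocation step would instead become a shared linear program whose objective aggregates every business's ratings, which is where an LP solver replaces the greedy loop.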
An important goal in studying both human intelligence and artificial intelligence is to understand how a natural or an artificial learning system deals with the uncertainty and ambiguity of the real world. For a natural intelligence system such as a human toddler, the relevant aspects of a learning environment are only those that make contact with the learner's sensory system. In real-world interactions, what the child perceives critically depends on his own actions, as these actions bring information into and out of the learner's sensory field. The present analyses indicate how, in the case of a toddler playing with toys, these perception-action loops may simplify the learning environment by selecting relevant information and filtering out irrelevant information. This paper reports new findings using a novel method that describes the visual learning environment from a young child's point of view and measures the visual information that a child perceives in real-time toy play with a parent. The main results are: 1) what the child perceives depends primarily on his own actions, but also on his social partner's actions; 2) manual actions, in particular, play a critical role in creating visual experiences in which one object dominates; 3) this selecting and filtering of visual objects through the actions of the child provides more constrained and clean input that seems likely to facilitate cognitive learning processes. These findings have broad implications for how one studies and thinks about human and artificial learning systems.