Research on emotion recognition has been dominated by studies of photographs of facial expressions. A full understanding of emotion perception and its neural substrate will require investigations that employ dynamic displays and means of expression other than the face. Our aims were: (i) to develop a set of dynamic and static whole-body expressions of basic emotions for systematic investigations of clinical populations, and for use in functional-imaging studies; (ii) to assess forced-choice emotion-classification performance with these stimuli relative to the results of previous studies; and (iii) to test the hypotheses that more exaggerated whole-body movements would produce (a) more accurate emotion classification and (b) higher ratings of emotional intensity. Ten actors portrayed 5 emotions (anger, disgust, fear, happiness, and sadness) at 3 levels of exaggeration, with their faces covered. Two identical sets of 150 emotion portrayals (full-light and point-light) were created from the same digital footage, along with corresponding static images of the 'peak' of each portrayal. Recognition tasks confirmed previous findings that basic emotions are readily identifiable from body movements, even when static form information is minimised by use of point-light displays, and that full-light and even point-light stills can convey identifiable emotions, though rather less efficiently than dynamic displays. Recognition success differed for individual emotions, corroborating earlier results on the importance of distinguishing differences in movement characteristics for different emotional expressions. The patterns of misclassification were in keeping with earlier findings on emotional clustering.
Exaggeration of body movement (a) enhanced recognition accuracy, especially for the dynamic point-light displays, but notably not for sadness, and (b) produced higher emotional-intensity ratings, regardless of lighting condition, for movies but to a lesser extent for stills, indicating that intensity judgments of body gestures rely more on movement (or form-from-movement) than on static form information.
It is well known that biological motion, as produced by point-light displays of a moving human body, gives a good representation of the represented body, eg its gender and the nature of the task in which it is engaged. The question is whether it is possible to judge the emotional state of a human body from motion information alone. An ability to make this kind of judgment may imply that people can perceive emotion from patterns of movement without having to compute the detailed shape first. Subjects were shown brief video clips of two trained dancers (one male, one female). The dancers aimed to convey the following emotions: fear, anger, grief, joy, surprise, and disgust. The video clips portrayed fully lit scenes and point-light scenes, with thirteen small points of light attached to the body of each dancer. Half the stimuli were presented the right way up, while half were inverted. The subjects' task was to judge which emotion was being portrayed. Full-body clips gave good recognition of emotionality (88% correct), and the results for upright biological-motion displays were also significantly above chance (63% correct). Inversion of the display reduced biological-motion (but not full-body) performance to close to chance, though it remained significantly above it. A space-time analysis of the motion of the points of light was carried out and related to the discriminability of the different emotions. Biological-motion displays, which convey no information while static, are able to give a rich description of the subject matter, including the ability to judge emotional state. This ability is disrupted when the image is inverted.
Johansson filmed walkers and runners in a dark room with lights attached to their main joints and demonstrated that such moving light spots were perceived as human movements. To extend this finding, the detection and recognition of Johansson displays of different kinds of movements under three light-spot conditions were studied, to determine how human actions are perceived on the basis of biological-motion information. Locomotory, instrumental, and social actions were presented in each condition, namely normal Johansson (lights attached to joints), inter-joint (lights attached between joints), and upside-down Johansson. Subjects' verbal responses and recognition times were measured. Locomotory actions were recognised better and faster than social and instrumental actions. Furthermore, biological motions were recognised much better and faster when the light-spot displays were presented in the normal orientation rather than upside down. Recognition rate was only slightly impaired under the inter-joint condition. It is argued that the perceptual analysis of actions and movements starts primarily on an intermediate level of action coding and comprises more than just the similarity of movement patterns or simple structures. Additionally, coding of dynamic phase relations and semantic coding take place at very early stages of the processing of biological motion. Implications of these results for computer vision, perceptual models, and mental representations are discussed.
A series of experiments was performed to investigate how motion sequences provide information about the intentional structure of moving figures or actors. Observers had to detect simulations of biologically meaningful motion within a set of moving letters. In the first two experiments a factorial design was used, with type of instruction as a between-subject factor and six movement parameters (number of items, speed and directness of target and distractors, and 'relentlessness' of target movement) as within-subject factors; in the final two experiments, the visibility of the goal towards which the target moved and the use of a tracking movement to distinguish the target were varied. In such displays search time increases with increasing number of stimuli. It was found that (a) the more direct the motion, the more likely it was to be interpreted as intentional; (b) intentional motion was much easier to detect when the target moved faster than the distractors than when it moved more slowly; (c) recognition of intentionality was impaired but not abolished if the goal towards which the target was moving was invisible; and (d) participants did not report intentional movement when the target was distinguished by brightness rather than by the manner in which it moved. We argue that the perception of intentionality is strongly related to observers' use of conceptual knowledge, which in turn is activated by particular combinations of features. This supports a process model in which intentionality is seen as the result of a conceptual integration of objective visual features.
We report the primary sequence of TASK-4, a novel member of the acid-sensitive subfamily of tandem pore K+ channels. TASK-4 transcripts are widely expressed in humans, with highest levels in liver, lung, pancreas, placenta, aorta and heart. In Xenopus oocytes TASK-4 generated K+ currents displaying a marked outward rectification which was lost upon elevation of extracellular K+. TASK-4 currents were efficiently blocked by barium (83% inhibition at 2 mM), only weakly inhibited by 1 mM concentrations of quinine, bupivacaine and lidocaine, but not blocked by tetraethylammonium, 4-aminopyridine and Cs+. TASK-4 was sensitive to extracellular pH, but in contrast to other TASK channels, pH sensitivity was shifted to more alkaline pH. Thus, TASK-4 in concert with other TASK channels might regulate cellular membrane potential over a wide range of extracellular pH.
In three experiments, pigeons were exposed to a discriminated autoshaping procedure in which categories of moving stimuli, presented on videotape, were differentially associated with reinforcement. All stimuli depicted pigeons making defined responses. In Experiment 1, one category consisted of several different scenes of pecking and the other consisted of scenes of walking, flying, head movements, or standing still. All 4 birds for which pecking scenes were positive stimuli discriminated successfully, whereas only 1 of the 4 for which pecking was the negative category did so. In the pecking-positive group, there were differences between the pecking rates in the presence of the four negative actions, and these differences were consistent across subjects. In Experiment 2, only the categories of walking and pecking were used; some but not all birds learned this discrimination, whichever category was positive, and these birds showed some transfer to new stimuli in which the same movements were represented only by a small number of point lights (Johansson's "biological motion" displays). In Experiment 3, discriminations between pecking and walking movement categories using point-light displays were trained. Four of the 8 birds discriminated successfully, but transfer to fully detailed displays could not be demonstrated. Pseudoconcept control groups, in which scenes from the same categories of motion were used in both the positive and negative stimulus sets, were used in Experiments 1 and 3. None of the 8 pigeons trained under these conditions showed discriminative responding. The results suggest that pigeons can respond differentially to moving stimuli on the basis of movement cues alone.