Humans can visually estimate the mechanical properties of deformable objects (e.g., cloth stiffness). While much of the recent work on material perception has focused on static image cues (e.g., textures and shape), little is known about whether humans can integrate information over time to make a judgment. Here, we investigate the effect of spatiotemporal information across multiple frames (multi-frame motion) on estimating the bending stiffness of cloth. Using high-fidelity cloth animations, we first examined how the perceived bending stiffness changed as a function of the physical bending stiffness defined in the simulation model. Using maximum likelihood difference scaling (MLDS), we found that the perceived stiffness and the physical bending stiffness were highly correlated. A second experiment, in which we scrambled the frame sequences, diminished this correlation, suggesting that multi-frame motion plays an important role. To provide further evidence for this finding, we extracted dense motion trajectories from the videos across 15 consecutive frames and used the trajectory descriptors to train a machine-learning model on the measured perceptual scales. The model can predict human perceptual scales in new videos with varied winds, optical properties of cloth, and scene setups. When the correct multi-frame motion was removed (by training the model on either scrambled videos or two-frame optical flow), the predictions significantly worsened. Our findings demonstrate that multi-frame motion information is important for both humans and machines to estimate mechanical properties. In addition, we show that dense motion trajectories are effective features for building a successful automatic cloth-estimation system.
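The multi-frame trajectories described above can be illustrated with a minimal sketch: given per-frame optical-flow fields, seed a grid of points and chain the flow vectors frame to frame to obtain 15-frame tracks. This is a simplified, hypothetical version of dense-trajectory tracking (the function name, grid spacing, and nearest-pixel flow sampling are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def chain_trajectories(flows, step=8):
    """Chain per-frame optical-flow fields into multi-frame point tracks.

    flows: list of (H, W, 2) arrays; flows[t][y, x] is the (dx, dy)
           displacement of pixel (x, y) from frame t to frame t+1.
    step:  spacing of the seed-point grid, in pixels.
    Returns an array of shape (num_points, len(flows) + 1, 2) holding
    the (x, y) position of each tracked point in every frame.
    """
    H, W, _ = flows[0].shape
    ys, xs = np.mgrid[step // 2:H:step, step // 2:W:step]
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    traj = [pts.copy()]
    for flow in flows:
        # Sample the flow at the nearest pixel to each current position,
        # then advect the point by that displacement.
        xi = np.clip(np.round(pts[:, 0]).astype(int), 0, W - 1)
        yi = np.clip(np.round(pts[:, 1]).astype(int), 0, H - 1)
        pts = pts + flow[yi, xi]
        traj.append(pts.copy())
    return np.stack(traj, axis=1)

# Toy example: 14 flow fields -> 15-frame trajectories.
# Every pixel shifts +1 px in x per frame (a uniform translation).
flows = [np.dstack([np.ones((32, 32)), np.zeros((32, 32))]) for _ in range(14)]
tracks = chain_trajectories(flows)
```

Descriptors computed along such tracks (e.g., displacement sequences) could then serve as the features fed to a regression model.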
Cloth is a common material, and humans can visually estimate its mechanical properties by observing how it deforms under external forces. Here, we ask whether and how dynamic deformation affects the perception of the mechanical properties of cloth. In Experiment 1, we find that both intrinsic mechanical properties and optical properties affect stiffness perception when the stimuli are presented as images. By contrast, in videos, humans can partially discount the effect of optical appearance and exhibit higher sensitivity to stiffness. We further identify an idiosyncratic deformation pattern (i.e., movement uniformity) that differentiates stiffness and can be reliably measured by six optical flow features. In Experiment 2, we isolate the deformation by creating dynamic dot stimuli from the 3-D mesh of the cloth. We directly alter the movement pattern by manipulating the uniformity of the displacement vectors on the dot stimuli and show that changing the pattern of dynamic deformation alone can alter the perceived stiffness of cloth in a variety of scene setups. Furthermore, by analyzing optical flow fields extracted from the manipulated dynamic dot stimuli, we confirm that the same six optical flow features can be diagnostic of the degree of stiffness of moving cloth across different scenes. Overall, our study demonstrates that manipulating patterns of dynamic deformation alone can elicit the impression of cloth with varying stiffness, suggesting that the human visual system might rely on idiosyncratic patterns of dynamic deformation to estimate stiffness.
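The abstract does not specify the six optical flow features, but the idea of "movement uniformity" can be sketched with one hypothetical statistic: the ratio of the magnitude of the mean flow vector to the mean flow magnitude. When all pixels move together (rigid-like, stiff motion) the ratio is near 1; when motion directions are scattered (floppy cloth) it is near 0. This is an illustrative assumption, not one of the paper's measured features:

```python
import numpy as np

def flow_uniformity(flow):
    """Hypothetical movement-uniformity statistic for one flow field.

    flow: (H, W, 2) array of per-pixel (dx, dy) displacement vectors.
    Returns ||mean vector|| / mean(||vector||), in [0, 1].
    """
    vecs = flow.reshape(-1, 2)
    mean_mag = np.linalg.norm(vecs, axis=1).mean()
    if mean_mag == 0:
        return 1.0  # no motion at all: trivially uniform
    return float(np.linalg.norm(vecs.mean(axis=0)) / mean_mag)

# Rigid-like motion: every pixel translates right by 1 px -> ratio near 1.
rigid = np.dstack([np.ones((16, 16)), np.zeros((16, 16))])

# Floppy motion: random, scattered displacement directions -> ratio near 0.
rng = np.random.default_rng(0)
floppy = rng.normal(size=(16, 16, 2))
```

A stiffness classifier could threshold such statistics pooled over frames; the actual features in the study would need to be taken from the full paper.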