Our method generates multi-view depth maps and silhouettes and applies a rendering function to obtain the 3D shapes. Right: Our framework also extends to reconstructing 3D shapes from single- or multi-view depth maps or silhouettes.
We typically think of intuitive physics in terms of high-level cognition, but might aspects of physics also be extracted during lower-level visual processing? Might we not only think about physics, but also see it? We explored this using multiple tasks in online adult samples with objects covered by soft materials, as when you see a chair with a blanket draped over it, where you must account for the physical interactions between cloth, gravity, and object. In multiple change-detection experiments (n = 200), observers from an online testing marketplace were better at detecting image changes involving underlying object structure than those involving only the superficial folds of the cloth, even when the latter changes were more extreme along several dimensions. And in probe-comparison experiments (n = 100), performance was worse when both probes (vs. only one) appeared on image regions reflective of underlying object structure (equating visual properties). Collectively, this work shows how vision uses intuitive physics to recover the deeper underlying structure of scenes.
This project attempts to redefine an augmented reality (AR) architectural concept of traversing the filmic space as a method for a new remote navigational interface. Through Panohaptic visualization, one is invited to experience a soft architectural space in an improvisational manner, connecting the physical optic world to the haptic through AR and digital filmic media. The main goal is to use this interface in a real-time environment to control film's time and space in order to manifest a new imaginative architectural situation. Remote physical interaction is achieved using optical tracking and multi-touch control through LED gloves, as used in visual arts and performance events. This interface creates a new topology within the realm of cinematic architecture, attempting to bridge haptic and optic vision.
Aerial photography has been the leading method for collecting and mapping information via remote sensing from environments such as cities. Usually, the qualitative analysis of the images is performed by human observation, in the form of descriptive pattern recognition and manual spatial association. For many years these techniques have provided unique means of remote sensing, whether through software analysis of photographic or satellite data. However, they have almost always been recorded from high altitudes, predominantly by airplane, helicopter, or satellite, and the resulting graphical product, such as a Google map, is disembodied and detached from the real visible qualities evident at human scale. Such maps reduce city spaces to densities and statistics (Penz 2010). The purpose of this study is to reuse this established technology to reintroduce visualisation techniques useful for the perception of the city and its architectural spaces. It employs cinematic modes of representation, particularly the moving image and the still image, to recognize aesthetics, density, and other qualitative information from a low altitude, where the images can be embodied, revealing their sensory qualities such as haptic and other modalities.