“…Medial-axis representations have gained currency in computer vision research (e.g., Bai & Latecki, 2008; Liu & Geiger, 1999; Sebastian & Kimia, 2005; Siddiqi & Pizer, 2008; Zhu & Yuille, 1996). In addition, results from behavioral and functional neuroimaging studies have been adduced in support of the view that medial axes play a role in shape perception (e.g., Firestone & Scholl, 2014; Harrison & Feldman, 2009; Hung, Carlson, & Connor, 2012; Kovács et al., 1998; Lee et al., 1994; Lescroart & Biederman, 2013; Lowet et al., 2018; Palmer & Guidi, 2011; van Tonder et al., 2002; Wilder et al., 2011).…”
A central goal in research on visual perception is to understand how the visual system represents the shapes of objects. According to many theorists, axes defined on the basis of object geometry provide a coordinate system for representing the locations and orientations of object parts. An important question that has received little attention concerns how object axes are defined—that is, what aspects of object geometry determine how axes are assigned to shapes? We evaluated 2 hypotheses. According to the elongated-part hypothesis, axes are defined on the basis of an object’s most elongated part, such that, for example, the principal axis for a hatchet would coincide with the long axis of the hatchet’s handle. In contrast, the global-shape hypothesis holds that axes are defined on the basis of an object’s overall shape (e.g., for the hatchet, as the longest axis that spans the entire hatchet). Using a novel paradigm involving analysis of mirror-image confusions, we obtained evidence strongly supporting the elongated-part hypothesis. Our results also point to a role for secondary as well as principal axes in object shape representation.
We explore the concept of abstraction as it is used in visualization, with the ultimate goal of understanding and formally defining it. Researchers have so far used the concept of abstraction largely by intuition, without a precise meaning. This lack of specificity has left questions about the characteristics of abstraction, its variants, its control, and its ultimate potential for visualization—and, in particular, illustrative visualization—mostly unanswered. In this paper we thus provide a first formalization of the abstraction concept and discuss how this formalization affects the application of abstraction in a variety of visualization scenarios. Based on this discussion, we derive a number of open questions still waiting to be answered, thus formulating a research agenda for the use of abstraction in the visual representation and exploration of data. This paper is therefore intended as a contribution to the discussion of the theoretical foundations of our field, rather than an attempt to provide a complete and final theory.
“…Since the configuration is defined by the smooth transformation of a virtual polygon, these results shed light on how vision persistently represents 3D rigid shapes with various deformations. Interestingly, studying deformable shapes is also a major research topic in the computer vision community (e.g., Siddiqi & Pizer, 2008). The challenge of representing a shape persistently is that the same 3D object’s projection onto 2D images can change dramatically due to the motion of the object or the observer.…”
Visual working memory is highly sensitive to global configurations in addition to the features of each object. When objects move, their configuration varies correspondingly. In this study, we explored the geometric rules governing the maintenance of a dynamic configuration in visual working memory. Our investigation is guided by Klein's Erlangen program, a hierarchy of geometric stability that includes affine, projective, and topological invariants. In a change-detection task, memory displays were categorized by which geometric invariance was violated by the objects' motions. The results showed that (a) there was no decrement in memory performance until the projective invariance was violated, (b) more dramatic changes (such as a topological change) did not further enlarge the decrement, and (c) objects causing the violation of projective invariance were better encoded into memory. These results collectively demonstrate that projective invariance is the only geometric property determining the maintenance of a dynamic configuration in visual working memory.