I obtained my Ph.D. from the Department of Cognitive Sciences at the University of California, Irvine, in 1998. After spending three years as a post-doctoral researcher in the Department of Brain & Cognitive Sciences at MIT, I joined the Department of Psychology at Rutgers, New Brunswick, in 2001. I am in the Cognitive area of the Psychology Department, and a member of the Rutgers Center for Cognitive Science (RuCCS).
Research
Although it is a basic fact to perceptual scientists, it sometimes comes as a surprise to those who have not previously considered the problem that the world as we perceive it — including the objects and surfaces around us, their shapes and colors, and their layout in space — is not directly 'given' to our brains, but must be painstakingly computed from the pattern of light impinging on the retinas. The phenomenological ease of our perceptual experience belies the complexity of the computations that underlie visual perception and cognition.
My research is concerned with the visual perception of objects and surfaces. I am primarily interested in how the brain represents geometric (or shape) information, and what implications this has for cognition more generally. I am also interested in how the brain generates "layered" representations of surfaces, in the contexts of partial occlusion and transparency — where two surfaces are represented along a single line of sight, one extending behind the other. My research employs a two-pronged approach involving (i) psychophysical experiments on human subjects that investigate how people perceive and represent geometric structure, and (ii) computational models — generally probabilistic in nature — that account for these abilities.
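By "probabilistic" I mean, roughly, models that treat perception as a form of statistical inference: the visual system is taken to infer a scene interpretation S from the retinal image I, for example via a posterior of the form

    p(S | I) ∝ p(I | S) · p(S),

where the likelihood p(I | S) captures how a candidate scene would project to the image and the prior p(S) captures regularities of the natural world. (This is only a generic schema, not any one specific model; the particular formulations vary from problem to problem below.)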
Specific topics include:
(1) Part-based representation of shape: The overarching hypothesis guiding this work is that the visual system decomposes shapes into smaller units — semi-independent parts that are likely to have functional significance — describes these parts in terms of their axial structure, and organizes the entire shape's representation hierarchically, in terms of these parts and their spatial relationships. (It should be noted that this hypothesis is independent of whether or not shape representations are volumetric and/or viewpoint invariant.) Thus, rather than being template-like, the visual representation of shape is highly structured. My work addresses the questions of how part-based representations are computed from an object's geometry, how shape representations are organized in terms of parts, and what implications such representations have for various perceptual tasks.
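To give a rough sense of what an "axial" description of a part looks like, the sketch below extracts the medial-axis skeleton of a simple two-part silhouette. It uses off-the-shelf skeletonization from scikit-image (medial_axis) purely for illustration; it is not the representational model investigated in my work, which concerns how such axial, part-based structure is computed and organized by the visual system.

```python
# Illustrative only: extract the medial axis (skeleton) of a simple
# two-part silhouette -- a base with a protruding limb -- using
# standard image-processing tools (not a model of human vision).
import numpy as np
from skimage.morphology import medial_axis

# Build a binary silhouette: a horizontal base with a vertical protrusion.
shape = np.zeros((60, 100), dtype=bool)
shape[35:55, 10:90] = True   # base part
shape[5:35, 40:55] = True    # protruding part (a "limb")

# medial_axis returns the skeleton and, optionally, the distance transform,
# which gives the local radius (half-width) of the shape at each skeleton point.
skeleton, distance = medial_axis(shape, return_distance=True)

# The skeleton yields roughly one axial branch per part; the distance values
# along each branch describe how that part's width varies along its axis.
print("skeleton pixels:", int(skeleton.sum()))
print("max local radius along skeleton:", float((distance * skeleton).max()))
```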
(2) Visual shape completion: Portions of object boundaries are invariably missing in the retinal images, e.g., whenever an object happens to be partly occluded behind another. Coherent objects in the world thus often project onto the retinas as disparate image fragments. Despite this, we, as human observers, perceive these objects as complete and continuous — not fragmented. In order to make this possible, the brain must (i) actively group image fragments that are likely to belong to a single extended object (the grouping problem), and (ii) complete the shape of an object's boundary in the missing portions (the shape problem). My work addresses both of these problems by investigating the geometric properties that the visual system uses to initiate shape completion, and by measuring the shape that people perceive in regions where it is "missing" (due to partial occlusion, or camouflage — as in the case of "illusory contours").
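As a toy illustration of the shape problem, the sketch below interpolates a smooth curve across an occluded gap, given the positions and tangent directions at the two visible contour endpoints. It uses a simple cubic Hermite segment, chosen only for concreteness; it is not the (probabilistic) completion model tested in my experiments, and the endpoints and tangents shown are made-up values.

```python
# Toy illustration of contour completion across an occluded gap:
# given the endpoints and tangent directions of two visible contour
# fragments, interpolate a smooth curve between them with a cubic
# Hermite segment. (Illustrative only -- not a model of human vision.)
import numpy as np

def hermite_completion(p0, t0, p1, t1, n=50):
    """Cubic Hermite curve from p0 (tangent t0) to p1 (tangent t1)."""
    p0, t0, p1, t1 = map(np.asarray, (p0, t0, p1, t1))
    s = np.linspace(0.0, 1.0, n)[:, None]
    h00 = 2 * s**3 - 3 * s**2 + 1
    h10 = s**3 - 2 * s**2 + s
    h01 = -2 * s**3 + 3 * s**2
    h11 = s**3 - s**2
    return h00 * p0 + h10 * t0 + h01 * p1 + h11 * t1

# Endpoints where the visible fragments disappear behind an occluder,
# with tangents continuing each fragment's direction into the gap.
curve = hermite_completion(p0=(0.0, 0.0), t0=(1.0, 0.5),
                           p1=(2.0, 0.0), t1=(1.0, -0.5))
print(curve[0], curve[-1])   # starts at p0, ends at p1
```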
(3) Perception of transparency: When an observer views an object through a partially transmissive ("transparent") surface, the photometric contributions of two distinct surfaces along each line of sight collapse onto a single intensity value on the retina. The visual system is then faced with the problem of "unconfounding" the separate contributions of the two surfaces. In particular, it must (i) determine whether the image intensity arises from a single surface in plain view, or from two distinct surfaces, one seen through the other, and (ii) assign surface attributes, such as reflectance and opacity, to the two layers. My work in transparency addresses these issues by investigating the photometric and geometric properties the visual system uses to initiate a decomposition of image intensity into two distinct surfaces, and the way in which it quantitatively assigns reflectance and opacity to the surfaces.
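To see why this is a genuinely underdetermined problem, consider the classic episcotister formulation going back to Metelli (stated here only as the standard textbook model, not as the specific model tested in my work). If a background region of luminance A is viewed through a layer with transmittance α and own luminance T, the luminance reaching the eye in that region is

    P = α · A + (1 − α) · T.

A single observed value P is consistent with many different (α, T) pairs, so the visual system must bring additional photometric and geometric constraints to bear in order to recover a unique layered interpretation.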