The way the world looks to us is a remarkable achievement of our visual system. The visual inputs we receive are just the two-dimensional images projected on our retinas. But from these our brain is able to construct representations of three-dimensional objects and surfaces laid out in space. Research in our lab is aimed at understanding how the human brain computes representations of objects and surfaces from the retinal images, and how it uses these representations for various tasks.
Specific topics include:
1. Shape Perception: How does the brain represent the shape of objects so that, for example, we can tell whether the shape we're seeing right now is the same as one we saw earlier? An ongoing focus of the lab is "part-based representations" of shape, which decompose complex objects into simpler parts and then describe the object's shape in terms of these parts and the way they are put together.
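To make the idea of a part-based representation concrete, here is a minimal sketch in Python. The part names, shape descriptors, and relation labels are illustrative assumptions, not the lab's actual model: the point is only that an object is encoded as a set of simple parts plus the spatial relations that join them.

```python
from dataclasses import dataclass

@dataclass
class Part:
    name: str    # illustrative label, e.g. "handle"
    shape: str   # coarse shape descriptor, e.g. "curved tube"

@dataclass
class ObjectShape:
    parts: list[Part]
    # (part_a, relation, part_b) triples describing how parts are put together
    relations: list[tuple[str, str, str]]

# A hypothetical mug: two simple parts plus one attachment relation
mug = ObjectShape(
    parts=[Part("body", "cylinder"), Part("handle", "curved tube")],
    relations=[("handle", "attached-to-side-of", "body")],
)
```

Comparing two shapes then reduces to comparing their parts and relations, which is one way such a representation could support recognizing a shape seen earlier.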
2. Object Completion: When we see an object that is partly hidden behind another surface, we can often perceive its shape as complete. Similarly, we can see "illusory contours" where no boundary actually exists in the image. How does the brain manage to fill in the missing portions of an object's boundary?
3. Predicting Object Behavior: When we see an object, our brains represent not only how the object looks right now, but also how it might behave in the near future. For example, if we see an image of a tilted vase, we can tell in a single glance whether the vase is likely to fall over and break, or to return to its upright position. How is the brain able to infer the forces that are acting on an object from just a single snapshot, thereby allowing us to predict its behavior?
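The physical intuition behind the tilted-vase example can be sketched with a standard stability rule (a simplification, not the lab's model of visual inference): a resting object tips over when the vertical projection of its center of mass falls outside its base of support. The coordinates below are made up for illustration.

```python
def will_tip(com_x: float, base_left: float, base_right: float) -> bool:
    """Return True if the center of mass projects outside the base of support.

    com_x: horizontal position of the object's center of mass
    base_left, base_right: horizontal extent of the contact footprint
    """
    return not (base_left <= com_x <= base_right)

# Slightly tilted vase: center of mass still over the base -> rights itself
print(will_tip(0.1, -0.5, 0.5))   # False
# Strongly tilted vase: center of mass past the base edge -> falls over
print(will_tip(0.8, -0.5, 0.5))   # True
```

Applying even this simple rule from an image already requires recovering 3D quantities (the center of mass and the contact footprint) from a 2D snapshot, which is part of what makes the brain's single-glance judgment remarkable.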