Whether you are driving down a neighborhood street or hiking through a wildlife-filled national park, your environment is filled with objects of varying sizes. Objects are integral to human behavior because we often need to interact with them, use them as tools, or tell other people where to find them. As such, it is no surprise that the human visual attention system is object-selective. However, because objects vary in size, shape, and distance, distant objects often appear distorted in size compared to nearby objects, and these discrepancies lead to illusions. For example, a stop sign at an intersection looks larger than a car at the end of the block, and an outstretched palm can appear larger than the moon. In reality, however, a nearby stop sign is smaller than a car in the distance, and the moon over the Grand Canyon is far larger than your hand, despite how it might appear in artsy social media pictures. These illusions are the result of a conflict between the actual and the perceived size of objects. But how do we deal with this discrepancy, and does it shape how we interact with objects?
Due to their importance in visual perception, objects have been at the center of vision research for decades. Traditionally, however, vision researchers have often treated objects of varying size equally, independent of their size in the real world, or have relied on simple shapes such as rectangles, which have no canonical size, as placeholders for the objects people encounter in their everyday lives. In our lab, until recently, we had been using such simple rectangles as proxies for objects to elucidate how object structure guides attentional selection.
During one of our lab meetings, my coauthors and I were designing an experiment to test how attention is deployed across objects – but this time, we planned to use real-world objects! Would attention within real-world objects be deployed in a similar fashion to attention within rectangles? However, we swiftly realized from a mock-up of the experiment that one object would frequently differ from the other in real-world size, a factor we had not considered! We combed through the literature and quickly found findings from the Konkle lab that certain visual brain areas are most strongly activated in response to large, rectangular objects compared to smaller, curvier ones, but we did not know how this would affect behavior.
This is when the ‘aha’ moment happened. Wait… could participants’ knowledge of object size actually affect attention within object images, even if the images were exactly the same size on the screen? We did not know precisely how real-world size affected attention within objects; this was a novel and important question with ramifications for much of the vision and attention community. We decided to present an image of a single object in the center of the screen, attract attention to one end of the object, and measure how fast participants responded to targets presented within the object. If real-world size influenced attention, vision researchers could no longer treat objects equivalently, even those matched in on-screen size. It would also point to a mechanism by which the brain maintains size constancy of objects.
Our results were telling: in general, attention was deployed less efficiently within objects that are physically larger in the real world, and this effect scaled with increasing object size. This was awesome – we had demonstrated that prior knowledge of object size influences not just object processing and perception, but also how people engage with objects in their environment!
Prior knowledge of object size is critical to coherent object and scene representations. We can only pay attention to a small subset of the information around us, and size is one of the factors that determines how we engage with a particular object. Expectations of object size shape the visual searches people perform every day, from radiologists looking for tumors and air traffic controllers tracking multiple planes, to searching for an application on a smartphone or finding the milk in the back of the fridge. How we process and engage with objects affects our everyday lives and productivity, and size-based attentional scaling aids those tasks.
Post Image created by Anna Freerksen (http://www.annafreerksen.com/)