Our visual system can be thought of as a hierarchy, with each subsequent brain area processing increasingly complex aspects of our visual world at any one moment. However, in addition to this feedforward process, a great many connections feed information back from later areas to earlier ones. This feedback is thought to convey important contextual information that allows us to prioritise certain parts of the visual scene in line with our knowledge or expectations about the specific situation we find ourselves in.
Consider the following example: you want to wear a particular pair of red socks. You’ve found the left one, but the right one is somewhere in your messy sock drawer. Rather than checking every sock individually, you can narrow your search by concentrating on a particular feature of the missing sock: its colour. By focusing on the colour red, a feature-selective feedback signal is sent to all the visual neurons that respond most strongly to red, giving them a boost in activity and making all the red socks in the drawer appear more obvious to you (Fig. 1b, left).
But say you have many different red socks in the drawer and need a quicker way of finding the correct one. What other information could you use? If you know you normally put these particular socks on the right side of the drawer, you can concentrate your search there. Here, spatially-based feedback would boost the activity of all visual neurons that respond to the right region of your field of vision, improving your focus on the socks there and allowing you to better ignore everything on the left side of the drawer (Fig. 1b, right).
Whilst the neural mechanisms behind this focus on specific features like colour and spatial location are relatively well understood, it is unclear how these selective feedback processes interact when the visual input is ambiguous and our visual system must make decisions in order to interpret elements of the scene. In our sock example, what happens when there are several socks on the right side of the drawer, all in similar colours? If you focus on a particular sock, you may ask whether its colour is closer to red than to, say, orange (Fig. 1c, left). During such a decision, your focus is on two features simultaneously: first, on the right side of the drawer, leading to the spatially-based feedback we described before boosting neurons that respond well to the right side; second, on the colour of the sock, either more red or more orange depending on the decision you’re coming to. Importantly, during this process, a type of feedback leads to activity that is predictive of your decision, such that by measuring the activity of these neurons one could make a good guess at whether you’ll decide red or orange, even if the colour is really somewhere in between. Such decision-related activity could result from the feature-selective feedback we mentioned before. Specifically, if you start deciding the sock is more red, then your focus on this colour could boost the activity of neurons that respond most strongly to red. This would result in stronger activity before a red decision and weaker activity before an orange one, meaning that if we know which colour the neurons respond more to, we can predict the decision from the strength of their activity.
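This gain-boost account can be sketched as a toy simulation. Everything here is an illustrative assumption rather than a model from the paper: the tuning curves, the gain value, the noise level, and the simple decoding rule are all invented for the sketch. A population of colour-tuned neurons views an ambiguous red–orange stimulus; feedback multiplies the gain of neurons preferring the decided-upon colour, so the decision can be read out from population activity alone:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: preferred hues from pure red (0.0) to pure orange (1.0)
prefs = np.linspace(0.0, 1.0, 50)

def response(stimulus_hue, decision, gain=1.5, width=0.3):
    """Tuned responses to a stimulus, with feedback boosting the gain of
    neurons whose colour preference matches the decided-upon hue."""
    drive = np.exp(-(prefs - stimulus_hue) ** 2 / (2 * width ** 2))
    boost = np.where(np.abs(prefs - decision) < 0.5, gain, 1.0)
    return drive * boost + 0.05 * rng.standard_normal(prefs.size)

def decode(activity):
    """Guess the decision from which half of the population is more active."""
    red_half = activity[prefs < 0.5].mean()
    orange_half = activity[prefs >= 0.5].mean()
    return "red" if red_half > orange_half else "orange"

ambiguous = 0.5  # a hue exactly midway between red and orange

print(decode(response(ambiguous, decision=0.0)))  # feedback favours red
print(decode(response(ambiguous, decision=1.0)))  # feedback favours orange
```

Even though the stimulus itself is perfectly ambiguous, the feedback-driven gain difference makes the population activity predictive of the upcoming choice, mirroring the decision-related activity described above.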
But just how selective is this feedback: does it only ever target the neurons needed in the current moment? If feedback were unlimited in its selectivity, the brain would require infinite connections to cover all the possible combinations of visual features we might encounter (not just those in your sock drawer!). Alternatively, the brain may make use of a limited number of feedback mechanisms that reuse the same connections for different tasks. For example, when we’re searching the drawer for any old red sock, it makes sense that the related feedback is sent to all neurons that like red. However, when we’re focused on the red socks on the right side of the drawer, only those neurons that care about both red and the right side are needed to make the decision. If feedback were fully selective, we would therefore not expect it to target any neurons other than the ones used to make the decision. On the other hand, if we reuse the same mechanism that helped us find red socks anywhere in the drawer, then this feedback would still target all neurons that like red, regardless of the spatial location they represent (Fig. 1c, right).
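The two alternatives can be contrasted in a minimal sketch, assuming a hypothetical 2×2 population of neurons (colour preference × location preference) with an arbitrary gain of 1.5; the neuron types, numbers, and the `apply_feedback` helper are all invented for illustration:

```python
import numpy as np

# Hypothetical 2x2 population: rows = colour preference, cols = location preference
colours = ["red", "orange"]
locations = ["left", "right"]
baseline = np.ones((2, 2))  # one unit of baseline activity per neuron type

def apply_feedback(activity, colour, location=None, gain=1.5):
    """Boost neurons preferring `colour`; if `location` is given, restrict
    the boost to neurons that also prefer that location (fully selective)."""
    boosted = activity.copy()
    c = colours.index(colour)
    if location is None:  # reused, spatially-unselective feedback
        boosted[c, :] *= gain
    else:                 # fully selective feedback
        boosted[c, locations.index(location)] *= gain
    return boosted

selective = apply_feedback(baseline, "red", "right")
reused = apply_feedback(baseline, "red")

# Neurons preferring red at the *ignored* left location:
print(selective[0, 0])  # untouched under fully selective feedback
print(reused[0, 0])     # still boosted when the feedback is reused
```

The two hypotheses diverge precisely at the red-preferring neurons representing the ignored location: fully selective feedback leaves them alone, whereas a reused colour-feedback mechanism boosts them anyway, which is the signature the experiment below looks for.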
As described in our recent paper in Nature Communications, we tested these alternatives in the lab by having monkeys make visual decisions about 3D objects. The animals’ task was to decide whether an object on the screen was more convex or concave. At the same time, a second, similar-looking object was presented on the other side of the screen and had to be ignored for the decision. The relevant features for this task were thus the depth and the location of the object. Once we were sure the animals used the correct object for their decision and ignored the other, we measured the activity of visual neurons that care about the location and depth of the ignored object. Although these neurons were not used to make the decision, we found that they still received feedback related to the decision the animal made about the depth of the relevant object. Thus, the decision-related feedback appeared to target neurons that care about the depth the animal chose, regardless of whether they represented the relevant location.
That we find decision-related feedback to be spatially unselective even when the task demands spatial selectivity has several important implications. Firstly, it provides evidence that this type of feedback could indeed result from the same mechanism that gives rise to feature-selective feedback, which likewise acts in a widespread manner. Secondly, it suggests that feedback to visual cortex may be limited in its selectivity. Although the reason for this remains unclear, the limitation may represent an advantage for the visual system rather than a drawback: reusing established feedback connections could, for example, allow us to adapt more quickly to the changing demands of our environment without having to reconfigure our visual system and its connections for each new scenario.
Illustration by Steve Quinn.