Sommer Research

Team TMS

Transcranial magnetic stimulation (TMS) is a safe, non-invasive form of neuromodulation in which a coil is placed near the head to produce a transient magnetic field that, in turn, induces a time-varying electric field in the brain. Although TMS is approved by the FDA for treatment of depression and migraine and is used widely in cognitive research, its underlying biological mechanisms of action are still poorly understood. Our lab investigates the neural basis of TMS effects to establish principles for rational design of its clinical applications. Our approach is to study the effects of various TMS protocols on single neurons and circuits in the rhesus macaque brain.
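
For readers unfamiliar with the physics, the following toy calculation (a minimal Python sketch; the pulse parameters are illustrative assumptions, not our stimulator settings) shows how a transient magnetic field induces an electric field via Faraday's law.

    # Toy model of TMS induction (illustrative only): for a spatially
    # uniform dB/dt over a circular region, Faraday's law gives an
    # induced electric field E = (r / 2) * dB/dt at radius r.
    def induced_e_field(dB_dt, r):
        """Induced E (V/m) at radius r (m) for a uniform dB/dt (T/s)."""
        return 0.5 * r * dB_dt

    # Assumed pulse: roughly 1 T reached in about 100 microseconds.
    dB_dt = 1.0 / 100e-6                      # ~1e4 T/s
    print(induced_e_field(dB_dt, r=0.02))     # ~100 V/m at 2 cm
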
Contact: Raveena Kothare 

Team Metacognition

Each decision we make is affected by our past choices and can influence our future choices. This process of linking decisions across time, called metacognition, contributes to our train of thought and, more generally, to cognitive continuity. We aim to discover and characterize the neural circuitry that mediates metacognition in the primate brain. To do this, we record from small populations of neurons during tasks that require metacognition to link one decision to another. The data inform probabilistic, generative models of behavior. Taken together, the biological and computational approaches reveal the interplay between visual perception, memory, planning, and decision-making that results in metacognition.
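
As a sketch of what such a generative model can look like (a Python toy, not our actual model; the noise level and betting slope are assumed parameters), consider a trial in which a first, perceptual decision yields a confidence signal that biases a second, metacognitive decision, such as a bet on being correct.

    # Toy generative model (illustrative only): decision 1 comes from
    # noisy evidence; decision 2 (a bet) depends on the confidence
    # carried forward from decision 1, linking the two across time.
    import numpy as np

    rng = np.random.default_rng(0)

    def trial(signal=0.5, noise=1.0, bet_slope=3.0):
        evidence = signal + noise * rng.standard_normal()
        choice = evidence > 0                    # perceptual decision
        confidence = abs(evidence)               # signal carried forward
        p_bet_high = 1.0 / (1.0 + np.exp(-bet_slope * (confidence - 0.5)))
        bet_high = rng.random() < p_bet_high     # metacognitive decision
        return choice, bet_high

    choices, bets = zip(*(trial() for _ in range(1000)))
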
Contact: Zack Abzug 

Team Corollary Discharge

The image of the world projected onto our retinas is jumpy because we frequently make rapid eye movements called saccades. But, somehow, the brain transforms this chaotic information into a continuous, stable percept. A key factor in this process is the relay of eye movement information, or corollary discharge, to the visual system. We study circuits for corollary discharge and their impact on visual processing using a combination of psychophysics, neural recordings, and computational robotics. In addition to revealing a fundamental component of visual perception, the results inform methods for stabilizing information in systems that use mobile cameras and other sensors.
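
A minimal sketch of the core idea in Python (a toy, not our experimental pipeline): a copy of the movement command is used to counter-shift the incoming image, so the scene stays aligned despite the camera or eye movement.

    # Toy corollary-discharge stabilization (illustrative only): shift
    # the frame by the negative of the commanded movement vector.
    import numpy as np

    def stabilize(frame, saccade_vector):
        """Counter-shift a frame by the commanded (dx, dy) movement."""
        dx, dy = saccade_vector
        return np.roll(frame, shift=(-dy, -dx), axis=(0, 1))

    frame = np.random.rand(64, 64)                      # stand-in image
    moved = np.roll(frame, shift=(5, 8), axis=(0, 1))   # after a "saccade"
    restored = stabilize(moved, saccade_vector=(8, 5))
    assert np.allclose(restored, frame)                 # stable again
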
Contact: Divya Subramanian 

Team Satisficing

In real-world situations, humans often make decisions heuristically, that is, they apply rules of thumb that are satisfactory and suffice for the problem at hand. Such “satisficing” decision-making strategies are often quite successful when subjects face noisy information, time pressure, or other challenges that limit information processing. We investigate how humans and non-human primates choose and use heuristic decision-making strategies through psychophysical and neurophysiological experiments. Some work is conducted in the Duke immersive Virtual Environment (DiVE) to investigate satisficing in naturalistic conditions. The overall goal is to model the adaptive strategies that characterize satisficing so that they can be implemented in autonomous systems.
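
A classic formulation of satisficing, due to Herbert Simon, can be written in a few lines (a Python sketch with an assumed aspiration level, not a model we fit to data): accept the first option that clears an aspiration threshold instead of exhaustively comparing all options.

    # Toy satisficing rule (illustrative only): stop at the first
    # "good enough" option; fall back to the best seen if none clears
    # the threshold. Fast under time pressure and noisy information.
    def satisfice(options, aspiration):
        for i, value in enumerate(options):
            if value >= aspiration:
                return i                         # good enough; stop
        return max(range(len(options)), key=lambda i: options[i])

    # Settles on option 1 (value 0.8) without inspecting the rest.
    print(satisfice([0.3, 0.8, 0.9, 0.95], aspiration=0.7))   # -> 1
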
Contact: Anthony Alers 

Team Optogenetics

Using viral methods and gene transfection, we are developing novel methods to control the activity of single neurons and specific neuronal populations in the primate brain. The methods include classical optogenetics, new combinations of bioluminescence with light-sensitive opsins, and viral tract-tracing. Our goal is to modulate the activity of very specific neuronal populations and circuits to determine their roles in cognition and behavior.
Contact: Martin Bohlen 

Team Robotics

Why do we perceive the visual world as stable? The information provided by our eyes is jumpy, because we make rapid, frequent eye movements called saccades. The result is much like a movie filmed with a hand-held video camera (but even worse). Somehow, the brain transforms this chaotic information into a stable percept. One idea is that the stabilization depends on a special class of visually responsive neurons in the brain. They “sneak a peek” at the part of the visual scene that they will see after the saccade, an operation called presaccadic remapping. A direct test of this idea is nearly impossible; one would have to find all such neurons, silence them, and see if visual perception becomes jumpy when the eyes move.

We therefore followed the dictum, “To understand a system, you must try to make it”. We built a system that uses video cameras for eyes, a computer model for a brain, and robots that use the model to guide their arms. We will train the system until the robots reach and grab objects accurately even as their cameras/eyes move. After training, we will examine the simulated neurons in the model. If presaccadic remapping is necessary for stabilizing visual inputs, the trained neurons should exhibit the property. We could then computationally manipulate those neurons to understand how they promote visual stability. The robotic system that we develop should be useful for solving myriad problems in neuroscience that are beyond the reach of current biological methods.
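
As a conceptual toy (a Python sketch of the idea only, not the trained robotic model), presaccadic remapping can be caricatured as a model neuron that, when a saccade is planned, reads out the scene from the location its receptive field will occupy after the movement.

    # Toy presaccadic remapping (illustrative only): the neuron
    # "sneaks a peek" at its future field just before the saccade.
    import numpy as np

    def response(scene, rf_center, planned_saccade=None):
        """Sample the scene at the RF; remap if a saccade is planned."""
        x, y = rf_center
        if planned_saccade is not None:   # shift ahead of the movement
            dx, dy = planned_saccade
            x, y = x + dx, y + dy
        return scene[y, x]

    scene = np.random.rand(100, 100)
    pre = response(scene, rf_center=(20, 30), planned_saccade=(10, -5))
    post = response(scene, rf_center=(30, 25))  # RF after the saccade
    assert pre == post    # the remapped response anticipates the view
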