Research Topics

Statistical and causal inquiry into the neural code

Research by Huk and collaborators investigates how neurons represent aspects of the environment, and how such signals can be used to make inferences about the environment that inform decisions and guide actions. This research involves building statistical models of neural responses, complemented by causal perturbations of neural activity. Broadly, this is an empirical approach to understanding the neural code in cortex: what do action potentials and spike trains mean, given their location in the circuit and the particular behavioral context?

Capacity, robustness, and error correction

How are neural codes structured to simultaneously achieve a high capacity and robustness to noise? What are the constraints on coding capacity if the network dynamics are responsible for error correction? How do the imperatives of error-correction and compression interact to shape neural representations? The Fiete group studies these questions in the context of the grid cell code and high-capacity Hopfield-like network dynamics. Our collaborators include Ngoc Tran’s group.
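As a minimal sketch of the error-correcting dynamics at play, the classical Hopfield network below (a textbook simplification, not the group's specific high-capacity variant) stores binary patterns with a Hebbian rule and recovers one of them from a corrupted cue; the network size, pattern count, and noise level are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Store P random binary (+1/-1) patterns in an N-neuron Hopfield network.
N, P = 200, 10
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian outer-product learning rule; no self-coupling.
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0.0)

# Corrupt one stored pattern by flipping 15% of its bits.
x = patterns[0].copy()
flip = rng.choice(N, size=int(0.15 * N), replace=False)
x[flip] *= -1

# Asynchronous updates perform error correction: the dynamics descend
# an energy function whose minima are the stored patterns.
for _ in range(5):
    for i in rng.permutation(N):
        x[i] = 1 if W[i] @ x >= 0 else -1

overlap = (x @ patterns[0]) / N  # 1.0 means perfect recovery
print(f"overlap with stored pattern: {overlap:.2f}")
```

Capacity enters through the ratio P/N: push it too high and the stored patterns stop being stable fixed points, which is one face of the capacity-robustness trade-off described above.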

Predicting states of the hippocampal network

Why does the rhythmic state of the hippocampal network vary over time? The Colgin Lab employs statistical models to estimate how hippocampal rhythms depend on an animal’s prior experience or current behavior.
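As a hedged illustration of the kind of statistical model involved, the sketch below fits a logistic regression relating a simulated rhythm state to running speed as a stand-in for "current behavior"; the data, variables, and parameters are hypothetical, not the lab's.

```python
import numpy as np

rng = np.random.default_rng(9)

# Simulated data: the odds of a theta-dominated state rise with speed.
n = 500
speed = rng.uniform(0, 30, n)                        # cm/s
p_theta = 1 / (1 + np.exp(-(speed - 10) / 4))
theta = (rng.random(n) < p_theta).astype(float)      # observed rhythm state

# Logistic regression fit by gradient ascent on the log-likelihood.
z = (speed - speed.mean()) / speed.std()             # standardized predictor
X = np.column_stack([np.ones(n), z])
w = np.zeros(2)
for _ in range(5000):
    p = 1 / (1 + np.exp(-X @ w))
    w += 1e-3 * X.T @ (theta - p)

print(f"speed coefficient: {w[1]:.2f} (positive -> theta more likely)")
```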

Neural processing of noisy inputs

Neurons are inherently noisy, which makes the inputs to each neural system somewhat ambiguous and unreliable. What sorts of adaptations allow neural systems to cope with, and even improve performance in the face of, noisy and unreliable inputs? The Mauk lab uses large-scale computer simulations of the cerebellum to investigate these questions in a brain system well enough understood to address them. Recent discoveries point to highly adaptive processes implemented by the cerebellum to ensure that noisy inputs do not translate into non-adaptive outputs. These mechanisms involve feedback, a ubiquitous but poorly understood aspect of neural architectures.
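The toy simulation below illustrates the general principle rather than the Mauk lab's cerebellar model: negative feedback onto a single rate unit damps the fluctuations that a noisy input would otherwise impose on its output (at the cost of gain, which a downstream stage could restore). All gains and noise levels are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(10)

# A leaky rate unit driven by a noisy input, with and without
# negative feedback of its own output.
dt, steps, tau = 1e-3, 20000, 0.05

def run(feedback_gain):
    r, trace = 0.0, []
    for _ in range(steps):
        drive = 1.0 + 0.5 * rng.standard_normal()     # noisy input
        r += dt / tau * (-r + drive - feedback_gain * r)
        trace.append(r)
    return np.array(trace[steps // 2:])               # discard the transient

for g in (0.0, 4.0):
    out = run(g)
    print(f"feedback gain {g}: mean {out.mean():.2f}, std {out.std():.4f}")
```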

Is the brain optimal? How does it cope with noise?

The brain is a computational machine, capable of encoding, storing, and retrieving information. At the same time, the brain is made up of noisy neurons, which adversely affects its performance. How does the brain cope with noise? How do neurons encode information? How optimal is the neural code from an information-theoretic perspective? These questions are at the heart of Ngoc Tran’s research. Answering them will help us better understand the brain, and potentially uncover new roles for neurons seen in experiments. Currently Ngoc is working on these questions for grid cells. In mammals, grid cells encode the animal’s two-dimensional location with a set of periodic spatial firing patterns of different periods. Dubbed the brain’s ‘inner GPS’, their discovery led to the 2014 Nobel Prize in Physiology or Medicine. However, the grid code’s theoretical performance is extremely sensitive to noise. In recent work, Ila Fiete and Ngoc Tran built a biologically plausible grid cell decoder with optimal performance.
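The sketch below illustrates both properties on a toy one-dimensional grid code with a brute-force decoder; it is not the Fiete-Tran decoder, and the periods and noise level are illustrative. Three small coprime periods already give a coding range equal to their product (high capacity), while sufficiently large phase noise can throw the decoded position far from the true one (noise sensitivity).

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D grid code: each module reports position modulo its period.
# Coprime periods 5, 7, 11 give a range of 5*7*11 = 385 units.
periods = np.array([5, 7, 11])
true_x = 123

phases = true_x % periods                      # noiseless module readouts
noisy = phases + rng.normal(0, 0.3, size=3)    # phase noise in each module

# Brute-force decoder: pick the position whose predicted phases best
# match the noisy readouts, using wrap-around distance per module.
candidates = np.arange(np.prod(periods))
pred = candidates[:, None] % periods
d = np.abs(pred - noisy)
d = np.minimum(d, periods - d)                 # circular distance
x_hat = candidates[np.argmin((d ** 2).sum(axis=1))]
print(f"true position {true_x}, decoded position {x_hat}")
```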

Reasoning and probabilistic inference in the brain

How does the brain solve tasks that require reasoning and probabilistic inference? The Fiete group considers these large questions in the specific context of spatial reasoning and inference, with the goal of relating neural representations of space to the spatial computations these circuits perform.

Image-computable models of neural representation

Sensory neurons represent information about the environment by discharging spikes in a stimulus-selective manner. This selectivity arises from the interplay of multiple biophysical mechanisms, typically operating within a complex hierarchy. To understand the computational significance of these operations in the primate visual system, the Goris lab builds image-computable models of neural representation. These models are simple enough to offer a meaningful understanding of the computational principles underlying functional properties of individual neurons, yet complex enough to be tested against any visual stimulus, including natural images.
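A minimal example of an image-computable model is the linear-nonlinear-Poisson (LNP) cascade sketched below. It is a generic building block rather than the Goris lab's specific architecture, and the Gabor receptive field and gain are illustrative: any image of the right size, including a natural image patch, can be fed through it.

```python
import numpy as np

rng = np.random.default_rng(2)

def gabor(size=32, freq=0.15, theta=0.0, sigma=5.0):
    """Oriented Gabor filter: the linear stage's receptive field."""
    ax = np.arange(size) - size / 2
    X, Y = np.meshgrid(ax, ax)
    Xr = X * np.cos(theta) + Y * np.sin(theta)
    env = np.exp(-(X**2 + Y**2) / (2 * sigma**2))
    return env * np.cos(2 * np.pi * freq * Xr)

def lnp_response(image, rf, gain=5.0):
    """Linear filtering -> halfwave rectification -> Poisson spiking."""
    drive = float((image * rf).sum())      # linear stage
    rate = gain * max(drive, 0.0)          # pointwise nonlinearity
    return rng.poisson(rate)               # stochastic spike count

rf = gabor(theta=np.pi / 4)
image = rng.standard_normal((32, 32))      # stand-in for any stimulus
print("spike count:", lnp_response(image, rf))
```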

Identification in natural scenes

How can the brain identify known signals under natural conditions where the properties of the background and the amplitude of the signal are unknown from one occasion to the next? Bill Geisler and collaborators measure the statistical properties of natural backgrounds that are relevant for specific tasks such as object identification and then determine what neural computations would be optimal for performing those tasks. The scene statistics and optimal computations provide principled hypotheses that are tested in neural and behavioral experiments.
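One textbook computation of this kind, sketched below under simplifying assumptions, is a contrast-normalized template match, whose detection score is invariant to the unknown signal amplitude; the bar template and the Gaussian "background" are hypothetical stand-ins for a real signal and a natural scene patch.

```python
import numpy as np

rng = np.random.default_rng(3)

def detect(patch, template):
    """Amplitude-invariant template match: correlate after removing
    the patch mean and normalizing by contrast energy."""
    p = patch - patch.mean()
    t = template - template.mean()
    return float((p * t).sum() / (np.linalg.norm(p) * np.linalg.norm(t) + 1e-12))

template = np.zeros((16, 16))
template[6:10, :] = 1.0                     # toy known signal: a horizontal bar

background = rng.standard_normal((16, 16))  # stand-in for a natural background
amplitude = rng.uniform(0.5, 2.0)           # unknown from occasion to occasion

score_present = detect(background + amplitude * template, template)
score_absent = detect(background, template)
print(f"signal present: {score_present:.2f}, signal absent: {score_absent:.2f}")
```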

Synchrony in stochastic spiking neural networks

Neural systems propagate information via neuronal networks that transform sensory input into distributed spiking patterns, and dynamically process these patterns to generate behaviorally relevant responses. The presence of noise at every stage of neural processing imposes serious limitations on the coding strategies of these networks. In particular, coding information via spike timing, which presumably achieves the highest information transmission rate, requires neural assemblies to exhibit a high level of synchrony. Thibaud Taillefumier and collaborators are interested in understanding how synchronous activity emerges in modeled populations of spiking neurons, focusing on the interplay between driving inputs and network structure. Their approach relies on methods from the theories of Markov chains, point processes, and diffusion processes, in combination with exact event-driven simulation techniques. The ultimate goal is two-fold: 1) to identify the input/structure relations that optimize information transmission capabilities, and 2) to characterize the “physical signature” of such putative optimal tunings in recorded spiking activity.
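The sketch below is a much-simplified stand-in for that event-driven machinery: a pair of leaky integrate-and-fire neurons receives a mixture of shared and private noise, and increasing the shared fraction of the drive increases spike synchrony. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two leaky integrate-and-fire neurons driven by a mixture of shared
# and private noise; a larger shared fraction yields more synchrony.
dt, T, tau, v_th = 1e-3, 20.0, 0.02, 1.0
steps = int(T / dt)

def simulate(c):
    """c is the fraction of each neuron's input noise that is shared."""
    v = np.zeros(2)
    spikes = [[], []]
    for t in range(steps):
        shared = rng.standard_normal()
        for i in range(2):
            noise = np.sqrt(c) * shared + np.sqrt(1 - c) * rng.standard_normal()
            v[i] += dt / tau * (-v[i] + 1.2) + 0.3 * np.sqrt(dt) * noise
            if v[i] >= v_th:
                spikes[i].append(t * dt)
                v[i] = 0.0
    return spikes

def coincidences(s0, s1, window=5e-3):
    """Count neuron-0 spikes with a neuron-1 spike within +/- window."""
    s1 = np.asarray(s1)
    return sum(np.any(np.abs(s1 - t) < window) for t in s0)

for c in (0.0, 0.9):
    s0, s1 = simulate(c)
    print(f"shared fraction {c}: {coincidences(s0, s1)}/{len(s0)} coincident")
```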

Hippocampal rhythms and neuronal coding

How do neurons in the entorhinal-hippocampal network code information during different rhythmic states? The Colgin Lab views distinct hippocampal rhythms as windows into different memory processing states. With this viewpoint in mind, the Colgin Lab uses Bayesian reconstruction methods to decode activity of ensembles of hippocampal neurons during different types of rhythms. In addition, the Colgin Lab collaborates with Ila Fiete to test predictions of the attractor network model of grid field formation using recordings of ensembles of grid cells in the medial entorhinal cortex during different network states.
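A minimal sketch of the standard Bayesian reconstruction procedure appears below, decoding position from simulated place-cell spike counts under a Poisson likelihood and a flat prior; the tuning curves, track length, and time window are illustrative rather than taken from the lab's data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Place-cell tuning curves: Gaussian bumps tiling a 1-D track.
positions = np.linspace(0, 100, 200)          # cm
centers = np.linspace(0, 100, 30)             # one cell per center
rates = 20 * np.exp(-(positions[:, None] - centers) ** 2 / (2 * 5.0 ** 2))

# Spike counts in one 100-ms window, with the animal at 42 cm.
dt = 0.1
true_idx = np.argmin(np.abs(positions - 42))
counts = rng.poisson(rates[true_idx] * dt)

# Bayesian reconstruction with a Poisson likelihood and flat prior:
# log P(x | counts) ~ sum_i [counts_i * log(rate_i(x) dt) - rate_i(x) dt]
log_post = (counts * np.log(rates * dt + 1e-12) - rates * dt).sum(axis=1)
x_hat = positions[np.argmax(log_post)]
print(f"true position 42.0 cm, decoded position {x_hat:.1f} cm")
```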

Stochastic neural dynamics

How can neural networks reliably process information in spite of biological noise? Can the same neural assemblies exhibit different coding strategies? How do network structure and input drive combine to explain the adoption of a given coding strategy? More fundamentally, can a meaningful neural computation be distinguished from spontaneous, and perhaps irrelevant, neural activity? In other words, do neural computations have a physical, observable signature?

Neural coding and collective dynamics

The elementary computations of neural networks are understood on a physical and a chemical level. In the brain, neural networks process information by propagating all-or-none action potentials that are converted probabilistically into chemical signals at synapses between neurons. By contrast, the nature of neural computation at the network level, where thoughts are believed to emerge, remains largely mysterious. Do action potentials only “make sense” in the context of collective spiking patterns? Which spatiotemporal patterns constitute a “meaningful computation”? What neural codes make these computations possible in spite of biological noise?

Rehabilitation in bilingual aphasia

In this project, Risto Miikkulainen uses artificial neural networks to model individual bilingual patients whose lexical performance is impaired following an ischemic stroke. The model consists of a self-organizing map for the semantics of words and a self-organizing map for their phonological forms, linked by associative connections. The model is trained to match the patient’s language history, damaged to match their post-stroke impairment, and then used to search for the most effective rehabilitation recipe. The model is currently being tested in an NIH-funded clinical trial, to our knowledge the first artificial neural network model to be tested in this role.
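A minimal sketch of the self-organizing-map building block appears below; it is a generic SOM, not the clinical model, and omits the second map and the associative connections between maps. The grid size, input dimension, and learning parameters are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

# Minimal self-organizing map: a 10x10 grid of units learns to tile a
# 3-D input space so that nearby units represent similar inputs.
grid = 10
units = rng.random((grid, grid, 3))            # one weight vector per unit
coords = np.stack(np.meshgrid(np.arange(grid), np.arange(grid),
                              indexing="ij"), axis=-1)

def train_step(x, lr=0.2, radius=2.0):
    # Winner: the unit whose weights best match the input.
    d = ((units - x) ** 2).sum(axis=-1)
    win = np.unravel_index(np.argmin(d), d.shape)
    # Neighborhood update: the winner and its grid neighbors move toward x.
    g = np.exp(-((coords - np.array(win)) ** 2).sum(axis=-1) / (2 * radius**2))
    units[...] += lr * g[..., None] * (x - units)

for _ in range(2000):
    train_step(rng.random(3))

# Topographic organization: neighboring units end with similar weights.
flat = units.reshape(-1, 3)
adj = np.linalg.norm(units[:, 1:] - units[:, :-1], axis=-1).mean()
rand = np.linalg.norm(flat[rng.permutation(len(flat))] - flat, axis=-1).mean()
print(f"adjacent-unit distance {adj:.3f} vs shuffled-pair distance {rand:.3f}")
```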

Unsupervised methods for determining the functional organization of the human speech cortex

To process speech, the brain must transform a series of low-level acoustic inputs into higher-order linguistic categories, such as phonemes, words, and narrative meaning. This requires encoding acoustic features that occur at very fast timescales as well as information that builds up over long periods of time. How this process is functionally organized into cortical circuits is not well understood. Liberty Hamilton and colleagues showed that, by applying unsupervised methods to neural recordings of people listening to naturally spoken sentences, they could uncover an organization of the auditory cortex and surrounding areas into two spatially and functionally distinct modules: a posterior area that detects fast onsets important for segmenting the beginnings of sentences and phrases, and a slower anterior area that responds in a sustained manner throughout sentences. The Hamilton lab is now applying similar unsupervised methods to examine changes in functional organization during brain development in children with epilepsy. They also apply computational models to analyze which particular sound features are represented in the brain, and how areas functionally interact during natural speech perception and production.
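As a hedged illustration of how an unsupervised decomposition can recover onset-like and sustained response profiles without anatomical labels, the sketch below applies non-negative matrix factorization (one such unsupervised method, not necessarily the variant used in the study) to simulated electrode responses.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(7)

# Simulated electrode responses to a sentence: some sites show a fast
# onset transient, others a sustained response, plus noise.
t = np.linspace(0, 2, 200)                       # seconds
onset = np.exp(-t / 0.1)                         # fast, decaying transient
sustained = 1 / (1 + np.exp(-(t - 0.3) / 0.05))  # slow, persistent response

n_sites = 40
mix = rng.random(n_sites)                        # each site blends the two
X = np.outer(mix, onset) + np.outer(1 - mix, sustained)
X += 0.05 * rng.random((n_sites, len(t)))

# The unsupervised decomposition recovers the two response profiles and
# each site's loading on them, with no labels about anatomy required.
model = NMF(n_components=2, init="nndsvd", max_iter=500)
W = model.fit_transform(X)                       # site loadings (40 x 2)
H = model.components_                            # temporal profiles (2 x 200)
print("recovered temporal profiles:", H.shape)
```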

Cortical spike multiplexing using gamma frequency latencies

The Poisson statistics of cortical action potentials have been seen as a basic model of signal representation and proposed as a method of communicating Bayesian statistics. However, these views are increasingly difficult to integrate with spike-timing signals in the gamma frequency band. Dana Ballard and Ruohan Zhang showed in simulation that the two sets of observations can be reconciled if gamma-frequency action potentials are seen as a general-purpose method of modulating fast communication in cortical networks that use phase delays as the communicated signal. Such a representation allows faster computation and much more compact representations than traditional Poisson spiking models. Poisson spike distributions can then be understood as a correlate of the more basic gamma phase coding model, which can mix several independent computations.
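The toy code below illustrates the basic idea of a gamma-latency code under strong simplifying assumptions that are mine, not the paper's (one spike per cycle, a fixed 40 Hz rhythm, small timing jitter): an analog value is written into each spike's phase within its gamma cycle and read back out by the decoder.

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy gamma-latency code: each 25-ms gamma cycle carries one analog
# value as the spike's latency (phase) within the cycle.
gamma_period = 0.025                      # seconds (40 Hz)

def encode(values, jitter=5e-4):
    """One spike per cycle; latency proportional to the value."""
    cycles = np.arange(len(values)) * gamma_period
    return cycles + values * gamma_period + rng.normal(0, jitter, len(values))

def decode(spike_times):
    """Recover values from each spike's phase within its gamma cycle."""
    return (spike_times % gamma_period) / gamma_period

values = rng.random(1000)                 # stream of analog values in [0, 1)
spikes = encode(values)
err = np.abs(decode(spikes) - values).mean()
print(f"mean decoding error: {err:.4f}")  # one value per 25-ms cycle
```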

Timing and temporal coding

How do brain systems tell time and implement temporal coding when the timing of outputs or of learned responses is an essential aspect of neural computation? The Mauk lab uses eyelid conditioning to study cerebellar mechanisms of timing and temporal coding. These studies use large-scale computer simulations (millions of neurons) to investigate how stimuli are temporally encoded and how neural systems implement temporal codes.
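The sketch below illustrates one common modeling idea for cerebellar-style timing, not the Mauk lab's million-neuron simulations: granule-like units provide a temporal basis of staggered activity bumps following the stimulus, and learned weights over that basis produce an output peaked at a target interval. All parameters are illustrative.

```python
import numpy as np

# Temporal basis: 50 granule-like units with staggered Gaussian bumps
# of activity after stimulus onset.
t = np.linspace(0, 1, 200)                     # time since stimulus (s)
centers = np.linspace(0, 1, 50)
basis = np.exp(-(t[:, None] - centers) ** 2 / (2 * 0.03 ** 2))

# Desired response: a bump at the learned interval (e.g., eyelid closure).
target_time = 0.6                              # seconds
target = np.exp(-(t - target_time) ** 2 / (2 * 0.05 ** 2))

# Least-squares weights stand in for learning that shapes the timed output.
w, *_ = np.linalg.lstsq(basis, target, rcond=None)
output = basis @ w
print(f"output peaks at {t[np.argmax(output)]:.2f} s (target {target_time} s)")
```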