Research Topics

Statistical and causal inquiry into the neural code

Research by Huk and collaborators investigates how neurons represent aspects of the environment, and how such signals can be used to make inferences that inform decisions and guide actions. This research involves building statistical models of neural responses, complemented by causal perturbations of neural activity. Broadly, this is an empirical approach to understanding the neural code in cortex: what do action potentials and spike trains mean, given their location in the circuit and the behavioral context in which they occur?

Capacity, robustness, and error correction

How are neural codes structured to simultaneously achieve a high capacity and robustness to noise? What are the constraints on coding capacity if the network dynamics are responsible for error correction? How do the imperatives of error-correction and compression interact to shape neural representations? The Fiete group studies these questions in the context of the grid cell code and high-capacity Hopfield-like network dynamics. Our collaborators include Ngoc Tran’s group.

Predicting states of the hippocampal network

Why does the rhythmic state of the hippocampal network vary over time? The Colgin Lab employs statistical models to estimate how hippocampal rhythms depend on an animal’s prior experience or current behavior.
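
As a hedged sketch of what such a statistical model could look like (not the Colgin Lab's actual analysis), the toy example below asks whether running speed predicts a theta-dominated epoch; the variable names, the simulated data, and the choice of logistic regression are all assumptions for illustration.

```python
# Hypothetical sketch: does running speed predict whether the hippocampal LFP
# is in a theta-dominated state? Variables, simulated data, and the
# logistic-regression model are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_epochs = 1000
running_speed = rng.gamma(shape=2.0, scale=5.0, size=n_epochs)   # cm/s
# Simulated ground truth in which faster running favors theta epochs.
p_theta = 1.0 / (1.0 + np.exp(-(running_speed - 8.0) / 3.0))
is_theta = rng.random(n_epochs) < p_theta

model = LogisticRegression()
model.fit(running_speed[:, None], is_theta)
print("speed coefficient:", model.coef_[0, 0])
print("P(theta | speed = 20 cm/s):", model.predict_proba([[20.0]])[0, 1])
```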

Neural processing of noisy inputs

Neurons are inherently noisy, which makes the inputs to each neural system somewhat ambiguous or unreliable. What sorts of adaptations exist in neural systems to cope with, and improve performance in the face of, noisy and unreliable inputs? The Mauk lab uses large-scale computer simulations of the cerebellum to investigate these questions in a brain system understood well enough to address them. Recent discoveries point to highly adaptive processes implemented by the cerebellum to ensure that noisy inputs do not translate into non-adaptive outputs. These mechanisms involve feedback, a ubiquitous but poorly understood aspect of neural architectures.

Is the brain optimal? How does it cope with noise?

The brain is a computational machine, capable of encoding, storing, and retrieving information. At the same time, the brain is made up of noisy neurons, and this noise adversely affects its performance. How does the brain cope with noise? How do neurons encode information? How optimal is the neural code from an information-theoretic perspective? These questions are at the heart of Ngoc Tran’s research. Answering them will help us better understand the brain, and potentially uncover new roles for neurons seen in experiments. Currently Ngoc is working on these questions for grid cells. In mammals, grid cells encode the animal’s two-dimensional location with a set of periodic spatial firing patterns of different periods. Dubbed the brain’s ‘inner GPS’, grid cells earned their discoverers the 2014 Nobel Prize in Physiology or Medicine. However, the theoretical performance of grid cells is extremely sensitive to noise. In recent work, Ila Fiete and Ngoc Tran built a biologically plausible grid cell decoder with optimal performance.
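
A minimal sketch of the modular grid code may help make the coding scheme concrete. The one-dimensional simplification, the period values, and the brute-force decoder below are illustrative assumptions; this is not the biologically plausible decoder mentioned above.

```python
# Illustrative 1-D grid code: each module reports position only modulo its
# spatial period, and position is recovered by combining phases across modules.
# Periods and the brute-force decoder are assumptions for illustration.
import numpy as np

periods = np.array([30.0, 42.0, 55.0])          # cm, one period per grid module
true_position = 123.7                            # cm

def encode(x, noise_sd=0.0, rng=None):
    """Phase (in [0, 1)) reported by each module, optionally corrupted by noise."""
    phases = (x / periods) % 1.0
    if noise_sd > 0:
        phases = (phases + rng.normal(0.0, noise_sd, size=periods.size)) % 1.0
    return phases

def decode(phases, x_max=500.0, dx=0.1):
    """Brute-force decoder: pick the position whose phases best match (circularly)."""
    candidates = np.arange(0.0, x_max, dx)
    pred = (candidates[:, None] / periods) % 1.0
    err = np.abs(pred - phases)
    err = np.minimum(err, 1.0 - err)             # circular distance
    return candidates[np.argmin((err ** 2).sum(axis=1))]

rng = np.random.default_rng(1)
noisy_phases = encode(true_position, noise_sd=0.02, rng=rng)
print("decoded position:", decode(noisy_phases))
```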

Reasoning and probabilistic inference in the brain

How does the brain solve tasks that require reasoning and probabilistic inference? The Fiete group considers these large questions in the specific context of spatial reasoning and inference, with the goal of relating neural representations of space to the spatial computations these circuits perform.

Image-computable models of neural representation

Sensory neurons represent information about the environment by discharging spikes in a stimulus-selective manner. This selectivity arises from the interplay of multiple biophysical mechanisms, typically operating within a complex hierarchy. To understand the computational significance of these operations in the primate visual system, the Goris lab builds image-computable models of neural representation. These models are simple enough to offer a meaningful understanding of the computational principles underlying functional properties of individual neurons, yet complex enough to be tested against any visual stimulus, including natural images.
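
As a rough illustration of what “image-computable” means here, the sketch below implements a single linear-nonlinear (LN) model neuron that can be evaluated on any grayscale image patch. The Gabor filter parameters and the rectified-power nonlinearity are assumptions for illustration, not the Goris lab's published models, which include normalization and hierarchical stages.

```python
# Minimal image-computable LN neuron: an oriented linear filter followed by a
# static nonlinearity, applied to an arbitrary grayscale image patch.
import numpy as np

def gabor(size=32, wavelength=8.0, theta=0.0, sigma=6.0):
    """Oriented Gabor filter (illustrative parameters)."""
    y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)

def ln_response(image_patch, filt, exponent=2.0, gain=1.0):
    """Linear filtering followed by a static (rectified power) nonlinearity."""
    drive = np.sum(filt * image_patch)
    return gain * np.maximum(drive, 0.0) ** exponent

rng = np.random.default_rng(0)
natural_patch = rng.standard_normal((32, 32))     # stand-in for a natural image patch
print("model firing rate:", ln_response(natural_patch, gabor(theta=np.pi / 4)))
```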

Identification in natural scenes

How can the brain identify known signals under natural conditions where the properties of the background and the amplitude of the signal are unknown from one occasion to the next? Bill Geisler and collaborators measure the statistical properties of natural backgrounds that are relevant for specific tasks such as object identification and then determine what neural computations would be optimal for performing those tasks. The scene statistics and optimal computations provide principled hypotheses that are tested in neural and behavioral experiments.
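
The sketch below illustrates the general flavor of a template-matching observer whose response is normalized by an estimate of background contrast, in the spirit of the normalized matched-template detector described in the Geisler publication further down this page. The template, the contrast estimate, and all parameter values are illustrative assumptions.

```python
# Toy normalized template-matching observer (illustrative assumptions only).
import numpy as np

def template_response(patch, template, eps=1e-6):
    """Template match divided by a simple estimate of background contrast."""
    return np.sum(patch * template) / (patch.std() + eps)

rng = np.random.default_rng(2)
template = np.outer(np.hanning(16), np.hanning(16))   # stand-in target profile
template /= np.linalg.norm(template)

background = rng.standard_normal((16, 16))            # stand-in natural background
print("response, target absent :", template_response(background, template))
print("response, target present:", template_response(background + 3.0 * template, template))
# A detection decision is made by comparing the normalized response to a criterion.
```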

Synchrony in stochastic spiking neural networks

Neural systems propagate information via neuronal networks that transform sensory input into distributed spiking patterns, and dynamically process these patterns to generate behaviorally relevant responses. The presence of noise at every stage of neural processing imposes serious limitations on the coding strategies of these networks. In particular, coding information via spike timing, which presumably achieves the highest information transmission rate, requires neural assemblies to exhibit a high level of synchrony. Thibaud Taillefumier and collaborators are interested in understanding how synchronous activity emerges in modeled populations of spiking neurons, focusing on the interplay between driving inputs and network structure. Their approach relies on methods from the theory of Markov chains, point processes, and diffusion processes, in combination with exact event-driven simulation techniques. The ultimate goal is two-fold: 1) to identify the input/structure relations that optimize information transmission capabilities, and 2) to characterize the “physical signature” of such putative optimal tunings in recorded spiking activity.
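
A toy, time-stepped simulation can illustrate the basic intuition that shared input drives synchrony in a spiking population. The leaky integrate-and-fire model, the parameters, and the crude synchrony index below are all assumptions for illustration; the actual work relies on exact event-driven methods and richer models.

```python
# Minimal leaky integrate-and-fire population receiving a mixture of shared and
# private noisy drive, to illustrate how common input can synchronize spiking.
import numpy as np

rng = np.random.default_rng(3)
n_neurons, n_steps, dt = 50, 2000, 1e-3          # 2 s at 1 ms resolution
tau, v_thresh, v_reset = 0.02, 1.0, 0.0          # time constant (s), threshold, reset
shared_fraction = 0.7                             # weight of the common input

v = np.zeros(n_neurons)
spikes = np.zeros((n_steps, n_neurons), dtype=bool)
for t in range(n_steps):
    shared = rng.standard_normal()
    private = rng.standard_normal(n_neurons)
    drive = 1.2 + 0.5 * (shared_fraction * shared + (1 - shared_fraction) * private)
    v += dt / tau * (-v + drive)
    fired = v >= v_thresh
    spikes[t] = fired
    v[fired] = v_reset

# Crude synchrony index: variability of the population rate across time bins
# (independent, Poisson-like spiking would give a value near 1).
pop_rate = spikes.sum(axis=1)
print("population-rate Fano factor:", pop_rate.var() / pop_rate.mean())
```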

Hippocampal rhythms and neuronal coding

How do neurons in the entorhinal-hippocampal network code information during different rhythmic states? The Colgin Lab views distinct hippocampal rhythms as windows into different memory processing states. With this viewpoint in mind, the Colgin Lab uses Bayesian reconstruction methods to decode activity of ensembles of hippocampal neurons during different types of rhythms. In addition, the Colgin Lab collaborates with Ila Fiete to test predictions of the attractor network model of grid field formation using recordings of ensembles of grid cells in the medial entorhinal cortex during different network states.
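
For concreteness, the sketch below shows a standard Poisson Bayesian decoder of position from place-cell spike counts, representative of the general class of reconstruction methods used for hippocampal ensembles; the tuning curves, time window, and flat prior are illustrative assumptions rather than the lab's actual pipeline.

```python
# Standard Poisson Bayesian decoder for position from place-cell spike counts.
# Tuning curves, bin size, and the flat prior are illustrative assumptions.
import numpy as np

n_cells, n_positions, dt = 40, 100, 0.25          # cells, spatial bins, window (s)
positions = np.linspace(0.0, 1.0, n_positions)    # normalized track position

rng = np.random.default_rng(4)
centers = rng.uniform(0.0, 1.0, n_cells)
width, peak_rate = 0.08, 20.0                     # place-field width, peak rate (Hz)
tuning = peak_rate * np.exp(-(positions[None, :] - centers[:, None])**2 / (2 * width**2))

true_bin = 37
counts = rng.poisson(tuning[:, true_bin] * dt)    # observed spike counts in one window

# log P(x | counts) is proportional to sum_i [counts_i * log(rate_i(x) * dt) - rate_i(x) * dt]
log_post = (counts[:, None] * np.log(tuning * dt + 1e-12) - tuning * dt).sum(axis=0)
print("true position:", positions[true_bin])
print("decoded position:", positions[np.argmax(log_post)])
```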

Stochastic neural dynamics

How can neural networks reliably process information in spite of biological noise? Can the same neural assemblies exhibit different coding strategies? How do network structure and input drive combine to explain the adoption of a given coding strategy? More fundamentally, can a meaningful neural computation be distinguished from (perhaps irrelevant) spontaneous neural activity? In other words, do neural computations have a physical, observable signature?

Neural coding and collective dynamics

The elementary computations of neural networks are understood on a physical and a chemical level. In the brain, neural networks process information by propagating all-or-none action potentials that are converted probabilistically at synapses between neurons. By contrast, the nature of neural computation at the network level – where thoughts are believed to emerge – remains largely mysterious. Do action potentials only “make sense” in the context of collective spiking patterns? Which spatiotemporal patterns constitute a “meaningful computation”? What neural codes make these computations possible in spite of biological noise?

Rehabilitation in bilingual aphasia

In this project, Risto Miikkulainen uses artificial neural networks to model individual bilingual patients whose lexical performance is impaired following an ischemic stroke. The model consists of a self-organizing map for the semantics of words and a self-organizing map for their phonological forms, connected with associative connections. The model is trained to match the patient’s language history, damaged to match their post-stroke impairment, and then used to search for the most effective rehabilitation recipe. The model is currently being tested in an NIH-funded clinical trial—to our knowledge, the first artificial neural network model to be tested in this role.
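
For readers unfamiliar with self-organizing maps, the sketch below shows a minimal SOM training loop of the kind that could organize lexical feature vectors on a two-dimensional map. The map size, input dimensionality, learning schedule, and synthetic data are illustrative assumptions; this is not the clinical model itself.

```python
# Minimal self-organizing map (SOM) training loop on synthetic lexical vectors.
import numpy as np

rng = np.random.default_rng(5)
map_rows, map_cols, input_dim = 10, 10, 20
grid_r, grid_c = np.mgrid[0:map_rows, 0:map_cols]

def train_som(weights, data, epochs=20, lr0=0.5, radius0=5.0):
    """Classic SOM update: find the best-matching unit, pull its neighborhood toward the input."""
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)
        radius = max(radius0 * (1.0 - epoch / epochs), 1.0)
        for x in data:
            dist = np.linalg.norm(weights - x, axis=2)            # distance to every map node
            r, c = np.unravel_index(np.argmin(dist), dist.shape)  # best-matching unit
            neigh = np.exp(-((grid_r - r) ** 2 + (grid_c - c) ** 2) / (2 * radius ** 2))
            weights += lr * neigh[:, :, None] * (x - weights)     # in-place neighborhood update
    return weights

word_vectors = rng.random((200, input_dim))        # stand-in lexical feature vectors
som = train_som(rng.random((map_rows, map_cols, input_dim)), word_vectors)
print("trained map shape:", som.shape)
```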

Unsupervised methods for determining the functional organization of the human speech cortex

To process speech, the brain must transform a series of low-level acoustic inputs into higher-order linguistic categories, such as phonemes, words, and narrative meaning. This involves encoding acoustic features that occur at very fast timescales as well as information that builds up over long periods of time. How this process is functionally organized into cortical circuits is not well understood. Liberty Hamilton and colleagues showed that, by applying unsupervised methods to neural recordings of people listening to naturally spoken sentences, they could uncover an organization of the auditory cortex and surrounding areas into two spatially and functionally distinct modules: a posterior area that detects fast onsets important for segmenting the beginnings of sentences and phrases, and a slower anterior area that responds in a sustained manner throughout sentences. The Hamilton lab is now applying similar unsupervised methods to examine changes in functional organization during brain development in children with epilepsy. They also apply computational models to analyze which particular sound features are represented in the brain, and how areas functionally interact during natural speech perception and production.
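
As a stand-in for this kind of unsupervised analysis (the published work used its own factorization approach), the sketch below decomposes a synthetic electrode-by-time response matrix into a small number of temporal components using scikit-learn's generic NMF; all shapes, parameters, and the synthetic "onset" and "sustained" components are assumptions.

```python
# Generic unsupervised decomposition of electrode-by-time responses into a few
# temporal components, on synthetic data built from an onset-like and a
# sustained-like response profile.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(6)
n_electrodes, n_timepoints = 60, 500

t = np.linspace(0, 1, n_timepoints)
onset = np.exp(-t / 0.05)                          # fast, transient profile
sustained = 1 - np.exp(-t / 0.2)                   # slow, sustained profile
mixing = rng.random((n_electrodes, 2))             # nonnegative electrode weights
responses = mixing @ np.vstack([onset, sustained]) + 0.05 * rng.random((n_electrodes, n_timepoints))

model = NMF(n_components=2, init="nndsvd", max_iter=500)
weights = model.fit_transform(responses)           # electrodes x components
components = model.components_                     # components x timepoints
print("electrode weights shape:", weights.shape)
print("temporal components shape:", components.shape)
```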

Cortical spike multiplexing using gamma frequency latencies

The Poisson statistics of cortical action potentials have been seen as a basic model of signal representation and have been proposed as a method of communicating Bayesian statistics. However, these views are increasingly difficult to integrate with spike timing signals in the gamma frequency spectrum. Dana Ballard and Ruohan Zhang showed in simulation that the two sets of observations can be reconciled if gamma frequency action potentials are seen as a general-purpose method of modulating fast communication in cortical networks that use phase delays as the communicated signal. Such a representation allows faster computation and much more compact representations than traditional Poisson spiking models. Poisson spike distributions can be understood as a correlate of the more basic gamma phase coding model, which can multiplex several independent computations.
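
The toy sketch below contrasts the two coding schemes in their simplest form: spikes emitted at an input-dependent latency within each gamma cycle versus a rate-matched Poisson train. It is a schematic illustration only, not the simulation model from this work, and every parameter is an assumption.

```python
# Schematic contrast between gamma-phase coding (one spike per cycle, latency
# set by input strength) and a rate-matched Poisson train.
import numpy as np

rng = np.random.default_rng(7)
gamma_freq, duration = 40.0, 1.0                    # Hz, seconds
cycle = 1.0 / gamma_freq
cycle_starts = np.arange(0.0, duration, cycle)

def phase_coded_spikes(input_strength):
    """Stronger input -> earlier spike (shorter latency) within each gamma cycle."""
    latency = (1.0 - np.clip(input_strength, 0.0, 1.0)) * cycle
    return cycle_starts + latency

def poisson_spikes(rate):
    n = rng.poisson(rate * duration)
    return np.sort(rng.uniform(0.0, duration, n))

phase_train = phase_coded_spikes(input_strength=0.8)
poisson_train = poisson_spikes(rate=gamma_freq)     # matched mean rate
print("phase-coded spike times (first 5):", phase_train[:5])
print("Poisson spike times (first 5):   ", poisson_train[:5])
```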

Timing and temporal coding

How do brain systems tell time and implement temporal coding when the timing of outputs or the timing of learned responses is an essential aspect of neural computation? The Mauk lab uses eyelid conditioning to study cerebellar mechanisms of timing and temporal coding. These studies use large-scale computer simulations (millions of neurons) to investigate how stimuli are temporally coded and how neural systems implement temporal codes.

Publications


Enzyme-free nucleic acid dynamical systems

N Srinivas, J Parkin, G Seelig, E Winfree, D Soloveichik bioRxiv 138420

Chemistries exhibiting complex dynamics – from inorganic oscillators to gene regulatory networks – have been long known but cannot be reprogrammed at will because of a lack of control over their evolved or serendipitously found molecular building blocks. Here we show that information-rich DNA strand displacement cascades could be systematically constructed to realize complex temporal trajectories specified by an abstract chemical reaction network model. We codify critical design principles in a compiler that automates the design process, and demonstrate our approach by building a novel DNA-only oscillator. Unlike biological networks that rely on the sophisticated chemistry underlying the central dogma, our test tube realization suggests that simple Watson-Crick base pairing interactions alone suffice for arbitrarily complex dynamics. Our result establishes a basis for autonomous and programmable molecular systems that interact with and control their chemical environment.


Constrained sampling experiments reveal principles of detection in natural scenes

Stephen Sebastian, Jared Abrams, and Wilson S. Geisler PNAS vol. 114 no. 28

A fundamental everyday visual task is to detect target objects within a background scene. Using relatively simple stimuli, vision science has identified several major factors that affect detection thresholds, including the luminance of the background, the contrast of the background, the spatial similarity of the background to the target, and uncertainty due to random variations in the properties of the background and in the amplitude of the target. Here we use an experimental approach based on constrained sampling from multidimensional histograms of natural stimuli, together with a theoretical analysis based on signal detection theory, to discover how these factors affect detection in natural scenes. We sorted a large collection of natural image backgrounds into multidimensional histograms, where each bin corresponds to a particular luminance, contrast, and similarity. Detection thresholds were measured for a subset of bins spanning the space, where a natural background was randomly sampled from a bin on each trial. In low-uncertainty conditions, both the background bin and the amplitude of the target were fixed, and, in high-uncertainty conditions, they varied randomly on each trial. We found that thresholds increase approximately linearly along all three dimensions and that detection accuracy is unaffected by background bin and target amplitude uncertainty. The results are predicted from first principles by a normalized matched-template detector, where the dynamic normalizing gain factor follows directly from the statistical properties of the natural backgrounds. The results provide an explanation for classic laws of psychophysics and their underlying neural mechanisms.


Dissociation of choice formation and choice-correlated activity in macaque visual cortex

RLT Goris, CM Ziemba, GM Stine, EP Simoncelli, JA Movshon Journal of Neuroscience 37 (20), 5195-5203

Responses of individual task-relevant sensory neurons can predict monkeys’ trial-by-trial choices in perceptual decision-making tasks. Choice-correlated activity has been interpreted as evidence that the responses of these neurons are causally linked to perceptual judgments. To further test this hypothesis, we studied responses of orientation-selective neurons in V1 and V2 while two macaque monkeys performed a fine orientation discrimination task. Although both animals exhibited a high level of neuronal and behavioral sensitivity, only one exhibited choice-correlated activity. Surprisingly, this correlation was negative: when a neuron fired more vigorously, the animal was less likely to choose the orientation preferred by that neuron. Moreover, choice-correlated activity emerged late in the trial, earlier in V2 than in V1, and was correlated with anticipatory signals. Together, these results suggest that choice-correlated activity in task-relevant sensory neurons can reflect postdecision modulatory signals.


A systems-based analysis of dendritic nonlinearities reveals temporal feature extraction in mouse L5 cortical neurons

Kalmbach BE, Gray R, Johnston D, and Cook EPA Journal of Neurophysiology 117: 2188–2208

What do dendritic nonlinearities tell a neuron about signals injected into the dendrite? Linear and nonlinear dendritic components affect how time-varying inputs are transformed into action potentials (APs), but the relative contribution of each component is unclear. We developed a novel systems-identification approach to isolate the nonlinear response of layer 5 pyramidal neuron dendrites in mouse prefrontal cortex in response to dendritic current injections. We then quantified the nonlinear component and its effect on the soma, using functional models composed of linear filters and static nonlinearities. Both noise and waveform current injections revealed linear and nonlinear components in the dendritic response. The nonlinear component consisted of fast Na+ spikes that varied in amplitude 10-fold in a single neuron. A functional model reproduced the timing and amplitude of the dendritic spikes and revealed that they were selective to a preferred input dynamic (~4.5 ms rise time). The selectivity of the dendritic spikes became wider in the presence of additive noise, which was also predicted by the functional model. A second functional model revealed that the dendritic spikes were weakly boosted before being linearly integrated at the soma. For both our noise and waveform dendritic input, somatic APs were dependent on the somatic integration of the stimulus, followed a subset of large dendritic spikes, and were selective to the same input dynamics preferred by the dendrites. Our results suggest that the amplitude of fast dendritic spikes conveys information about high-frequency features in the dendritic input, which is then combined with low-frequency somatic integration.

NEW & NOTEWORTHY The nonlinear response of layer 5 mouse pyramidal dendrites was isolated with a novel systems-based approach. In response to dendritic current injections, the nonlinear component contained mostly fast, variable-amplitude, Na+ spikes. A functional model accounted for the timing and amplitude of the dendritic spikes and revealed that dendritic spikes are selective to a preferred input dynamic, which was verified experimentally. Thus, fast dendritic nonlinearities behave as high-frequency feature detectors that influence somatic action potentials.


Computational principles of memory

Rishidev Chaudhuri and Ila Fiete Nature Neuroscience 19, 394–403

The ability to store and later use information is essential for a variety of adaptive behaviors, including integration, learning, generalization, prediction and inference. In this Review, we survey theoretical principles that can allow the brain to construct persistent states for memory. We identify requirements that a memory system must satisfy and analyze existing models and hypothesized biological substrates in light of these requirements. We also highlight open questions, theoretical puzzles and problems shared with computer science and information theory.


Mitochondrial support of persistent presynaptic vesicle mobilization with age-dependent synaptic growth after LTP

Smith HL, Bourne JN, Cao G, Chirillo MA, Ostroff LE, Watson DJ, and Harris KM eLife 5:e15275
Mitochondria support synaptic transmission through production of ATP, sequestration of calcium, synthesis of glutamate, and other vital functions. Surprisingly, less than 50% of hippocampal CA1 presynaptic boutons contain mitochondria, raising the question of whether synapses without mitochondria can sustain changes in efficacy. To address this question, we analyzed synapses from postnatal day 15 (P15) and adult rat hippocampus that had undergone theta-burst stimulation to produce long-term potentiation (TBS-LTP) and compared them to control or no stimulation. At 30 and 120 min after TBS-LTP, vesicles were decreased only in presynaptic boutons that contained mitochondria at P15, and vesicle decrement was greatest in adult boutons containing mitochondria. Presynaptic mitochondrial cristae were widened, suggesting a sustained energy demand. Thus, mitochondrial proximity reflected enhanced vesicle mobilization well after potentiation reached asymptote, in parallel with the apparently silent addition of new dendritic spines at P15 or the silent enlargement of synapses in adults.

Mechanisms of Orientation Selectivity in the Primary Visual Cortex

Nicholas Priebe Annual Review of Vision Science Vol. 2:85-107

The mechanisms underlying the emergence of orientation selectivity in the visual cortex have been, and continue to be, the subjects of intense scrutiny. Orientation selectivity reflects a dramatic change in the representation of the visual world: Whereas afferent thalamic neurons are generally orientation insensitive, neurons in the primary visual cortex (V1) are extremely sensitive to stimulus orientation. This profound change in the receptive field structure along the visual pathway has positioned V1 as a model system for studying the circuitry that underlies neural computations across the neocortex. The neocortex is characterized anatomically by the relative uniformity of its circuitry despite its role in processing distinct signals from region to region. A combination of physiological, anatomical, and theoretical studies has shed some light on the circuitry components necessary for generating orientation selectivity in V1. This targeted effort has led to critical insights, as well as controversies, concerning how neural circuits in the neocortex perform computations.


Calcium imaging with genetically encoded indicators in behaving primates

Seidemann E, Chen Y, Bai Y, Chen SC, Mehta P, Kajs BL, Geisler WS, Zemelman B V eLife 5:e16178

Understanding the neural basis of behaviour requires studying brain activity in behaving subjects using complementary techniques that measure neural responses at multiple spatial scales, and developing computational tools for understanding the mapping between these measurements. Here we report the first results of widefield imaging of genetically encoded calcium indicator (GCaMP6f) signals from V1 of behaving macaques. This technique provides a robust readout of visual population responses at the columnar scale over multiple mm2 and over several months. To determine the quantitative relation between the widefield GCaMP signals and the locally pooled spiking activity, we developed a computational model that sums the responses of V1 neurons characterized by prior single unit measurements. The measured tuning properties of the GCaMP signals to stimulus contrast, orientation and spatial position closely match the predictions of the model, suggesting that widefield GCaMP signals are linearly related to the summed local spiking activity.


Dissociated functional significance of decision-related activity in the primate dorsal stream

Katz LN, Yates JL, Pillow JW, Huk AC Nature 535, 285–288

During decision making, neurons in multiple brain regions exhibit responses that are correlated with decisions. However, it remains uncertain whether or not various forms of decision-related activity are causally related to decision making. Here we address this question by recording and reversibly inactivating the lateral intraparietal (LIP) and middle temporal (MT) areas of rhesus macaques performing a motion direction discrimination task. Neurons in area LIP exhibited firing rate patterns that directly resembled the evidence accumulation process posited to govern decision making, with strong correlations between their response fluctuations and the animal’s choices. Neurons in area MT, in contrast, exhibited weak correlations between their response fluctuations and choices, and had firing rate patterns consistent with their sensory role in motion encoding. The behavioural impact of pharmacological inactivation of each area was inversely related to their degree of decision-related activity: while inactivation of neurons in MT profoundly impaired psychophysical performance, inactivation in LIP had no measurable impact on decision-making performance, despite having silenced the very clusters that exhibited strong decision-related activity. Although LIP inactivation did not impair psychophysical behaviour, it did influence spatial selection and oculomotor metrics in a free-choice control task. The absence of an effect on perceptual decision making was stable over trials and sessions and was robust to changes in stimulus type and task geometry, arguing against several forms of compensation. Thus, decision-related signals in LIP do not appear to be critical for computing perceptual decisions, and may instead reflect secondary processes. Our findings highlight a dissociation between decision correlation and causation, showing that strong neuron-decision correlations do not necessarily offer direct access to the neural computations underlying decisions.


Natural speech reveals the semantic maps that tile human cerebral cortex

Alexander G. Huth, Wendy A. de Heer, Thomas L. Griffiths, Frédéric E. Theunissen, and Jack L. Gallant Nature 532, 453–458

The meaning of language is represented in regions of the cerebral cortex collectively known as the ‘semantic system’. However, little of the semantic system has been mapped comprehensively, and the semantic selectivity of most regions is unknown. Here we systematically map semantic selectivity across the cortex using voxel-wise modelling of functional MRI (fMRI) data collected while subjects listened to hours of narrative stories. We show that the semantic system is organized into intricate patterns that seem to be consistent across individuals. We then use a novel generative model to create a detailed semantic atlas. Our results suggest that most areas within the semantic system represent information about specific semantic domains, or groups of related concepts, and our atlas shows which domains are represented in each area. This study demonstrates that data-driven methods—commonplace in studies of human neuroanatomy and functional connectivity—provide a powerful and efficient means for mapping functional representations in the brain.


Rhythms of the Hippocampal Network

Laura Lee Colgin Nature Reviews Neuroscience 17(4): 239–249
The hippocampal local field potential (LFP) exhibits three major types of rhythms, theta, sharp wave-ripples and gamma. These rhythms are defined by their frequencies, have behavioral correlates in several species including rats and humans, and have been proposed to perform distinct functions in hippocampal memory processing. However, recent findings have challenged traditional views on these behavioral functions. Here I review our current understanding of the origins and mnemonic functions of hippocampal theta, sharp-wave ripples and gamma rhythms based on findings from rodent studies, and present an updated, synthesized view of their roles and interactions within the hippocampal network.

Nanoconnectomic upper bound on the variability of synaptic plasticity

Bartol TM, Bromer C, Kinney JP, Chirillo MA, Bourne JN, Harris KM, Sejnowski TJ eLife 4:e10778

Information in a computer is quantified by the number of bits that can be stored and recovered. An important question about the brain is how much information can be stored at a synapse through synaptic plasticity, which depends on the history of probabilistic synaptic activity. The strong correlation between size and efficacy of a synapse allowed us to estimate the variability of synaptic plasticity. In an EM reconstruction of hippocampal neuropil we found single axons making two or more synaptic contacts onto the same dendrites, having shared histories of presynaptic and postsynaptic activity. The spine heads and neck diameters, but not neck lengths, of these pairs were nearly identical in size. We found that there is a minimum of 26 distinguishable synaptic strengths, corresponding to storing 4.7 bits of information at each synapse. Because of stochastic variability of synaptic activation the observed precision requires averaging activity over several minutes.
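
For clarity, the bits-per-synapse figure quoted above follows from a standard identity relating the number of distinguishable levels to information content:

```latex
% Information implied by N distinguishable synaptic strength levels:
I = \log_2 N \quad\Longrightarrow\quad I = \log_2 26 \approx 4.7\ \text{bits per synapse}
```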


Optimal speed estimation in natural image movies predicts human performance

Burge, J and Geisler, WS Nature Communications 6:7900
Accurate perception of motion depends critically on accurate estimation of retinal motion speed. Here we first analyse natural image movies to determine the optimal space-time receptive fields (RFs) for encoding local motion speed in a particular direction, given the constraints of the early visual system. Next, from the RF responses to natural stimuli, we determine the neural computations that are optimal for combining and decoding the responses into estimates of speed. The computations show how selective, invariant speed-tuned units might be constructed by the nervous system. Then, in a psychophysical experiment using matched stimuli, we show that human performance is nearly optimal. Indeed, a single efficiency parameter accurately predicts the detailed shapes of a large set of human psychometric functions. We conclude that many properties of speed-selective neurons and human speed discrimination performance are predicted by the optimal computations, and that natural stimulus variation affects optimal and human observers almost identically.

Single-trial spike trains in parietal cortex reveal discrete steps during decision-making

Latimer KW, Yates JL, Meister MLR, Huk AC, and Pillow, JW Science Vol. 349, Issue 6244, pp. 184-187

Neurons in the macaque lateral intraparietal (LIP) area exhibit firing rates that appear to ramp upward or downward during decision-making. These ramps are commonly assumed to reflect the gradual accumulation of evidence toward a decision threshold. However, the ramping in trial-averaged responses could instead arise from instantaneous jumps at different times on different trials. We examined single-trial responses in LIP using statistical methods for fitting and comparing latent dynamical spike-train models. We compared models with latent spike rates governed by either continuous diffusion-to-bound dynamics or discrete “stepping” dynamics. Roughly three-quarters of the choice-selective neurons we recorded were better described by the stepping model. Moreover, the inferred steps carried more information about the animal’s choice than spike counts.
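
As a toy illustration of the two latent-rate hypotheses being compared (not the paper's model-fitting procedure), the sketch below simulates a diffusion-to-bound ramp and a discrete step, each driving Poisson spiking; all parameters are assumptions.

```python
# Toy simulation of the two latent firing-rate hypotheses: a diffusion-to-bound
# ramp versus a discrete step at a random time, each driving Poisson spiking.
import numpy as np

rng = np.random.default_rng(8)
dt, n_steps = 0.01, 100                           # 1 s trial at 10 ms resolution
t = np.arange(n_steps) * dt

def diffusion_rate(drift=15.0, noise=8.0, r0=20.0, bound=50.0):
    """Noisy ramp that saturates at a bound (rates in Hz)."""
    increments = drift * dt + noise * np.sqrt(dt) * rng.standard_normal(n_steps)
    return np.minimum(r0 + np.cumsum(increments), bound)

def stepping_rate(r_low=20.0, r_high=50.0):
    """Constant low rate that jumps to a high rate at a random time."""
    step_time = rng.uniform(0.2, 0.8)
    return np.where(t < step_time, r_low, r_high)

for name, rate in [("diffusion", diffusion_rate()), ("stepping", stepping_rate())]:
    spikes = rng.poisson(np.maximum(rate, 0.0) * dt)
    print(name, "total spikes:", spikes.sum())
```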


Subcircuit-specific neuromodulation in the prefrontal cortex

Nikolai Dembrow and Daniel Johnston Frontiers in Neural Circuits 8: 54 (2014)
During goal-directed behavior, the prefrontal cortex (PFC) exerts top-down control over numerous cortical and subcortical regions. PFC dysfunction has been linked to many disorders that involve deficits in cognitive performance, attention, motivation, and/or impulse control. A common theme among these disorders is that neuromodulation of the PFC is disrupted. Anatomically, the PFC is reciprocally connected with virtually all neuromodulatory centers. Recent studies of PFC neurons, both in vivo and ex vivo, have found that subpopulations of prefrontal projection neurons can be segregated into distinct subcircuits based on their long-range projection targets. These subpopulations differ in their connectivity, intrinsic properties, and responses to neuromodulators. In this review we outline the evidence for subcircuit-specific neuromodulation in the PFC, and describe some of the functional consequences of selective neuromodulation on behavioral states during goal-directed behavior.

Sensory stimulation shifts visual cortex from synchronous to asynchronous states

Andrew Y. Y. Tan, Yuzhi Chen, Benjamin Scholl, Eyal Seidemann, and Nicholas J. Priebe Nature 509, 226–229

In the mammalian cerebral cortex, neural responses are highly variable during spontaneous activity and sensory stimulation. To explain this variability, the cortex of alert animals has been proposed to be in an asynchronous high-conductance state in which irregular spiking arises from the convergence of large numbers of uncorrelated excitatory and inhibitory inputs onto individual neurons. Signatures of this state are that a neuron’s membrane potential (Vm) hovers just below spike threshold, and its aggregate synaptic input is nearly Gaussian, arising from many uncorrelated inputs. Alternatively, irregular spiking could arise from infrequent correlated input events that elicit large fluctuations in Vm. To distinguish between these hypotheses, we developed a technique to perform whole-cell Vm measurements from the cortex of behaving monkeys, focusing on primary visual cortex (V1) of monkeys performing a visual fixation task. Here we show that, contrary to the predictions of an asynchronous state, mean Vm during fixation was far from threshold (14mV) and spiking was triggered by occasional large spontaneous fluctuations. Distributions of Vm values were skewed beyond that expected for a range of Gaussian input, but were consistent with synaptic input arising from infrequent correlated events. Furthermore, spontaneous fluctuations in Vm were correlated with the surrounding network activity, as reflected in simultaneously recorded nearby local field potential. Visual stimulation, however, led to responses more consistent with an asynchronous state: mean Vm approached threshold, fluctuations became more Gaussian, and correlations between single neurons and the surrounding network were disrupted. These observations show that sensory drive can shift a common cortical circuitry from a synchronous to an asynchronous state.


Slow and fast γ rhythms coordinate different spatial coding modes in hippocampal place cells

Bieri KW, Bobbitt KN, Colgin LL Neuron 82(3): 670-681

Previous work has hinted that prospective and retrospective coding modes exist in hippocampus. Prospective coding is believed to reflect memory retrieval processes, whereas retrospective coding is thought to be important for memory encoding. Here, we show in rats that separate prospective and retrospective modes exist in hippocampal subfield CA1 and that slow and fast gamma rhythms differentially coordinate place cells during the two modes. Slow gamma power and phase locking of spikes increased during prospective coding; fast gamma power and phase locking increased during retrospective coding. Additionally, slow gamma spikes occurred earlier in place fields than fast gamma spikes, and cell ensembles retrieved upcoming positions during slow gamma and encoded past positions during fast gamma. These results imply that alternating slow and fast gamma states allow the hippocampus to switch between prospective and retrospective modes, possibly to prevent interference between memory retrieval and encoding.


A Transition to Sharp Timing in Stochastic Leaky Integrate-and-Fire Neurons Driven by Frozen Noisy Input

Thibaud Taillefumier and Marcelo Magnasco Neural Computation 26 (5), 819-859

The firing activity of intracellularly stimulated neurons in cortical slices has been demonstrated to be profoundly affected by the temporal structure of the injected current. This suggests that the timing features of the neural response may be controlled as much by its own biophysical characteristics as by how a neuron is wired within a circuit. Modeling studies have shown that the interplay between internal noise and the fluctuations of the driving input controls the reliability and the precision of neuronal spiking. In order to investigate this interplay, we focus on the stochastic leaky integrate-and-fire neuron and identify the Hölder exponent H of the integrated input as the key mathematical property dictating the regime of firing of a single-unit neuron. We have recently provided numerical evidence for the existence of a phase transition when H becomes less than the statistical Hölder exponent associated with internal gaussian white noise (H=1/2). Here we describe the theoretical and numerical framework devised for the study of a neuron that is periodically driven by frozen noisy inputs with exponent H>0. In doing so, we account for the existence of a transition between two regimes of firing when H=1/2, and we show that spiking times have a continuous density when the Hölder exponent satisfies H>1/2. The transition at H=1/2 formally separates rate codes, for which the neural firing probability varies smoothly, from temporal codes, for which the neuron fires at sharply defined times regardless of the intensity of internal noise.


A phase transition in the first passage of a Brownian process through a fluctuating boundary with implications for neural coding

Thibaud Taillefumier and Marcelo Magnasco PNAS 110 (16), E1438-1444

Finding the first time a fluctuating quantity reaches a given boundary is a deceptively simple-looking problem of vast practical importance in physics, biology, chemistry, neuroscience, economics, and industrial engineering. Problems in which the bound to be traversed is itself a fluctuating function of time include widely studied problems in neural coding, such as neuronal integrators with irregular inputs and internal noise. We show that the probability p(t) that a Gauss–Markov process will first exceed the boundary at time t suffers a phase transition as a function of the roughness of the boundary, as measured by its Hölder exponent H. The critical value occurs when the roughness of the boundary equals the roughness of the process, so for diffusive processes the critical value is Hc = 1/2. For smoother boundaries, H > 1/2, the probability density is a continuous function of time. For rougher boundaries, H < 1/2, the probability is concentrated on a Cantor-like set of zero measure: the probability density becomes divergent, almost everywhere either zero or infinity. The critical point Hc = 1/2 corresponds to a widely studied case in the theory of neural coding, in which the external input integrated by a model neuron is a white-noise process, as in the case of uncorrelated but precisely balanced excitatory and inhibitory inputs. We argue that this transition corresponds to a sharp boundary between rate codes, in which the neural firing probability varies smoothly, and temporal codes, in which the neuron fires at sharply defined times regardless of the intensity of internal noise.


Functional dissection of signal and noise in MT and LIP during decision-making

Yates JL, Park IM, Katz LN, Pillow JW, and Huk AC, to appear in Nature Neuroscience