Mathematical Neuroscience

Course content and aim

This course is intended for mathematicians interested in neuroscience and for mathematically inclined computational neuroscientists. The emphasis will be primarily on the analytical treatment of neuroscience-inspired models and algorithms. The aim of the course is to equip students with a solid technical and conceptual background to tackle research questions in mathematical neuroscience.

The course will be structured in three blocks:

Neural dynamics. Neural computations emerge from myriad neuronal interactions occurring in intricate networks that have evolved over eons. Because of this obscuring complexity, we can only hope to uncover principles of neural computation through the lens of mathematical modeling and analysis. The main theoretical challenge is to relate structure and activity quantitatively in a tractable way, i.e., to uncover hierarchies of low-dimensional representations for the activity of high-dimensional neural systems. In this block, we will present attempts made in that direction while introducing the mathematical formalisms associated with classical models of neural dynamics.
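As a small taste of the models treated in this block, the sketch below (illustrative only, not course material) integrates the FitzHugh–Nagumo equations, a classical two-dimensional reduction of the Hodgkin–Huxley model; the parameter values are standard textbook choices.

    import numpy as np

    def fitzhugh_nagumo(T=200.0, dt=0.01, I=0.5, a=0.7, b=0.8, tau=12.5):
        """Forward-Euler integration of the FitzHugh-Nagumo equations:
        dv/dt = v - v^3/3 - w + I,   dw/dt = (v + a - b*w) / tau."""
        n = int(T / dt)
        v, w = np.empty(n), np.empty(n)
        v[0], w[0] = -1.0, 1.0                # arbitrary initial condition
        for k in range(n - 1):
            v[k + 1] = v[k] + dt * (v[k] - v[k] ** 3 / 3 - w[k] + I)
            w[k + 1] = w[k] + dt * (v[k] + a - b * w[k]) / tau
        return v, w

    v, w = fitzhugh_nagumo()
    print("voltage range:", v.min(), v.max())  # sustained spiking at this drive I

For this level of input drive the fixed point is unstable and the trajectory settles onto a limit cycle: precisely the kind of low-dimensional structure the block aims to extract from high-dimensional neural models.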

Information theory. To elucidate brain structure conceptually, it is tempting to look for “design principles” that guide the development and evolution of neural systems. One such putative principle is the “efficient coding hypothesis,” which states that sensory systems have evolved to transmit information about the natural world optimally, given the limitations of their biophysical components and constraints on energy use. In this block, we will introduce a theoretical framework suitable for investigating the efficient coding hypothesis from a mathematical standpoint.
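As a toy illustration of efficient coding, in the spirit of the classical histogram-equalization argument (the stimulus distribution below is an illustrative assumption, not course data): matching a neuron's input-output nonlinearity to the cumulative distribution of its stimuli yields a uniform response, which maximizes response entropy under a bounded output range.

    import numpy as np

    rng = np.random.default_rng(0)

    # Stimulus contrasts drawn from a skewed "natural" distribution (illustrative).
    s = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)

    def discrete_entropy(x, bins=32):
        """Entropy in bits of x after quantization into equally spaced bins."""
        counts, _ = np.histogram(x, bins=bins)
        p = counts[counts > 0] / counts.sum()
        return -np.sum(p * np.log2(p))

    # Efficient-coding nonlinearity: the empirical CDF maps stimuli to a
    # near-uniform response on [0, 1).
    r_efficient = np.searchsorted(np.sort(s), s) / s.size
    r_linear = (s - s.min()) / (s.max() - s.min())  # naive linear encoding

    print(f"linear encoding: {discrete_entropy(r_linear):.2f} bits")
    print(f"CDF encoding:    {discrete_entropy(r_efficient):.2f} bits")
    print(f"upper bound:     {np.log2(32):.2f} bits")

The CDF-matched code approaches the entropy bound, while the linear code wastes capacity on rarely used response levels.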

Machine learning. Machine learning has enabled speech recognition, language translation, natural-object recognition, and self-driving cars. These achievements, which rival human performance, are produced by neural networks that mimic many structural features of the brain and learn to perform tasks via biologically inspired rules, such as reinforcement learning. However, the mathematical theory underlying these computational feats is still in its infancy. This block will present the mathematical theory supporting a few machine-learning methods in supervised learning, reinforcement learning, and unsupervised learning.
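As a foretaste of the supervised-learning part of this block, here is a minimal sketch of Rosenblatt's perceptron rule on synthetic, linearly separable data (the data-generating choices are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy data: labels determined by a hidden separating hyperplane.
    X = rng.normal(size=(200, 2))
    w_true = np.array([1.5, -1.0])
    y = np.sign(X @ w_true)

    # Perceptron rule: update only on misclassified examples.
    w = np.zeros(2)
    for _ in range(100):                  # epochs
        mistakes = 0
        for x_i, y_i in zip(X, y):
            if y_i * (w @ x_i) <= 0:      # misclassified (or on the boundary)
                w += y_i * x_i            # move the hyperplane toward x_i
                mistakes += 1
        if mistakes == 0:                 # converged: data fully separated
            break

    print("training errors:", int(np.sum(np.sign(X @ w) != y)))

For linearly separable data the loop is guaranteed to terminate, a classical result whose combinatorial side (Cover's function-counting theorem) appears on the schedule below.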


Time & Place

Tuesday/Thursday, 9:30 a.m.–11:00 a.m. @ RLM 10.176.

Office hours

Monday 2:00 p.m.–3:00 p.m. and Wednesday 12:00 p.m.–1:00 p.m. @ RLM 10.148.

Syllabus

M394C_Syllabus

Course Schedule

Date | Topics | Resources | Homework
01.22.2019 | Course introduction | Introduction
01.24.2019 | Hodgkin-Huxley model/Reduced models | Notes_HHSpike_HHAtypical_Spike, Threshold_Spike | Problem_Set1
01.29.2019 | Introduction to bifurcation theory | NotesBifurcation
01.31.2019 | Center manifold reduction | NotesCMTheorem
02.05.2019 | No class
02.07.2019 | Normal form theory | NotesNormalForm | Reading assigned in class
02.12.2019 | Bifurcations in neuroscience | HudspethMagnasco, KopelErmentrout
02.14.2019 | Intensity-based neural models | NotesIntensity, PillowPaninski
02.19.2019 | Integrate-and-fire neural models | NotesIntegrateAndFire
02.21.2019 | Thermodynamic mean-field limits | NotesTMF, Brunel
02.26.2019 | Replica mean-field limits | NotesReplica, Baccelli
02.28.2019 | Differential geometry | NotesDiffGeo
03.05.2019 | Information geometry 1 | NotesInfGeo1
03.07.2019 | Information geometry 2 | NotesInfGeo2, AmariBialek1
03.12.2019 | Mutual information | NotesMI, Bialek2
03.14.2019 | Rate-distortion theory | NotesMIOpt, Blahut
03.19.2019 | Spring Break
03.21.2019 | Spring Break
03.26.2019 | Information bottleneck | NotesIB, TishbyPereiraBialek, SchwartzZivTishby, Chechik
03.28.2019 | Variational Fisher information | NotesVar, Ganguli
04.02.2019 | Perceptron algorithm | NotesPerceptron, ProjectRefs
04.04.2019 | Linear separability and combinatorics | NotesLinSep, Cover
04.09.2019 | Reproducing-kernel Hilbert spaces | NotesRKHS, Aronszajn
04.11.2019 | Support-vector machines | NotesSVM, Vapnik
04.16.2019 | Markov decision processes | NotesMDP
04.18.2019 | Dynamic programming | NotesDP, Dijkstra
04.23.2019 | Q-learning algorithm | NotesQL, RobbinsMonro, Dayan
04.25.2019 | Reduction to linear Bellman equations | NotesLBE, Todorov
04.30.2019 | Autoencoder networks | KingmaWelling, Radford, Bengio
05.02.2019 | Generative adversarial networks
05.07.2019 | Projects
05.09.2019 | Projects