Using discrete recurrent neural networks to learn structure in recordings of large ensembles of nervous activity

Christopher Hillar (September 15, 2016)



Discrete recurrent neural networks (DRNNs) were born in 1943 with the publication of the now seminal work by McCulloch and Pitts: "A logical calculus of the ideas immanent in nervous activity". Although the concepts led to major applications (digital circuit design, finite automata, computational theories of mind, Hopfield networks), experimental neuroscience has yet to benefit significantly. Here, we describe a novel, scalable use of DRNNs for the unsupervised discovery of structure in high-dimensional recordings of nervous tissue. We present two detailed case studies using this technology: (1) clustering of recurring spatiotemporal patterns in spike trains, and (2) denoising microscopy recordings of neural activity in slices. We also explain how to perform these analyses on standard hardware using our open-source Python package HDNET, which provides efficient DRNN tools for experimental neuroscientists. (Joint work with F. Effenberger.)
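To give a flavor of the denoising idea, here is a minimal, self-contained Hopfield-network sketch in plain NumPy (an illustrative toy, not the HDNET API): a binary pattern is stored with a Hebbian outer-product rule, a corrupted copy stands in for a noisy recording, and iterating the threshold dynamics recovers the stored pattern.

```python
import numpy as np

# Minimal Hopfield-style DRNN sketch (illustrative only; not the HDNET API).
# States and patterns are vectors with entries in {-1, +1}.

rng = np.random.default_rng(0)

def train_hebbian(patterns):
    """Hebbian outer-product rule over rows of `patterns` (shape: m x n)."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0)  # no self-connections
    return W

def recall(W, state, n_iters=20):
    """Synchronous threshold dynamics, iterated until a fixed point."""
    for _ in range(n_iters):
        new = np.sign(W @ state)
        new[new == 0] = 1
        if np.array_equal(new, state):
            break
        state = new
    return state

n = 100
pattern = rng.choice([-1, 1], size=n)
W = train_hebbian(pattern[None, :])

# Corrupt 10% of the entries (a stand-in for recording noise), then denoise.
noisy = pattern.copy()
flip = rng.choice(n, size=10, replace=False)
noisy[flip] *= -1
recovered = recall(W, noisy)
print(np.array_equal(recovered, pattern))  # the stored pattern is recovered
```

With a single stored pattern and 10% corruption, the dynamics converge to the stored pattern in one step; the same energy-minimization principle underlies the denoising and clustering applications described above, at much larger scale.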