Sinz, Fabian H.
Preferred name: Sinz, Fabian H.
Official name: Sinz, Fabian H.
Alternative names: Sinz, Fabian; Sinz, F. H.; Sinz, F.
Publications (showing 1–10 of 13)
Journal Article (2019)
Walker, Edgar Y.; Sinz, Fabian H.; Cobos, Erick; Muhammad, Taliah; Froudarakis, Emmanouil; Fahey, Paul G.; Ecker, Alexander S.; Reimer, Jacob; Pitkow, Xaq; Tolias, Andreas S. Inception loops discover what excites neurons most using deep predictive models. Nature Neuroscience 22(12): 2060–2065. DOI: 10.1038/s41593-019-0517-x. PMID: 31686023. URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/63343
Abstract: Finding sensory stimuli that drive neurons optimally is central to understanding information processing in the brain. However, optimizing sensory input is difficult due to the predominantly nonlinear nature of sensory processing and high dimensionality of the input. We developed 'inception loops', a closed-loop experimental paradigm combining in vivo recordings from thousands of neurons with in silico nonlinear response modeling. Our end-to-end trained, deep-learning-based model predicted thousands of neuronal responses to arbitrary, new natural input with high accuracy and was used to synthesize optimal stimuli (most exciting inputs, MEIs). For mouse primary visual cortex (V1), MEIs exhibited complex spatial features that occurred frequently in natural scenes but deviated strikingly from the common notion that Gabor-like stimuli are optimal for V1. When presented back to the same neurons in vivo, MEIs drove responses significantly better than control stimuli. Inception loops represent a widely applicable technique for dissecting the neural mechanisms of sensation.

Journal Article (2015)
Jiang, Xiaolong; Shen, Shan; Cadwell, Cathryn R.; Berens, Philipp; Sinz, Fabian; Ecker, Alexander S.; Patel, Saumil; Tolias, Andreas S. Principles of connectivity among morphologically defined cell types in adult neocortex. Science 350(6264): aac9462. DOI: 10.1126/science.aac9462. PMID: 26612957. URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/63434
Abstract: Since the work of Ramón y Cajal in the late 19th and early 20th centuries, neuroscientists have speculated that a complete understanding of neuronal cell types and their connections is key to explaining complex brain functions. However, a complete census of the constituent cell types and their wiring diagram in mature neocortex remains elusive. By combining octuple whole-cell recordings with an optimized avidin-biotin-peroxidase staining technique, we carried out a morphological and electrophysiological census of neuronal types in layers 1, 2/3, and 5 of mature neocortex and mapped the connectivity between more than 11,000 pairs of identified neurons. We categorized 15 types of interneurons, and each exhibited a characteristic pattern of connectivity with other interneuron types and pyramidal cells. The essential connectivity structure of the neocortical microcircuit could be captured by only a few connectivity motifs.

Preprint (2019)
Fahey, Paul G.; Muhammad, Taliah; Smith, Cameron; Froudarakis, Emmanouil; Cobos, Erick; Fu, Jiakun; Walker, Edgar Y.; Yatsenko, Dimitri; Sinz, Fabian H.; Reimer, Jacob; Tolias, Andreas S. A global map of orientation tuning in mouse visual cortex. DOI: 10.1101/745323. URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/84523

Journal Article (2014)
Froudarakis, Emmanouil; Berens, Philipp; Ecker, Alexander S.; Cotton, R. James; Sinz, Fabian H.; Yatsenko, Dimitri; Saggau, Peter; Bethge, Matthias; Tolias, Andreas S. Population code in mouse V1 facilitates readout of natural scenes through increased sparseness. Nature Neuroscience 17(6): 851–857. DOI: 10.1038/nn.3707. PMID: 24747577. URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/63352
Abstract: Neural codes are believed to have adapted to the statistical properties of the natural environment. However, the principles that govern the organization of ensemble activity in the visual cortex during natural visual input are unknown. We recorded populations of up to 500 neurons in the mouse primary visual cortex and characterized the structure of their activity, comparing responses to natural movies with those to control stimuli. We found that higher order correlations in natural scenes induced a sparser code, in which information is encoded by reliable activation of a smaller set of neurons and can be read out more easily. This computationally advantageous encoding for natural scenes was state-dependent and apparent only in anesthetized and active awake animals, but not during quiet wakefulness. Our results argue for a functional benefit of sparsification that could be a general principle governing the structure of the population activity throughout cortical microcircuits.

Preprint (2015)
Yatsenko, Dimitri; Reimer, Jacob; Ecker, Alexander S.; Walker, Edgar Y.; Sinz, Fabian; Berens, Philipp; Hoenselaar, Andreas; Cotton, Ronald James; Siapas, Athanassios S.; Tolias, Andreas S. DataJoint: managing big scientific data using MATLAB or Python. DOI: 10.1101/031658. URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/63433
Abstract: The rise of big data in modern research poses serious challenges for data management: Large and intricate datasets from diverse instrumentation must be precisely aligned, annotated, and processed in a variety of ways to extract new insights. While high levels of data integrity are expected, research teams have diverse backgrounds, are geographically dispersed, and rarely possess a primary interest in data science. Here we describe DataJoint, an open-source toolbox designed for manipulating and processing scientific data under the relational data model. Designed for scientists who need a flexible and expressive database language with few basic concepts and operations, DataJoint facilitates multiuser access, efficient queries, and distributed computing. With implementations in both MATLAB and Python, DataJoint is not limited to particular file formats, acquisition systems, or data modalities and can be quickly adapted to new experimental designs. DataJoint and related resources are available at http://datajoint.github.com.

Preprint (2018)
Ecker, Alexander S.; Sinz, Fabian H.; Froudarakis, Emmanouil; Fahey, Paul G.; Cadena, Santiago A.; Walker, Edgar Y.; Cobos, Erick; Reimer, Jacob; Tolias, Andreas S.; Bethge, Matthias. A rotation-equivariant convolutional neural network model of primary visual cortex. arXiv: 1809.10504v1. URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/63364
Abstract: Classical models describe primary visual cortex (V1) as a filter bank of orientation-selective linear-nonlinear (LN) or energy models, but these models fail to predict neural responses to natural stimuli accurately. Recent work shows that models based on convolutional neural networks (CNNs) lead to much more accurate predictions, but it remains unclear which features are extracted by V1 neurons beyond orientation selectivity and phase invariance. Here we work towards systematically studying V1 computations by categorizing neurons into groups that perform similar computations. We present a framework to identify common features independent of individual neurons' orientation selectivity by using a rotation-equivariant convolutional neural network, which automatically extracts every feature at multiple different orientations. We fit this model to responses of a population of 6000 neurons to natural images recorded in mouse primary visual cortex using two-photon imaging. We show that our rotation-equivariant network not only outperforms a regular CNN with the same number of feature maps, but also reveals a number of common features shared by many V1 neurons, which deviate from the typical textbook idea of V1 as a bank of Gabor filters. Our findings are a first step towards a powerful new tool to study the nonlinear computations in V1.

Journal Article (2022)
Franke, Katrin; Willeke, Konstantin F.; Ponder, Kayla; Galdamez, Mario; Zhou, Na; Muhammad, Taliah; Patel, Saumil; Froudarakis, Emmanouil; Reimer, Jacob; Sinz, Fabian H.; Tolias, Andreas S. State-dependent pupil dilation rapidly shifts visual feature selectivity. Nature. DOI: 10.1038/s41586-022-05270-3. URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/114458

Preprint (2018)
Walker, Edgar Y.; Sinz, Fabian H.; Froudarakis, Emmanouil; Fahey, Paul G.; Muhammad, Taliah; Ecker, Alexander S.; Cobos, Erick; Reimer, Jacob; Pitkow, Xaq; Tolias, Andreas S. Inception in visual cortex: in vivo-silico loops reveal most exciting images. DOI: 10.1101/506956. URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/63345
Abstract: Much of our knowledge about sensory processing in the brain is based on quasi-linear models and the stimuli that optimally drive them. However, sensory information processing is nonlinear, even in primary sensory areas, and optimizing sensory input is difficult due to the high-dimensional input space. We developed inception loops, a closed-loop experimental paradigm that combines in vivo recordings with in silico nonlinear response modeling to identify the Most Exciting Images (MEIs) for neurons in mouse V1. When presented back to the brain, MEIs indeed drove their target cells significantly better than the best stimuli identified by linear models. The MEIs exhibited complex spatial features that deviated from the textbook ideal of V1 as a bank of Gabor filters. Inception loops represent a widely applicable new approach to dissect the neural mechanisms of sensation.

Conference Paper (2019)
Cadena, Santiago A.; Sinz, Fabian H.; Muhammad, Taliah; Froudarakis, Emmanouil; Cobos, Erick; Walker, Edgar Y.; Reimer, Jake; Bethge, Matthias; Tolias, Andreas S.; Ecker, Alexander S. How well do deep neural networks trained on object recognition characterize the mouse visual system? 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada, December 8–14, 2019. URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/63515
Abstract: Recent work on modeling neural responses in the primate visual system has benefited from deep neural networks trained on large-scale object recognition, and found a hierarchical correspondence between layers of the artificial neural network and brain areas along the ventral visual stream. However, we neither know whether such task-optimized networks enable equally good models of the rodent visual system, nor if a similar hierarchical correspondence exists. Here, we address these questions in the mouse visual system by extracting features at several layers of a convolutional neural network (CNN) trained on ImageNet to predict the responses of thousands of neurons in four visual areas (V1, LM, AL, RL) to natural images. We found that the CNN features outperform classical subunit energy models, but found no evidence for an order of the areas we recorded via a correspondence to the hierarchy of CNN layers. Moreover, the same CNN but with random weights provided an equivalently useful feature space for predicting neural responses. Our results suggest that object recognition as a high-level task does not provide more discriminative features to characterize the mouse visual system than a random network. Unlike in the primate, training on ethologically relevant visually guided behaviors – beyond static object recognition – may be needed to unveil the functional organization of the mouse visual cortex.

Preprint (2018)
Sinz, Fabian H.; Ecker, Alexander S.; Fahey, Paul G.; Walker, Edgar Y.; Cobos, Erick; Froudarakis, Emmanouil; Yatsenko, Dimitri; Pitkow, Xaq; Reimer, Jacob; Tolias, Andreas S. Stimulus domain transfer in recurrent models for large scale cortical population prediction on video. DOI: 10.1101/452672. URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/63347
Abstract: To better understand the representations in visual cortex, we need to generate better predictions of neural activity in awake animals presented with their ecological input: natural video. Despite recent advances in models for static images, models for predicting responses to natural video are scarce and standard linear-nonlinear models perform poorly. We developed a new deep recurrent network architecture that predicts inferred spiking activity of thousands of mouse V1 neurons simultaneously recorded with two-photon microscopy, while accounting for confounding factors such as the animal's gaze position and brain state changes related to running state and pupil dilation. Powerful system identification models provide an opportunity to gain insight into cortical functions through in silico experiments that can subsequently be tested in the brain. However, in many cases this approach requires that the model is able to generalize to stimulus statistics that it was not trained on, such as band-limited noise and other parameterized stimuli. We investigated these domain transfer properties in our model and find that our model trained on natural images is able to correctly predict the orientation tuning of neurons in responses to artificial noise stimuli. Finally, we show that we can fully generalize from movies to noise and maintain high predictive performance on both stimulus domains by fine-tuning only the final layer's weights on a network otherwise trained on natural movies. The converse, however, is not true.