Now showing 1 - 10 of 10
  • 2013 · Journal Article · Research Paper
    Sinz, Fabian; Bethge, Matthias (2013). "Temporal adaptation enhances efficient contrast gain control on natural images." PLoS Computational Biology 9(1): e1002889. DOI: 10.1371/journal.pcbi.1002889 · PMID: 23382664 · URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/84519
    Abstract: Divisive normalization in primary visual cortex has been linked to adaptation to natural image statistics in accordance with Barlow's redundancy reduction hypothesis. Using recent advances in natural image modeling, we show that the previously studied static model of divisive normalization is rather inefficient in reducing local contrast correlations, but that a simple temporal contrast adaptation mechanism of the half-saturation constant can substantially increase its efficiency. Our findings reveal the experimentally observed temporal dynamics of divisive normalization to be critical for redundancy reduction.
  • 2013-11 · Journal Article · Research Paper
    Sinz, Fabian H.; Bethge, Matthias (2013). "What is the limit of redundancy reduction with divisive normalization?" Neural Computation 25(11): 2809–2814. DOI: 10.1162/NECO_a_00505 · PMID: 23895047 · URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/84518
    Abstract: Divisive normalization has been proposed as a nonlinear redundancy reduction mechanism capturing contrast correlations. Its basic function is a radial rescaling of the population response. Because of the saturation of divisive normalization, however, it is impossible to achieve a fully independent representation. In this letter, we derive an analytical upper bound on the inevitable residual redundancy of any saturating radial rescaling mechanism.
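The "radial rescaling" that the two divisive-normalization papers above analyze can be sketched in a few lines. This is a generic textbook form, not the exact model from either paper; the half-saturation constant `sigma` and the pooling exponent `n` are illustrative assumptions:

```python
import numpy as np

def divisive_normalization(x, sigma=1.0, n=2.0):
    """Radially rescale a population response vector x.

    Every response is divided by a common factor that grows with the
    pooled activity of the population, so the output is a radially
    rescaled copy of the input. sigma plays the role of the
    half-saturation constant; n sets the pooling norm. Both values and
    the exact pooling rule are illustrative, not taken from the papers.
    """
    x = np.asarray(x, dtype=float)
    pooled = np.sum(np.abs(x) ** n) ** (1.0 / n)  # radial (L^n) norm of the population
    return x / (sigma + pooled)

# Saturation: the output norm ||x|| / (sigma + ||x||) is bounded by 1,
# which is why a fully independent representation is out of reach.
weak = divisive_normalization([1.0, 2.0])
strong = divisive_normalization([100.0, 200.0])
```

The bounded output norm is exactly the saturation property that the Neural Computation letter turns into an upper bound on achievable redundancy reduction.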
  • 2014 · Journal Article
    Froudarakis, Emmanouil; Berens, Philipp; Ecker, Alexander S.; Cotton, R. James; Sinz, Fabian H.; Yatsenko, Dimitri; Saggau, Peter; Bethge, Matthias; Tolias, Andreas S. (2014). "Population code in mouse V1 facilitates readout of natural scenes through increased sparseness." Nature Neuroscience 17(6): 851–857. DOI: 10.1038/nn.3707 · PMID: 24747577 · URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/63352
    Abstract: Neural codes are believed to have adapted to the statistical properties of the natural environment. However, the principles that govern the organization of ensemble activity in the visual cortex during natural visual input are unknown. We recorded populations of up to 500 neurons in the mouse primary visual cortex and characterized the structure of their activity, comparing responses to natural movies with those to control stimuli. We found that higher order correlations in natural scenes induced a sparser code, in which information is encoded by reliable activation of a smaller set of neurons and can be read out more easily. This computationally advantageous encoding for natural scenes was state-dependent and apparent only in anesthetized and active awake animals, but not during quiet wakefulness. Our results argue for a functional benefit of sparsification that could be a general principle governing the structure of the population activity throughout cortical microcircuits.
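The "sparser code" in the entry above is commonly quantified with the Treves–Rolls sparseness statistic. The sketch below shows that standard measure; it is not necessarily the exact statistic used in the paper:

```python
import numpy as np

def treves_rolls_sparseness(r):
    """Treves-Rolls population sparseness a = (mean r)^2 / mean(r^2)
    for a vector of non-negative firing rates. a approaches 1/N when a
    single neuron carries all the activity (sparse) and equals 1 when
    all N neurons fire equally (dense). A standard measure of the kind
    discussed in the paper, not necessarily the one used there."""
    r = np.asarray(r, dtype=float)
    return r.mean() ** 2 / np.mean(r ** 2)

dense = np.ones(10)           # every neuron equally active
sparse = np.eye(10)[3] * 5.0  # one active neuron out of ten
```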
  • 2018 · Preprint
    Ecker, Alexander S.; Sinz, Fabian H.; Froudarakis, Emmanouil; Fahey, Paul G.; Cadena, Santiago A.; Walker, Edgar Y.; Cobos, Erick; Reimer, Jacob; Tolias, Andreas S.; Bethge, Matthias (2018). "A rotation-equivariant convolutional neural network model of primary visual cortex." arXiv: 1809.10504v1 · URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/63364
    Abstract: Classical models describe primary visual cortex (V1) as a filter bank of orientation-selective linear-nonlinear (LN) or energy models, but these models fail to predict neural responses to natural stimuli accurately. Recent work shows that models based on convolutional neural networks (CNNs) lead to much more accurate predictions, but it remains unclear which features are extracted by V1 neurons beyond orientation selectivity and phase invariance. Here we work towards systematically studying V1 computations by categorizing neurons into groups that perform similar computations. We present a framework to identify common features independent of individual neurons' orientation selectivity by using a rotation-equivariant convolutional neural network, which automatically extracts every feature at multiple different orientations. We fit this model to responses of a population of 6000 neurons to natural images recorded in mouse primary visual cortex using two-photon imaging. We show that our rotation-equivariant network not only outperforms a regular CNN with the same number of feature maps, but also reveals a number of common features shared by many V1 neurons, which deviate from the typical textbook idea of V1 as a bank of Gabor filters. Our findings are a first step towards a powerful new tool to study the nonlinear computations in V1.
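The core idea behind the rotation-equivariant network above, extracting every feature at multiple orientations, can be illustrated by applying one linear filter at several rotated copies of its kernel. This NumPy/SciPy sketch is only an illustration of the weight-sharing principle, not the paper's network:

```python
import numpy as np
from scipy.ndimage import convolve, rotate

def oriented_responses(image, kernel, n_orientations=8):
    """Apply one linear filter at several orientations by rotating the
    kernel, the weight-sharing idea behind a rotation-equivariant CNN.
    The number of orientations and the interpolation order are
    illustrative choices."""
    responses = []
    for k in range(n_orientations):
        angle = 180.0 * k / n_orientations
        rk = rotate(kernel, angle, reshape=False, order=1)  # rotated filter copy
        responses.append(convolve(image, rk, mode="constant"))
    return np.stack(responses)  # shape (n_orientations, H, W)

# a simple edge filter applied to a bright horizontal bar
edge = np.outer([1.0, 0.0, -1.0], np.ones(3))
image = np.pad(np.ones((4, 8)), 4)
maps = oriented_responses(image, edge, n_orientations=4)
```

In the actual model the rotated copies share learned weights inside the network, so every learned feature is automatically available at all orientations.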
  • 2010-10-28 · Journal Article · Research Paper
    Hosseini, Reshad; Sinz, Fabian; Bethge, Matthias (2010). "Lower bounds on the redundancy of natural images." Vision Research 50(22): 2213–2222. DOI: 10.1016/j.visres.2010.07.025 · PMID: 20705084 · URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/84520
    Abstract: The light intensities of natural images exhibit a high degree of redundancy. Knowing the exact amount of their statistical dependencies is important for biological vision as well as compression and coding applications, but estimating the total amount of redundancy, the multi-information, is intrinsically hard. The common approach is to estimate the multi-information for patches of increasing sizes and divide by the number of pixels. Here, we show that the limiting value of this sequence, the multi-information rate, can be better estimated by using another limiting process based on measuring the mutual information between a pixel and a causal neighborhood of increasing size around it. Although in principle this method has been known for decades, its superiority for estimating the multi-information rate of natural images has not been fully exploited yet. Either method provides a lower bound on the multi-information rate, but the mutual-information-based sequence converges much faster to the multi-information rate than the conventional method does. Using this fact, we provide improved estimates of the multi-information rate of natural images and a better understanding of its underlying spatial structure.
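The quantity at the heart of the entry above, the mutual information between a pixel and a causal neighborhood, has a simple closed form under a joint Gaussian model: I = ½(log det C_n + log det C_p − log det C_joint). The sketch below computes only this second-order (Gaussian) part of the dependence; the estimators in the paper are more general:

```python
import numpy as np

def gaussian_mi(neighbors, pixel):
    """Gaussian estimate (in nats) of I(neighborhood; pixel) from
    samples: neighbors is (n_samples, d), pixel is (n_samples,).
    Captures only second-order dependence; a simplified stand-in for
    the estimators used in the paper."""
    Z = np.column_stack([neighbors, pixel])
    C = np.cov(Z, rowvar=False)
    d = neighbors.shape[1]
    _, ld_joint = np.linalg.slogdet(C)
    _, ld_n = np.linalg.slogdet(C[:d, :d])
    _, ld_p = np.linalg.slogdet(C[d:, d:])
    return 0.5 * (ld_n + ld_p - ld_joint)

rng = np.random.default_rng(0)
nbrs = rng.normal(size=(20000, 3))
dependent = nbrs.sum(axis=1) + 0.1 * rng.normal(size=20000)  # predictable "pixel"
independent = rng.normal(size=20000)                          # unrelated "pixel"
```

For natural images the neighborhood would be the causally preceding pixels in scan order, and the estimate grows toward the multi-information rate as the neighborhood size increases.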
  • 2010 · Journal Article · Research Paper
    Theis, Lucas; Sinz, Fabian; Gerwinn, Sebastian; Bethge, Matthias (2010). "Likelihood Estimation in Deep Belief Networks." Frontiers in Computational Neuroscience 4. DOI: 10.3389/conf.fncom.2010.51.00116 · URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/84525
  • 2009 · Journal Article · Research Paper
    Sinz, Fabian; Gerwinn, Sebastian; Bethge, Matthias (2009). "Characterization of the p-generalized normal distribution." Journal of Multivariate Analysis 100(5): 817–820. DOI: 10.1016/j.jmva.2008.07.006 · URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/84521
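The p-generalized normal family in the entry above is available in SciPy as `gennorm`, in the standard parameterization f(x) = p / (2 Γ(1/p)) · exp(−|x|^p). In this parameterization p = 1 recovers the Laplace distribution and p = 2 a Gaussian with scale 1/√2, which is the sense in which the family interpolates between the two (these are SciPy's conventions, not necessarily the paper's):

```python
import numpy as np
from scipy.stats import gennorm, laplace, norm

# Evaluate the p-generalized normal density at its two classic special cases.
x = np.linspace(-3.0, 3.0, 13)
pdf_p1 = gennorm.pdf(x, 1)  # p = 1: Laplace
pdf_p2 = gennorm.pdf(x, 2)  # p = 2: Gaussian with scale 1/sqrt(2)
```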
  • 2019 · Conference Paper
    Cadena, Santiago A.; Sinz, Fabian H.; Muhammad, Taliah; Froudarakis, Emmanouil; Cobos, Erick; Walker, Edgar Y.; Reimer, Jake; Bethge, Matthias; Tolias, Andreas S.; Ecker, Alexander S. (2019). "How well do deep neural networks trained on object recognition characterize the mouse visual system?" 33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada, December 8–14, 2019. URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/63515
    Abstract: Recent work on modeling neural responses in the primate visual system has benefited from deep neural networks trained on large-scale object recognition, and found a hierarchical correspondence between layers of the artificial neural network and brain areas along the ventral visual stream. However, we neither know whether such task-optimized networks enable equally good models of the rodent visual system, nor whether a similar hierarchical correspondence exists. Here, we address these questions in the mouse visual system by extracting features at several layers of a convolutional neural network (CNN) trained on ImageNet to predict the responses of thousands of neurons in four visual areas (V1, LM, AL, RL) to natural images. We found that the CNN features outperform classical subunit energy models, but found no evidence for an ordering of the areas we recorded via a correspondence to the hierarchy of CNN layers. Moreover, the same CNN but with random weights provided an equivalently useful feature space for predicting neural responses. Our results suggest that object recognition as a high-level task does not provide more discriminative features to characterize the mouse visual system than a random network. Unlike in the primate, training on ethologically relevant visually guided behaviors, beyond static object recognition, may be needed to unveil the functional organization of the mouse visual cortex.
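The modeling pipeline the entry above describes, CNN features plus a linear readout fit to neural responses, can be sketched end to end with random filters, which the paper found to work about as well as trained ones. Everything here (filter count, kernel size, ReLU + average pooling, the simulated responses) is an illustrative assumption, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_conv_features(images, n_filters=16, ksize=5):
    """Feature space from one layer of *random* convolution filters:
    valid convolution + ReLU + global average pooling. Illustrates the
    paper's point that even random filters define a usable basis for
    predicting responses; all sizes are illustrative."""
    filters = rng.normal(size=(n_filters, ksize * ksize))
    feats = np.empty((len(images), n_filters))
    for i, img in enumerate(images):
        # gather all ksize x ksize patches (im2col) and filter them
        patches = np.lib.stride_tricks.sliding_window_view(img, (ksize, ksize))
        patches = patches.reshape(-1, ksize * ksize)
        resp = patches @ filters.T
        feats[i] = np.maximum(resp, 0.0).mean(axis=0)  # ReLU + average pool
    return feats

def ridge(F, y, lam=1.0):
    """Closed-form ridge regression, a common linear readout for
    neural-response prediction."""
    return np.linalg.solve(F.T @ F + lam * np.eye(F.shape[1]), F.T @ y)

# simulated experiment: responses that are a noisy linear readout of the features
images = rng.normal(size=(200, 16, 16))
F = random_conv_features(images)
y = F @ rng.normal(size=F.shape[1]) + 0.05 * rng.normal(size=len(F))
w = ridge(F, y)
```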
  • 2009-04 · Journal Article · Research Paper
    Eichhorn, Jan; Sinz, Fabian; Bethge, Matthias (2009). "Natural image coding in V1: how much use is orientation selectivity?" PLoS Computational Biology 5(4): e1000336. DOI: 10.1371/journal.pcbi.1000336 · PMID: 19343216 · arXiv: 0810.2872v2 · URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/84522
    Abstract: Orientation selectivity is the most striking feature of simple cell coding in V1 that has been shown to emerge from the reduction of higher-order correlations in natural images in a large variety of statistical image models. The most parsimonious one among these models is linear Independent Component Analysis (ICA), whereas second-order decorrelation transformations such as Principal Component Analysis (PCA) do not yield oriented filters. Because of this finding, it has been suggested that the emergence of orientation selectivity may be explained by higher-order redundancy reduction. To assess the tenability of this hypothesis, it is an important empirical question how much more redundancy can be removed with ICA in comparison to PCA or other second-order decorrelation methods. Although some previous studies have concluded that the amount of higher-order correlation in natural images is generally insignificant, other studies reported an extra gain for ICA of more than 100%. A consistent conclusion about the role of higher-order correlations in natural images can be reached only by the development of reliable quantitative evaluation methods. Here, we present a very careful and comprehensive analysis using three evaluation criteria related to redundancy reduction: in addition to the multi-information and the average log-loss, we compute complete rate-distortion curves for ICA in comparison with PCA. Without exception, we find that the advantage of the ICA filters is small. At the same time, we show that a simple spherically symmetric distribution with only two parameters can fit the data significantly better than the probabilistic model underlying ICA. This finding suggests that, although the amount of higher-order correlation in natural images can in fact be significant, the feature of orientation selectivity does not yield a large contribution to redundancy reduction within the linear filter bank models of V1 simple cells.
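A toy version of the ICA-vs-PCA comparison in the entry above can be run on linearly mixed sparse sources, scoring each transform by the average log-loss under a factorial Laplace model. The data, the Laplace model, and the log-loss formula are illustrative stand-ins for the natural-image evaluation in the paper (and `whiten="unit-variance"` assumes scikit-learn ≥ 1.1):

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
S = rng.laplace(size=(5000, 4))     # sparse sources
X = S @ rng.normal(size=(4, 4)).T   # observed linear mixtures

def avg_logloss_laplace(Z):
    """Mean negative log-likelihood per dimension under an independent
    Laplace model with per-column scale b = mean |z| (maximum
    likelihood for a zero-mean Laplace)."""
    b = np.mean(np.abs(Z), axis=0)
    return np.mean(np.log(2.0 * b) + np.abs(Z) / b)

# PCA whitening (second-order only) vs. ICA (removes higher-order
# dependence too); for sparse mixed sources ICA recovers the Laplace
# sources, so the factorial Laplace model fits its output better.
Z_pca = PCA(whiten=True).fit_transform(X)
Z_ica = FastICA(whiten="unit-variance", random_state=0).fit_transform(X)
```

On natural images, as the abstract notes, the corresponding gap turns out to be small, which is the paper's point.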
  • 2019 · Journal Article · Research Paper
    Sinz, Fabian H.; Pitkow, Xaq; Reimer, Jacob; Bethge, Matthias; Tolias, Andreas S. (2019). "Engineering a Less Artificial Intelligence." Neuron 103(6): 967–979. DOI: 10.1016/j.neuron.2019.08.034 · PMID: 31557461 · URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/84515
    Abstract: Despite enormous progress in machine learning, artificial neural networks still lag behind brains in their ability to generalize to new situations. Given identical training data, differences in generalization are caused by many defining features of a learning algorithm, such as network architecture and learning rule. Their joint effect, called "inductive bias," determines how well any learning algorithm, or brain, generalizes: robust generalization needs good inductive biases. Artificial networks use rather nonspecific biases and often latch onto patterns that are only informative about the statistics of the training data but may not generalize to different scenarios. Brains, on the other hand, generalize across comparatively drastic changes in the sensory input all the time. We highlight some shortcomings of state-of-the-art learning algorithms compared to biological brains and discuss several ideas about how neuroscience can guide the quest for better inductive biases by providing useful constraints on representations and network architecture.