Ecker, Alexander S.
Preferred name
Ecker, Alexander S.
Official Name
Ecker, Alexander S.
Alternative Name
Ecker, A. S.
Ecker, Alexander
Ecker, A.
Main Affiliation
Email
ecker@cs.uni-goettingen.de
ORCID
Researcher ID
A-5184-2010
Publications: showing 1 - 7 of 7
2019 · Journal Article
Cadena, Santiago A.; Denfield, George H.; Walker, Edgar Y.; Gatys, Leon A.; Tolias, Andreas S.; Bethge, Matthias; Ecker, Alexander S.
"Deep convolutional models improve predictions of macaque V1 responses to natural images"
PLoS Computational Biology 15(4), e1006897 (2019)
DOI: 10.1371/journal.pcbi.1006897 · PMID: 31013278 · URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/63344
Abstract: Despite great efforts over several decades, our best models of primary visual cortex (V1) still predict spiking activity quite poorly when probed with natural stimuli, highlighting our limited understanding of the nonlinear computations in V1. Recently, two approaches based on deep learning have emerged for modeling these nonlinear computations: transfer learning from artificial neural networks trained on object recognition and data-driven convolutional neural network models trained end-to-end on large populations of neurons. Here, we test the ability of both approaches to predict spiking activity in response to natural images in V1 of awake monkeys. We found that the transfer learning approach performed similarly well to the data-driven approach and both outperformed classical linear-nonlinear and wavelet-based feature representations that build on existing theories of V1. Notably, transfer learning using a pre-trained feature space required substantially less experimental time to achieve the same performance. In conclusion, multi-layer convolutional neural networks (CNNs) set the new state of the art for predicting neural responses to natural images in primate V1 and deep features learned for object recognition are better explanations for V1 computation than all previous filter bank theories. This finding strengthens the necessity of V1 models that are multiple nonlinearities away from the image domain and it supports the idea of explaining early visual cortex based on high-level functional goals.
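Note on the modeling approach: the transfer-learning route described in the abstract above amounts to extracting features for each stimulus from a convolutional network pre-trained on object recognition and mapping them to spike counts with a regularized linear readout. The sketch below illustrates that idea only; the network (VGG-16 via torchvision), the layer index, and the ridge readout are illustrative assumptions, not the authors' published pipeline.

    # Illustrative sketch: predict neural responses from pre-trained CNN features.
    import torch
    import torchvision.models as models
    from sklearn.linear_model import Ridge

    # Pre-trained object-recognition network used as a fixed feature extractor (assumed: VGG-16).
    vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()

    def extract_features(images, layer_idx=16):
        """Run images (N, 3, 224, 224) through the network up to module index `layer_idx`."""
        with torch.no_grad():
            x = images
            for i, module in enumerate(vgg):
                x = module(x)
                if i == layer_idx:
                    break
        return x.flatten(start_dim=1).numpy()

    # Placeholder stimuli and responses; in practice these are images and recorded spike counts.
    images = torch.rand(64, 3, 224, 224)
    responses = torch.rand(64, 10).numpy()    # (N stimuli, n neurons)

    readout = Ridge(alpha=1.0)                # regularized linear readout, fit for all neurons jointly
    readout.fit(extract_features(images), responses)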
2019 · Conference Paper
Günthner, Max F.; Cadena, Santiago A.; Denfield, George H.; Walker, Edgar Y.; Gatys, Leon A.; Tolias, Andreas S.; Bethge, Matthias; Ecker, Alexander S.
"Learning Divisive Normalization in Primary Visual Cortex"
Conference on Cognitive Computational Neuroscience 2019, Berlin, Germany, 13-16 September 2019
DOI: 10.32470/CCN.2019.1211-0 · URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/63367
Abstract: Divisive normalization (DN) has been suggested as a canonical computation implemented throughout the neocortex. In primary visual cortex (V1), DN was found to be crucial to explain nonlinear response properties of neurons when presented with superpositions of simple stimuli such as gratings. Based on such studies, it is currently assumed that neuronal responses to stimuli restricted to the neuron's classical receptive field (RF) are normalized by a non-specific pool of nearby neurons with similar RF locations. However, it is currently unknown how DN operates in V1 when processing natural inputs. Here, we investigated DN in monkey V1 under stimulation with natural images with an end-to-end trainable model that learns the pool of normalizing neurons and the magnitude of their contribution directly from the data. Taking advantage of our model's direct interpretable view of V1 computation, we found that oriented features were normalized preferentially by features with similar orientation preference rather than non-specifically. Our model's accuracy was competitive with state-of-the-art black-box models, suggesting that rectification, DN, and a combination of subunits resulting from DN are sufficient to account for V1 responses to localized stimuli. Thus, our work significantly advances our understanding of V1 function.
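Note: the divisive normalization discussed above divides each unit's rectified response by a weighted pool of the other units' responses, with the pool weights learned from data in this work. The following is a minimal NumPy sketch of that computation only; the variable names, the semi-saturation constant, and the toy values are illustrative assumptions, not the authors' implementation.

    # Illustrative sketch of divisive normalization with a learned pool.
    import numpy as np

    def divisive_normalization(drive, pool_weights, sigma=0.1):
        """drive: (n_units,) filter responses to the stimulus.
        pool_weights: (n_units, n_units) non-negative weights; entry (i, j) is the
        contribution of unit j to the normalization pool of unit i.
        Returns r_i = drive_i / (sigma + sum_j w_ij * drive_j) after rectification."""
        drive = np.maximum(drive, 0.0)         # rectification
        pool = pool_weights @ drive            # weighted normalization pool
        return drive / (sigma + pool)

    # Toy example: 4 units whose pool weights favor similarly tuned units.
    drive = np.array([1.0, 0.8, 0.1, 0.05])
    w = np.full((4, 4), 0.1) + 0.4 * np.eye(4)
    print(divisive_normalization(drive, w))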
2018 · Preprint
Ecker, Alexander S.; Sinz, Fabian H.; Froudarakis, Emmanouil; Fahey, Paul G.; Cadena, Santiago A.; Walker, Edgar Y.; Cobos, Erick; Reimer, Jacob; Tolias, Andreas S.; Bethge, Matthias
"A rotation-equivariant convolutional neural network model of primary visual cortex"
arXiv: 1809.10504v1 (2018) · URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/63364
Abstract: Classical models describe primary visual cortex (V1) as a filter bank of orientation-selective linear-nonlinear (LN) or energy models, but these models fail to predict neural responses to natural stimuli accurately. Recent work shows that models based on convolutional neural networks (CNNs) lead to much more accurate predictions, but it remains unclear which features are extracted by V1 neurons beyond orientation selectivity and phase invariance. Here we work towards systematically studying V1 computations by categorizing neurons into groups that perform similar computations. We present a framework to identify common features independent of individual neurons' orientation selectivity by using a rotation-equivariant convolutional neural network, which automatically extracts every feature at multiple different orientations. We fit this model to responses of a population of 6000 neurons to natural images recorded in mouse primary visual cortex using two-photon imaging. We show that our rotation-equivariant network not only outperforms a regular CNN with the same number of feature maps, but also reveals a number of common features shared by many V1 neurons, which deviate from the typical textbook idea of V1 as a bank of Gabor filters. Our findings are a first step towards a powerful new tool to study the nonlinear computations in V1.
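Note: a rotation-equivariant convolution of the kind referenced above can be built by applying every learned filter at several rotated copies, so that each feature is computed at multiple orientations. The PyTorch sketch below shows this construction under stated assumptions (eight orientations, rotation of the kernels by image interpolation); it is not the authors' implementation.

    # Illustrative sketch: rotation-equivariant convolution via rotated filter copies.
    import torch
    import torch.nn.functional as F
    import torchvision.transforms.functional as TF

    def rotation_equivariant_conv(images, weight, n_orientations=8):
        """images: (N, C, H, W); weight: (n_features, C, k, k) learned filters.
        Convolves with every filter at n_orientations rotated versions and stacks
        the feature maps, so each feature is extracted at all orientations."""
        outputs = []
        for i in range(n_orientations):
            angle = 360.0 * i / n_orientations
            rotated = TF.rotate(weight, angle)              # rotate each k x k kernel
            outputs.append(F.conv2d(images, rotated, padding=weight.shape[-1] // 2))
        return torch.cat(outputs, dim=1)                    # (N, n_features * n_orientations, H, W)

    # Toy usage: 16 learned 13x13 filters over grayscale images at 8 orientations.
    x = torch.rand(2, 1, 64, 64)
    w = torch.randn(16, 1, 13, 13, requires_grad=True)
    print(rotation_equivariant_conv(x, w).shape)            # torch.Size([2, 128, 64, 64])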
2018 · Book Chapter
Cadena, Santiago A.; Weis, Marissa A.; Gatys, Leon A.; Bethge, Matthias; Ecker, Alexander S.
"Diverse Feature Visualizations Reveal Invariances in Early Layers of Deep Neural Networks"
In: Ferrari, V.; Hebert, M.; Sminchisescu, C.; Weiss, Y. (eds.), Computer Vision – ECCV 2018. Lecture Notes in Computer Science, vol. 11216, pp. 225-240. Springer, Cham (2018)
DOI: 10.1007/978-3-030-01258-8_14 · ISBN: 978-3-030-01257-1 · URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/63368
Abstract: Visualizing features in deep neural networks (DNNs) can help understanding their computations. Many previous studies aimed to visualize the selectivity of individual units by finding meaningful images that maximize their activation. However, comparably little attention has been paid to visualizing to what image transformations units in DNNs are invariant. Here we propose a method to discover invariances in the responses of hidden layer units of deep neural networks. Our approach is based on simultaneously searching for a batch of images that strongly activate a unit while at the same time being as distinct from each other as possible. We find that even early convolutional layers in VGG-19 exhibit various forms of response invariance: near-perfect phase invariance in some units and invariance to local diffeomorphic transformations in others. At the same time, we uncover representational differences with ResNet-50 in its corresponding layers. We conclude that invariance transformations are a major computational component learned by DNNs and we provide a systematic method to study them.
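Note: the visualization method summarized above optimizes a small batch of images so that each strongly activates a chosen unit while the images remain as distinct from one another as possible. The gradient-ascent sketch below illustrates that joint objective; the network layer, the diversity penalty (pairwise pixel correlation), and all hyperparameters are illustrative assumptions rather than the authors' settings.

    # Illustrative sketch: diverse feature visualization by joint activation/diversity ascent.
    import torch
    import torchvision.models as models

    net = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features[:9].eval()

    images = torch.rand(4, 3, 128, 128, requires_grad=True)    # batch of images being optimized
    optimizer = torch.optim.Adam([images], lr=0.05)
    unit = 7                                                   # channel index of the unit to visualize

    for step in range(200):
        optimizer.zero_grad()
        acts = net(images)[:, unit].mean(dim=(1, 2))           # mean activation of the unit per image
        sim = torch.corrcoef(images.flatten(start_dim=1)).triu(diagonal=1)
        loss = -acts.mean() + 0.5 * sim.abs().sum()            # activate the unit, keep images distinct
        loss.backward()
        optimizer.step()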
2020 · Preprint
Burg, Max F.; Cadena, Santiago A.; Denfield, George H.; Walker, Edgar Y.; Tolias, Andreas S.; Bethge, Matthias; Ecker, Alexander S.
"Learning Divisive Normalization in Primary Visual Cortex"
Preprint (2020) · DOI: 10.1101/767285 · URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/63360
Abstract: Deep convolutional neural networks (CNNs) have emerged as the state of the art for predicting neural activity in visual cortex. While such models outperform classical linear-nonlinear and wavelet-based representations, we currently do not know what computations they approximate. Here, we tested divisive normalization (DN) for its ability to predict spiking responses to natural images. We developed a model that learns the pool of normalizing neurons and the magnitude of their contribution end-to-end from data. In macaque primary visual cortex (V1), we found that our interpretable model outperformed linear-nonlinear and wavelet-based feature representations and almost closed the gap to high-performing black-box models. Surprisingly, within the classical receptive field, oriented features were normalized preferentially by features with similar orientations rather than non-specifically as currently assumed. Our work provides a new, quantitatively interpretable and high-performing model of V1 applicable to arbitrary images, refining our view on gain control within the classical receptive field.
2019 · Conference Paper
Cadena, Santiago A.; Sinz, Fabian H.; Muhammad, Taliah; Froudarakis, Emmanouil; Cobos, Erick; Walker, Edgar Y.; Reimer, Jake; Bethge, Matthias; Tolias, Andreas S.; Ecker, Alexander S.
"How well do deep neural networks trained on object recognition characterize the mouse visual system?"
33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada, 8-14 December 2019
URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/63515
Abstract: Recent work on modeling neural responses in the primate visual system has benefited from deep neural networks trained on large-scale object recognition, and found a hierarchical correspondence between layers of the artificial neural network and brain areas along the ventral visual stream. However, we neither know whether such task-optimized networks enable equally good models of the rodent visual system, nor if a similar hierarchical correspondence exists. Here, we address these questions in the mouse visual system by extracting features at several layers of a convolutional neural network (CNN) trained on ImageNet to predict the responses of thousands of neurons in four visual areas (V1, LM, AL, RL) to natural images. We found that the CNN features outperform classical subunit energy models, but found no evidence for an order of the areas we recorded via a correspondence to the hierarchy of CNN layers. Moreover, the same CNN but with random weights provided an equivalently useful feature space for predicting neural responses. Our results suggest that object recognition as a high-level task does not provide more discriminative features to characterize the mouse visual system than a random network. Unlike in the primate, training on ethologically relevant visually guided behaviors – beyond static object recognition – may be needed to unveil the functional organization of the mouse visual cortex.
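Note: the central comparison above, features from a pre-trained network versus the same architecture with random weights as a basis for predicting neural responses, can be set up as in the sketch below. The backbone (VGG-16), layer depth, and cross-validated ridge readout are illustrative assumptions, not the authors' pipeline.

    # Illustrative sketch: compare pre-trained vs. randomly initialized CNN features.
    import torch
    import torchvision.models as models
    from sklearn.linear_model import RidgeCV
    from sklearn.model_selection import cross_val_score

    def layer_features(backbone, images, n_layers=10):
        """Features from the first n_layers modules of the backbone, flattened per image."""
        with torch.no_grad():
            x = images
            for module in backbone[:n_layers]:
                x = module(x)
        return x.flatten(start_dim=1).numpy()

    images = torch.rand(100, 3, 64, 64)       # placeholder stimuli
    responses = torch.rand(100).numpy()       # placeholder responses of one neuron

    for name, weights in [("pretrained", models.VGG16_Weights.IMAGENET1K_V1), ("random", None)]:
        backbone = models.vgg16(weights=weights).features.eval()
        score = cross_val_score(RidgeCV(), layer_features(backbone, images), responses, cv=5).mean()
        print(name, round(float(score), 3))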
2021 · Journal Article (Research Paper)
Burg, Max F.; Cadena, Santiago A.; Denfield, George H.; Walker, Edgar Y.; Tolias, Andreas S.; Bethge, Matthias; Ecker, Alexander S.
"Learning divisive normalization in primary visual cortex"
PLoS Computational Biology 17(6), e1009028 (2021)
DOI: 10.1371/journal.pcbi.1009028 · License: CC BY 4.0 · URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/88509
Funding note: Open-Access-Publikationsfonds 2021
Abstract: Divisive normalization (DN) is a prominent computational building block in the brain that has been proposed as a canonical cortical operation. Numerous experimental studies have verified its importance for capturing nonlinear neural response properties to simple, artificial stimuli, and computational studies suggest that DN is also an important component for processing natural stimuli. However, we lack quantitative models of DN that are directly informed by measurements of spiking responses in the brain and applicable to arbitrary stimuli. Here, we propose a DN model that is applicable to arbitrary input images. We test its ability to predict how neurons in macaque primary visual cortex (V1) respond to natural images, with a focus on nonlinear response properties within the classical receptive field. Our model consists of one layer of subunits followed by learned orientation-specific DN. It outperforms linear-nonlinear and wavelet-based feature representations and makes a significant step towards the performance of state-of-the-art convolutional neural network (CNN) models. Unlike deep CNNs, our compact DN model offers a direct interpretation of the nature of normalization. By inspecting the learned normalization pool of our model, we gained insights into a long-standing question about the tuning properties of DN that update the current textbook description: we found that within the receptive field oriented features were normalized preferentially by features with similar orientation rather than non-specifically as currently assumed.
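Note: written out as an equation, the "one layer of subunits followed by learned orientation-specific DN" described in this abstract takes roughly the following form; the notation is illustrative and not taken from the paper.

    r_i = \frac{\max(0,\, f_i \ast x)}{\sigma + \sum_j w_{ij}\, \max(0,\, f_j \ast x)}

Here f_i * x is the response of the i-th subunit's filter to the image x, w_ij are the learned normalization weights (reported to be largest for subunits j whose orientation preference is similar to that of subunit i), and sigma is a semi-saturation constant.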