Showing publications 1–10 of 14
  • 2017 · Conference Paper
    Gatys, Leon A.; Ecker, Alexander S.; Bethge, Matthias; Hertzmann, Aaron; Shechtman, Eli (2017). "Controlling Perceptual Factors in Neural Style Transfer." In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017, pp. 3730–3738. ISSN: 1063-6919. ISBN: 978-1-5386-0457-1. DOI: 10.1109/CVPR.2017.397. URL: https://resolver.sub.uni-goettingen.de/purl?gro-2/63421
    Abstract: Neural Style Transfer has shown very exciting results, enabling new forms of image manipulation. Here we extend the existing method to introduce control over spatial location, colour information and spatial scale. We demonstrate how this enhances the method by allowing high-resolution controlled stylisation, and how it helps to alleviate common failure cases such as applying ground textures to sky regions. Furthermore, by decomposing style into these perceptual factors we enable the combination of style information from multiple sources to generate new, perceptually appealing styles from existing ones. We also describe how these methods can be used to produce large, high-quality stylisations more efficiently. Finally, we show how the introduced control measures can be applied in recent methods for fast Neural Style Transfer.
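    The spatial control described in the abstract amounts to matching style statistics within image regions rather than over the whole image. Below is a minimal sketch of a region-masked Gram matrix in Python; the tensor shapes and the sky/ground split are illustrative assumptions, and in the paper the feature maps come from a pretrained VGG network rather than the random stand-in tensors used here.

    ```python
    import torch

    def masked_gram(feats, mask):
        # Gram matrix restricted to one spatial region: the basic ingredient
        # of region-wise style control (e.g. separate statistics for sky and
        # ground). feats: (C, H, W) feature maps; mask: (H, W) in [0, 1].
        c = feats.shape[0]
        f = (feats * mask).reshape(c, -1)
        return f @ f.T / mask.sum().clamp(min=1.0)

    # Illustrative usage: separate style targets for "sky" and "ground".
    feats = torch.rand(64, 32, 32)              # stand-in for VGG feature maps
    sky = torch.zeros(32, 32)
    sky[:16] = 1.0                              # top half of the image
    gram_sky = masked_gram(feats, sky)          # (64, 64)
    gram_ground = masked_gram(feats, 1.0 - sky)
    ```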
  • 2017 · Journal Article
    Wallis, Thomas S. A.; Funke, Christina M.; Ecker, Alexander S.; Gatys, Leon A.; Wichmann, Felix A.; Bethge, Matthias (2017). "A parametric texture model based on deep convolutional features closely matches texture appearance for humans." Journal of Vision 17(12), article 5, pp. 1–29. ISSN: 1534-7362. DOI: 10.1167/17.12.5. PMID: 28983571. URL: https://resolver.sub.uni-goettingen.de/purl?gro-2/63417
    Abstract: Our visual environment is full of texture ("stuff" like cloth, bark, or gravel, as distinct from "things" like dresses, trees, or paths), and humans are adept at perceiving subtle variations in material properties. To investigate image features important for texture perception, we psychophysically compare a recent parametric model of texture appearance (the convolutional neural network [CNN] model), which uses the features encoded by a deep CNN (VGG-19), with two other models: the venerable Portilla and Simoncelli model and an extension of the CNN model in which the power spectrum is additionally matched. Observers discriminated model-generated textures from original natural textures in a spatial three-alternative oddity paradigm under two viewing conditions: when test patches were briefly presented to the near-periphery ("parafoveal") and when observers were able to make eye movements to all three patches ("inspection"). Under parafoveal viewing, observers were unable to discriminate 10 of 12 original images from CNN model images, and remarkably, the simpler Portilla and Simoncelli model performed slightly better than the CNN model (11 textures). Under foveal inspection, matching CNN features captured appearance substantially better than the Portilla and Simoncelli model (nine compared to four textures), and including the power spectrum improved appearance matching for two of the three remaining textures. None of the models we test here could produce indiscriminable images for one of the 12 textures under the inspection condition. While deep CNN (VGG-19) features can often be used to synthesize textures that humans cannot discriminate from natural textures, there is currently no uniformly best model for all textures and viewing conditions.
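    The "power spectrum additionally matched" variant of the CNN model can be read as imposing a target Fourier amplitude spectrum while keeping the synthesised image's phases. A sketch of that operation in NumPy, under that reading rather than the authors' exact procedure:

    ```python
    import numpy as np

    def match_power_spectrum(synth, target):
        # Impose the Fourier amplitude spectrum of `target` onto `synth`
        # while keeping the phases of `synth`; both are 2-D grayscale
        # arrays of the same shape.
        S = np.fft.fft2(synth)
        T = np.fft.fft2(target)
        return np.real(np.fft.ifft2(np.abs(T) * np.exp(1j * np.angle(S))))

    rng = np.random.default_rng(0)
    out = match_power_spectrum(rng.random((64, 64)), rng.random((64, 64)))
    ```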
  • 2015 · Journal Article
    Gatys, Leon A.; Ecker, Alexander S.; Tchumatchenko, Tatjana; Bethge, Matthias (2015). "Synaptic unreliability facilitates information transmission in balanced cortical populations." Physical Review E 91(6), article 062707. ISSN: 1539-3755 (print), 1550-2376 (online). DOI: 10.1103/PhysRevE.91.062707. PMID: 26172736. URL: https://resolver.sub.uni-goettingen.de/purl?gro-2/63430
    Abstract: Synaptic unreliability is one of the major sources of biophysical noise in the brain. In the context of neural information processing, it is a central question how neural systems can afford this unreliability. Here we examine how synaptic noise affects signal transmission in cortical circuits, where excitation and inhibition are thought to be tightly balanced. Surprisingly, we find that in this balanced state synaptic response variability actually facilitates information transmission, rather than impairing it. In particular, the transmission of fast-varying signals benefits from synaptic noise, as it instantaneously increases the amount of information shared between presynaptic signal and postsynaptic current. Furthermore, we show that the beneficial effect of noise is based on a very general mechanism which, contrary to stochastic resonance, does not reach an optimum at a finite noise level.
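    One simple way to quantify the "information shared between presynaptic signal and postsynaptic current" is a Gaussian estimate based on their linear correlation. The sketch below is a generic estimator run on synthetic data, not the authors' analysis; the toy channel model and noise level are assumptions for illustration only.

    ```python
    import numpy as np

    def gaussian_info_bits(signal, current):
        # Gaussian estimate of mutual information per sample (in bits):
        # I = -1/2 * log2(1 - rho**2), with rho the linear correlation
        # between presynaptic signal and postsynaptic current.
        rho = np.corrcoef(signal, current)[0, 1]
        return -0.5 * np.log2(1.0 - rho ** 2)

    # Usage on synthetic data: a noisy linear channel.
    rng = np.random.default_rng(0)
    signal = rng.standard_normal(50_000)
    current = signal + 0.5 * rng.standard_normal(signal.size)
    print(f"{gaussian_info_bits(signal, current):.2f} bits/sample")
    ```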
  • 2019 · Journal Article
    Cadena, Santiago A.; Denfield, George H.; Walker, Edgar Y.; Gatys, Leon A.; Tolias, Andreas S.; Bethge, Matthias; Ecker, Alexander S. (2019). "Deep convolutional models improve predictions of macaque V1 responses to natural images." PLoS Computational Biology 15(4), article e1006897. ISSN: 1553-7358. DOI: 10.1371/journal.pcbi.1006897. PMID: 31013278. URL: https://resolver.sub.uni-goettingen.de/purl?gro-2/63344
    Abstract: Despite great efforts over several decades, our best models of primary visual cortex (V1) still predict spiking activity quite poorly when probed with natural stimuli, highlighting our limited understanding of the nonlinear computations in V1. Recently, two approaches based on deep learning have emerged for modeling these nonlinear computations: transfer learning from artificial neural networks trained on object recognition, and data-driven convolutional neural network models trained end-to-end on large populations of neurons. Here, we test the ability of both approaches to predict spiking activity in response to natural images in V1 of awake monkeys. We found that the transfer learning approach performed similarly well to the data-driven approach, and both outperformed classical linear-nonlinear and wavelet-based feature representations that build on existing theories of V1. Notably, transfer learning using a pre-trained feature space required substantially less experimental time to achieve the same performance. In conclusion, multi-layer convolutional neural networks (CNNs) set the new state of the art for predicting neural responses to natural images in primate V1, and deep features learned for object recognition are better explanations for V1 computation than all previous filter bank theories. This finding strengthens the necessity of V1 models that are multiple nonlinearities away from the image domain, and it supports the idea of explaining early visual cortex based on high-level functional goals.
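    The transfer-learning approach compared here boils down to a fixed feature space plus a regularised linear readout fit per neuron. A sketch with random features standing in for the pretrained-CNN features and synthetic Poisson spike counts standing in for the V1 recordings; all shapes and the ridge penalty are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_images, n_features, n_neurons = 1000, 512, 50
    features = rng.standard_normal((n_images, n_features))   # stand-in features
    true_w = 0.05 * rng.standard_normal((n_features, n_neurons))
    spikes = rng.poisson(np.exp(features @ true_w))          # synthetic responses

    # Regularised linear readout on the fixed feature space, one weight
    # vector per neuron, evaluated on held-out images.
    X_tr, X_te, y_tr, y_te = train_test_split(features, spikes, random_state=0)
    readout = Ridge(alpha=10.0).fit(X_tr, y_tr)
    print("held-out R^2:", readout.score(X_te, y_te))
    ```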
  • 2019 · Conference Paper
    Günthner, Max F.; Cadena, Santiago A.; Denfield, George H.; Walker, Edgar Y.; Gatys, Leon A.; Tolias, Andreas S.; Bethge, Matthias; Ecker, Alexander S. (2019). "Learning Divisive Normalization in Primary Visual Cortex." In: Conference on Cognitive Computational Neuroscience 2019, Berlin, Germany, 13–16 September 2019. DOI: 10.32470/CCN.2019.1211-0. URL: https://resolver.sub.uni-goettingen.de/purl?gro-2/63367
    Abstract: Divisive normalization (DN) has been suggested as a canonical computation implemented throughout the neocortex. In primary visual cortex (V1), DN was found to be crucial to explain nonlinear response properties of neurons when presented with superpositions of simple stimuli such as gratings. Based on such studies, it is currently assumed that neuronal responses to stimuli restricted to the neuron's classical receptive field (RF) are normalized by a non-specific pool of nearby neurons with similar RF locations. However, it is currently unknown how DN operates in V1 when processing natural inputs. Here, we investigated DN in monkey V1 under stimulation with natural images with an end-to-end trainable model that learns the pool of normalizing neurons and the magnitude of their contribution directly from the data. Taking advantage of our model's direct interpretable view of V1 computation, we found that oriented features were normalized preferentially by features with similar orientation preference rather than non-specifically. Our model's accuracy was competitive with state-of-the-art black-box models, suggesting that rectification, DN, and a combination of subunits resulting from DN are sufficient to account for V1 responses to localized stimuli. Thus, our work significantly advances our understanding of V1 function.
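    The canonical divisive-normalization computation that the model learns a pool for can be sketched directly. The exponent, semisaturation constant, and pool weights below are illustrative; the paper fits the pool end-to-end from monkey V1 data rather than fixing it by hand.

    ```python
    import numpy as np

    def divisive_normalization(drive, pool_weights, sigma=0.1, n=2.0):
        # Each neuron's rectified, exponentiated drive is divided by a
        # weighted pool of the other neurons' drives.
        x = np.maximum(drive, 0.0) ** n
        return x / (sigma ** n + pool_weights @ x)

    # Toy pool: the first two (similarly tuned) neurons normalise each other
    # strongly; the third contributes little, qualitatively matching the
    # paper's orientation-specific finding.
    W = np.array([[0.0, 0.9, 0.1],
                  [0.9, 0.0, 0.1],
                  [0.1, 0.1, 0.0]])
    print(divisive_normalization(np.array([1.0, 0.8, 0.1]), W))
    ```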
  • 2016 · Preprint
    Gatys, Leon A.; Ecker, Alexander S.; Bethge, Matthias (2016). "Texture Modelling Using Convolutional Neural Networks." Preprint. URL: https://resolver.sub.uni-goettingen.de/purl?gro-2/63583
    Abstract: We introduce a new model of natural textures based on the feature spaces of convolutional neural networks optimised for object recognition. Samples from the model are of high perceptual quality, demonstrating the generative power of neural networks trained in a purely discriminative fashion. Within the model, textures are represented by the correlations between feature maps in several layers of the network. We show that across layers the texture representations increasingly capture the statistical properties of natural images while making object information more and more explicit. Extending this framework to texture transfer, we introduce A Neural Algorithm of Artistic Style that can separate and recombine the image content and style of natural images. The algorithm allows us to produce new artistic imagery that combines the content of an arbitrary photograph with the appearance of numerous well-known artworks, thus offering a path towards an algorithmic understanding of how humans create and perceive artistic imagery.
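    The texture representation described in the abstract is the set of feature-map correlations (Gram matrices) across CNN layers, and synthesis means adjusting a noise image until its statistics match a target's. A self-contained sketch in which small random convolution layers stand in for the VGG-19 network used in the paper:

    ```python
    import torch

    def texture_statistics(image, conv_layers):
        # Gram matrices (feature-map correlations) at several layers.
        grams, x = [], image
        for layer in conv_layers:
            x = torch.relu(layer(x))
            c = x.shape[1]
            f = x.reshape(c, -1)
            grams.append(f @ f.T / f.shape[1])
        return grams

    # Synthesis: descend on a noise image until its statistics match.
    layers = [torch.nn.Conv2d(3, 16, 3, padding=1),
              torch.nn.Conv2d(16, 32, 3, padding=1)]
    target = torch.rand(1, 3, 64, 64)
    synth = torch.rand(1, 3, 64, 64, requires_grad=True)
    target_stats = [g.detach() for g in texture_statistics(target, layers)]
    opt = torch.optim.Adam([synth], lr=0.05)
    for _ in range(200):
        opt.zero_grad()
        loss = sum(((g - t) ** 2).mean()
                   for g, t in zip(texture_statistics(synth, layers), target_stats))
        loss.backward()
        opt.step()
    ```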
  • 2016 · Conference Paper
    Wallis, Thomas S.; Funke, Christina M.; Ecker, Alexander S.; Gatys, Leon A.; Wichmann, Felix A.; Bethge, Matthias (2016). "Towards matching the peripheral visual appearance of arbitrary scenes using deep convolutional neural networks." Perception 45, pp. 175–176. URL: https://resolver.sub.uni-goettingen.de/purl?gro-2/63537
    Abstract: Distortions of image structure can go unnoticed in the visual periphery, and objects can be harder to identify (crowding). Is it possible to create equivalence classes of images that discard and distort image structure but appear the same as the original images? Here we use deep convolutional neural networks (CNNs) to study peripheral representations that are texture-like, in that summary statistics within some pooling region are preserved but local position is lost. Building on our previous work generating textures by matching CNN responses, we first show that while CNN textures are difficult to discriminate from many natural textures, they fail to match the appearance of scenes at a range of eccentricities and sizes. Because texturising scenes discards long range correlations over too large an area, we next generate images that match CNN features within overlapping pooling regions (see also Freeman and Simoncelli, 2011). These images are more difficult to discriminate from the original scenes, indicating that constraining features by their neighbouring pooling regions provides greater perceptual fidelity. Our ultimate goal is to determine the minimal set of deep CNN features that produce metameric stimuli by varying the feature complexity and pooling regions used to represent the image.
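    Matching "CNN features within overlapping pooling regions" can be sketched as computing one Gram matrix per local window instead of a single global descriptor. The window and stride sizes below are illustrative assumptions, as are the stand-in feature maps.

    ```python
    import torch

    def pooled_grams(feats, window=8, stride=4):
        # One Gram matrix per overlapping spatial window. feats: (C, H, W).
        c, h, w = feats.shape
        grams = []
        for i in range(0, h - window + 1, stride):
            for j in range(0, w - window + 1, stride):
                patch = feats[:, i:i + window, j:j + window].reshape(c, -1)
                grams.append(patch @ patch.T / patch.shape[1])
        return torch.stack(grams)                 # (n_windows, C, C)

    print(pooled_grams(torch.rand(16, 32, 32)).shape)   # torch.Size([49, 16, 16])
    ```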
  • 2017 · Journal Article
    Gatys, Leon A.; Ecker, Alexander S.; Bethge, Matthias (2017). "Texture and art with deep neural networks." Current Opinion in Neurobiology 46, pp. 178–186. ISSN: 0959-4388. DOI: 10.1016/j.conb.2017.08.019. PMID: 28926765. URL: https://resolver.sub.uni-goettingen.de/purl?gro-2/63418
    Abstract: Although the fields of biological vision and computer vision attempt to understand powerful visual information processing from different angles, they have a long history of informing each other. Recent advances in texture synthesis that were motivated by visual neuroscience have led to a substantial advance in image synthesis and manipulation in computer vision using convolutional neural networks (CNNs). Here, we review these recent advances and discuss how they can in turn inspire new research in visual perception and computational neuroscience.
  • 2018 · Preprint
    Wallis, Thomas S. A.; Funke, Christina M.; Ecker, Alexander S.; Gatys, Leon A.; Wichmann, Felix A.; Bethge, Matthias (2018). "Image content is more important than Bouma’s Law for scene metamers." Preprint. DOI: 10.1101/378521. URL: https://resolver.sub.uni-goettingen.de/purl?gro-2/63348
    Abstract: We subjectively perceive our visual field with high fidelity, yet large peripheral distortions can go unnoticed and peripheral objects can be difficult to identify (crowding). A recent paper proposed a model of the mid-level ventral visual stream in which neural responses were averaged over an area of space that increased as a function of eccentricity (scaling). Human participants could not discriminate synthesised model images from each other (they were metamers) when scaling was about half the retinal eccentricity. This result implicated ventral visual area V2 and approximated “Bouma’s Law” of crowding. It has subsequently been interpreted as a link between crowding zones, receptive field scaling, and our rich perceptual experience. However, participants in this experiment never saw the original images. We find that participants can easily discriminate real and model-generated images at V2 scaling. Lower scale factors than even V1 receptive fields may be required to generate metamers. Efficiently explaining why scenes look as they do may require incorporating segmentation processes and global organisational constraints in addition to local pooling.
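    The scale factor at the heart of this preprint relates pooling-region size linearly to retinal eccentricity. A trivial sketch of that relation; the factor 0.5 corresponds to the V2-like scaling discussed in the abstract, and 0.25 illustrates a smaller alternative.

    ```python
    def pooling_diameter(eccentricity_deg, scale):
        # Pooling-region diameter (deg) grows linearly with eccentricity.
        return scale * eccentricity_deg

    for ecc in (2.0, 5.0, 10.0):
        print(ecc, pooling_diameter(ecc, 0.5), pooling_diameter(ecc, 0.25))
    ```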
  • 2017 · Preprint
    Funke, Christina M.; Gatys, Leon A.; Ecker, Alexander S.; Bethge, Matthias (2017). "Synthesising Dynamic Textures using Convolutional Neural Networks." Preprint, 9 pp. arXiv: 1702.07006v1. URL: https://resolver.sub.uni-goettingen.de/purl?gro-2/63422
    Abstract: Here we present a parametric model for dynamic textures. The model is based on spatiotemporal summary statistics computed from the feature representations of a Convolutional Neural Network (CNN) trained on object recognition. We demonstrate how the model can be used to synthesise new samples of dynamic textures and to predict motion in simple movies.
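    The "spatiotemporal summary statistics" can be read as Gram matrices computed over the feature maps of several consecutive frames stacked along the channel axis, so that correlations across time are captured alongside correlations across feature maps. A sketch under that reading, with random tensors standing in for CNN features:

    ```python
    import torch

    def spatiotemporal_gram(frame_feats):
        # frame_feats: (T, C, H, W) feature maps for T consecutive frames.
        t, c, h, w = frame_feats.shape
        f = frame_feats.reshape(t * c, h * w)
        return f @ f.T / (h * w)                  # (T*C, T*C)

    print(spatiotemporal_gram(torch.rand(3, 16, 32, 32)).shape)   # (48, 48)
    ```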