Ecker, Alexander S.
Preferred Name: Ecker, Alexander S.
Official Name: Ecker, Alexander S.
Alternative Names: Ecker, A. S.; Ecker, Alexander; Ecker, A.
Email: ecker@cs.uni-goettingen.de
Researcher ID: A-5184-2010
Publications (2)
2020-06-12 · Journal Article (Research Paper)
Benchmarking Unsupervised Object Representations for Video Sequences
Authors: Weis, Marissa A.; Chitta, Kashyap; Sharma, Yash; Brendel, Wieland; Bethge, Matthias; Geiger, Andreas; Ecker, Alexander S.
Journal of Machine Learning Research, vol. 22, no. 183, pp. 1–61
arXiv: 2006.07034v2 · Web of Science: 000700307700001
URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/92495
Abstract: Perceiving the world in terms of objects and tracking them through time is a crucial prerequisite for reasoning and scene understanding. Recently, several methods have been proposed for unsupervised learning of object-centric representations. However, since these models were evaluated on different downstream tasks, it remains unclear how they compare in terms of basic perceptual abilities such as detection, figure-ground segmentation and tracking of objects. To close this gap, we design a benchmark with four data sets of varying complexity and seven additional test sets featuring challenging tracking scenarios relevant for natural videos. Using this benchmark, we compare the perceptual abilities of four object-centric approaches: ViMON, a video-extension of MONet, based on recurrent spatial attention, OP3, which exploits clustering via spatial mixture models, as well as TBA and SCALOR, which use explicit factorization via spatial transformers. Our results suggest that the architectures with unconstrained latent representations learn more powerful representations in terms of object detection, segmentation and tracking than the spatial transformer based architectures. We also observe that none of the methods are able to gracefully handle the most challenging tracking scenarios despite their synthetic nature, suggesting that our benchmark may provide fruitful guidance towards learning more robust object-centric video representations.
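The benchmark scores detection, figure-ground segmentation, and tracking. As a rough illustration of the kind of per-object evaluation such a benchmark builds on, here is a minimal mask-IoU matching sketch in Python; the function names, greedy matching strategy, and 0.5 threshold are illustrative assumptions, not the paper's actual protocol.

```python
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-union between two boolean object masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / union if union > 0 else 1.0

def match_objects(pred_masks, gt_masks, thresh=0.5):
    """Greedily match predicted masks to ground-truth masks by IoU.

    Illustrative only: unmatched ground-truth objects would count as
    misses, unmatched predictions as false positives.
    """
    used, matches = set(), 0
    for gt in gt_masks:
        best, best_iou = None, thresh
        for i, pred in enumerate(pred_masks):
            if i in used:
                continue
            iou = mask_iou(pred, gt)
            if iou > best_iou:
                best, best_iou = i, iou
        if best is not None:
            used.add(best)
            matches += 1
    return matches
```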
2018 · Book Chapter
Diverse Feature Visualizations Reveal Invariances in Early Layers of Deep Neural Networks
Authors: Cadena, Santiago A.; Weis, Marissa A.; Gatys, Leon A.; Bethge, Matthias; Ecker, Alexander S.
In: Ferrari, V.; Hebert, M.; Sminchisescu, C.; Weiss, Y. (eds.), Computer Vision – ECCV 2018. Lecture Notes in Computer Science, vol. 11216, pp. 225–240. Springer, Cham.
DOI: 10.1007/978-3-030-01258-8_14
ISBN: 978-3-030-01257-1 · eISBN: 978-3-030-01258-8 · ISSN: 0302-9743 · eISSN: 1611-3349
URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/63368
Language: English
Abstract: Visualizing features in deep neural networks (DNNs) can help in understanding their computations. Many previous studies aimed to visualize the selectivity of individual units by finding meaningful images that maximize their activation. However, comparably little attention has been paid to visualizing to what image transformations units in DNNs are invariant. Here we propose a method to discover invariances in the responses of hidden layer units of deep neural networks. Our approach is based on simultaneously searching for a batch of images that strongly activate a unit while at the same time being as distinct from each other as possible. We find that even early convolutional layers in VGG-19 exhibit various forms of response invariance: near-perfect phase invariance in some units and invariance to local diffeomorphic transformations in others. At the same time, we uncover representational differences with ResNet-50 in its corresponding layers. We conclude that invariance transformations are a major computational component learned by DNNs and we provide a systematic method to study them.
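The core idea, as the abstract describes it, is to optimize a batch of images that all strongly activate one unit while staying mutually distinct. A minimal PyTorch sketch of that kind of objective follows; the cosine-similarity diversity term, the 0.1 trade-off weight, and the layer/channel choice are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torchvision.models as models

# Pretrained VGG-19 feature stack (assumes torchvision >= 0.13 weights API).
vgg = models.vgg19(weights="IMAGENET1K_V1").features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

layer_idx, channel = 10, 5                               # hypothetical unit to visualize
imgs = torch.randn(8, 3, 128, 128, requires_grad=True)   # batch of candidate images
opt = torch.optim.Adam([imgs], lr=0.05)

def response(x):
    """Run the network up to the target layer, return the unit's activation map."""
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i == layer_idx:
            break
    return x[:, channel]

for step in range(200):
    opt.zero_grad()
    activation = response(imgs).mean()        # how strongly the unit fires, on average
    flat = imgs.flatten(1)
    flat = flat / flat.norm(dim=1, keepdim=True)
    sim = flat @ flat.t()                     # pairwise cosine similarity of the images
    diversity = (sim.sum() - sim.trace()) / (sim.numel() - len(imgs))
    loss = -activation + 0.1 * diversity      # maximize activation, penalize similarity
    loss.backward()
    opt.step()
```

Jointly optimizing the whole batch is what distinguishes this from standard single-image activation maximization: the diversity penalty pushes the images toward different points of the unit's invariance class.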