Ecker, Alexander S.
Preferred Name
Ecker, Alexander S.
Official Name
Ecker, Alexander S.
Alternative Name
Ecker, A. S.
Ecker, Alexander
Ecker, A.
Main Affiliation
Email
ecker@cs.uni-goettingen.de
ORCID
Researcher ID
A-5184-2010
Publications (showing 3 of 3)
2020-06-12 | Journal Article | Research Paper
Benchmarking Unsupervised Object Representations for Video Sequences
Weis, Marissa A.; Chitta, Kashyap; Sharma, Yash; Brendel, Wieland; Bethge, Matthias; Geiger, Andreas; Ecker, Alexander S.
Journal of Machine Learning Research 22(183): 1-61
Abstract: Perceiving the world in terms of objects and tracking them through time is a crucial prerequisite for reasoning and scene understanding. Recently, several methods have been proposed for unsupervised learning of object-centric representations. However, since these models were evaluated on different downstream tasks, it remains unclear how they compare in terms of basic perceptual abilities such as detection, figure-ground segmentation and tracking of objects. To close this gap, we design a benchmark with four data sets of varying complexity and seven additional test sets featuring challenging tracking scenarios relevant for natural videos. Using this benchmark, we compare the perceptual abilities of four object-centric approaches: ViMON, a video extension of MONet based on recurrent spatial attention; OP3, which exploits clustering via spatial mixture models; as well as TBA and SCALOR, which use explicit factorization via spatial transformers. Our results suggest that the architectures with unconstrained latent representations learn more powerful representations in terms of object detection, segmentation and tracking than the spatial transformer based architectures. We also observe that none of the methods are able to gracefully handle the most challenging tracking scenarios despite their synthetic nature, suggesting that our benchmark may provide fruitful guidance towards learning more robust object-centric video representations.
arXiv: 2006.07034v2 | WOS: 000700307700001 | URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/92495
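Note (illustrative only, not part of the record): the benchmark scores how well predicted object masks match the ground truth for detection, segmentation and tracking. The sketch below shows one common way to match predicted to ground-truth masks per frame via IoU and Hungarian matching; the function names and threshold are hypothetical, and the authoritative evaluation code is the paper's own.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def mask_iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection-over-union of two boolean masks of the same shape."""
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union > 0 else 0.0

def match_objects(gt_masks, pred_masks, iou_thresh=0.5):
    """Match predicted to ground-truth masks for one frame.

    gt_masks, pred_masks: lists of boolean HxW arrays (one per object).
    Returns (gt_idx, pred_idx, iou) triples for matches above the threshold.
    """
    if not gt_masks or not pred_masks:
        return []
    iou = np.array([[mask_iou(g, p) for p in pred_masks] for g in gt_masks])
    rows, cols = linear_sum_assignment(-iou)  # assignment maximizing total IoU
    return [(g, p, iou[g, p]) for g, p in zip(rows, cols)
            if iou[g, p] >= iou_thresh]
```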
2018 | Journal Article
Comparing the ability of humans and DNNs to recognise closed contours in cluttered images
Funke, Christina; Borowski, Judy; Wallis, Thomas; Brendel, Wieland; Ecker, Alexander; Bethge, Matthias
Journal of Vision 18(10): 800
Abstract: Given the recent success of machine vision algorithms in solving complex visual inference tasks, it becomes increasingly challenging to find tasks for which machines are still outperformed by humans. We seek to identify such tasks and test them under controlled settings. Here we compare human and machine performance in one candidate task: discriminating closed and open contours. We generated contours using simple lines of varying length and angle, and minimised statistical regularities that could provide cues. It has been shown that DNNs trained for object recognition are very sensitive to texture cues (Gatys et al., 2015). We use this insight to maximize the difficulty of the task for the DNN by adding random natural images to the background. Humans performed a 2IFC task discriminating closed and open contours (100 ms presentation) with and without background images. We trained a readout network to perform the same task using the pre-trained features of the VGG-19 network. With no background image (contours black on grey), humans reach a performance of 92% correct on the task, dropping to 71% when background images are present. Surprisingly, the model's performance is very similar to that of humans, with 91% dropping to 64% with background. One contributing factor for why human performance drops with background images is that dark lines become difficult to discriminate from the natural images, whose average pixel values are dark. Changing the polarity of the lines from dark to light improved human performance (96% without and 82% with background image) but not model performance (88% without and 64% with background image), indicating that humans could largely ignore the background image whereas the model could not. These results show that the human visual system is able to discriminate closed from open contours in a more robust fashion than transfer learning from the VGG network.
DOI: 10.1167/18.10.800 | ISSN: 1534-7362 | URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/63369
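Note (illustrative only, not part of the record): the "readout network on pre-trained VGG-19 features" described above could be set up roughly as in this minimal PyTorch sketch. The pooling choice, readout head and hyperparameters are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Frozen, ImageNet-pre-trained VGG-19 convolutional feature extractor.
vgg = models.vgg19(weights="IMAGENET1K_V1").features.eval()
for p in vgg.parameters():
    p.requires_grad = False

# Hypothetical readout head: global average pooling + one linear unit
# for the binary closed-vs-open contour decision.
readout = nn.Sequential(
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(512, 1),  # the VGG-19 conv stack ends with 512 channels
)

optimizer = torch.optim.Adam(readout.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """images: (N, 3, 224, 224) tensor; labels: (N,) float tensor in {0, 1}."""
    with torch.no_grad():
        feats = vgg(images)          # features stay fixed; only the readout learns
    logits = readout(feats).squeeze(1)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```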
2019 | Preprint
Benchmarking Robustness in Object Detection: Autonomous Driving when Winter is Coming
Michaelis, Claudio; Mitzkus, Benjamin; Geirhos, Robert; Rusak, Evgenia; Bringmann, Oliver; Ecker, Alexander S.; Bethge, Matthias; Brendel, Wieland
Abstract: The ability to detect objects regardless of image distortions or weather conditions is crucial for real-world applications of deep learning like autonomous driving. We here provide an easy-to-use benchmark to assess how object detection models perform when image quality degrades. The three resulting benchmark datasets, termed Pascal-C, Coco-C and Cityscapes-C, contain a large variety of image corruptions. We show that a range of standard object detection models suffer a severe performance loss on corrupted images (down to 30-60% of the original performance). However, a simple data augmentation trick - stylizing the training images - leads to a substantial increase in robustness across corruption type, severity and dataset. We envision our comprehensive benchmark to track future progress towards building robust object detection models. Benchmark, code and data are available at: http://github.com/bethgelab/robust-detection-benchmark
arXiv: 1907.07484v1 | URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/63361
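Note (illustrative only, not part of the record): the -C benchmark variants are obtained by applying a fixed set of image corruptions at severities 1-5 to the validation images. The sketch below assumes the imagecorruptions Python package released alongside the benchmark (pip install imagecorruptions); consult the linked repository for the authoritative evaluation protocol.

```python
import numpy as np
from imagecorruptions import corrupt, get_corruption_names

def corrupted_variants(image: np.ndarray):
    """Yield (corruption_name, severity, corrupted_image) for one image.

    image: HxWx3 uint8 RGB array, as expected by the corruption functions.
    """
    for name in get_corruption_names():   # e.g. 'gaussian_noise', 'snow', 'fog', ...
        for severity in range(1, 6):       # severity 1 (mild) to 5 (severe)
            yield name, severity, corrupt(image, corruption_name=name,
                                          severity=severity)

# Hypothetical usage: score a detector on every corrupted copy of one image.
# for name, sev, img in corrupted_variants(val_image):
#     run_detector(img)   # run_detector is a placeholder, not a real API
```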