Wörgötter, Florentin Andreas
Preferred name
Wörgötter, Florentin Andreas
Official name
Wörgötter, Florentin Andreas
Alternative names
Wörgötter, Florentin A.
Worgotter, Florentin
Wörgötter, Florentin
Wörgötter, F.
Woergoetter, Florentin Andreas
Worgotter, Florentin A.
Worgotter, F. A.
Wörgötter, F. A.
Woergoetter, Florentin A.
Woergoetter, F. A.
Woergoetter, Florentin
Woergoetter, F.
Worgotter, F.
Worgotter, Florentin Andreas
Main Affiliation
Publications (showing 1–10 of 152)
2020Journal Article [["dc.bibliographiccitation.firstpage","153"],["dc.bibliographiccitation.journal","Neural Networks"],["dc.bibliographiccitation.lastpage","162"],["dc.bibliographiccitation.volume","123"],["dc.contributor.author","Herzog, Sebastian"],["dc.contributor.author","Tetzlaff, Christian"],["dc.contributor.author","Wörgötter, Florentin"],["dc.date.accessioned","2020-12-10T15:20:27Z"],["dc.date.available","2020-12-10T15:20:27Z"],["dc.date.issued","2020"],["dc.identifier.doi","10.1016/j.neunet.2019.12.004"],["dc.identifier.issn","0893-6080"],["dc.identifier.uri","https://resolver.sub.uni-goettingen.de/purl?gro-2/72672"],["dc.language.iso","en"],["dc.notes.intern","DOI Import GROB-354"],["dc.title","Evolving artificial neural networks with feedback"],["dc.type","journal_article"],["dc.type.internalPublication","yes"],["dspace.entity.type","Publication"]]Details DOI2006Journal Article Editorial Contribution (Editorial, Introduction, Epilogue) [["dc.bibliographiccitation.firstpage","5"],["dc.bibliographiccitation.issue","1"],["dc.bibliographiccitation.journal","International Journal of Computer Vision"],["dc.bibliographiccitation.lastpage","7"],["dc.bibliographiccitation.volume","72"],["dc.contributor.author","Krüger, Norbert"],["dc.contributor.author","Woergoetter, Florentin"],["dc.contributor.author","van Hulle, Marc M."],["dc.date.accessioned","2017-09-07T11:45:25Z"],["dc.date.available","2017-09-07T11:45:25Z"],["dc.date.issued","2006"],["dc.identifier.doi","10.1007/s11263-006-8889-2"],["dc.identifier.gro","3151771"],["dc.identifier.uri","https://resolver.sub.uni-goettingen.de/purl?gro-2/8597"],["dc.language.iso","en"],["dc.notes.status","public"],["dc.notes.submitter","chake"],["dc.relation.issn","0920-5691"],["dc.title","Editorial: ECOVISION: Challenges in Early-Cognitive Vision"],["dc.type","journal_article"],["dc.type.internalPublication","unknown"],["dc.type.peerReviewed","no"],["dc.type.subtype","editorial_ja"],["dspace.entity.type","Publication"]]Details DOI2011Journal Article [["dc.bibliographiccitation.firstpage","910"],["dc.bibliographiccitation.issue","11"],["dc.bibliographiccitation.journal","Robotics and Autonomous Systems"],["dc.bibliographiccitation.lastpage","922"],["dc.bibliographiccitation.volume","59"],["dc.contributor.author","Tamosiunaite, Minija"],["dc.contributor.author","Nemec, Bojan"],["dc.contributor.author","Ude, Ales"],["dc.contributor.author","Woergoetter, Florentin"],["dc.date.accessioned","2018-11-07T08:50:32Z"],["dc.date.available","2018-11-07T08:50:32Z"],["dc.date.issued","2011"],["dc.description.abstract","When describing robot motion with dynamic movement primitives (DMPs), goal (trajectory endpoint), shape and temporal scaling parameters are used. In reinforcement learning with DMPs, usually goals and temporal scaling parameters are predefined and only the weights for shaping a DMP are learned. Many tasks, however, exist where the best goal position is not a priori known, requiring to learn it. Thus, here we specifically address the question of how to simultaneously combine goal and shape parameter learning. This is a difficult problem because learning of both parameters could easily interfere in a destructive way. We apply value function approximation techniques for goal learning and direct policy search methods for shape learning. Specifically, we use \"policy improvement with path integrals\" and \"natural actor critic\" for the policy search. We solve a learning-to-pour-liquid task in simulations as well as using a Pa10 robot arm. 
2012 · Book Chapter
Vogelgesang, Jens; Cozzi, Alex; Woergoetter, Florentin: "A parallel algorithm for depth perception from radial optical flow fields." In: von der Malsburg, Christoph; von Seelen, Werner; Vorbrüggen, Jan C.; Sendhoff, Bernhard (eds.): Artificial Neural Networks — ICANN 96 (Lecture Notes in Computer Science). Springer, Berlin, Heidelberg, pp. 721–725. ISBN 978-3-540-61510-1. DOI: 10.1007/3-540-61510-5_122. URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/8621
Abstract: While optical flow has often been proposed for guiding a moving robot, its computational complexity has largely prevented its use in real applications. We describe a restricted form of optical flow algorithm that can be parallelized on chain-like neuronal structures, combining simplicity and speed. In addition, the algorithm uses predicted motion trajectories to remove noise from the input images.
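The chapter above recovers depth from radial optical flow. The hypothetical, much-simplified Python sketch below shows only the underlying geometric relation: for pure translation along the optical axis, the radial flow magnitude at image radius r from the focus of expansion is t_z · r / Z, so each ray can be evaluated independently (one "chain" of pixels per ray). The function name and test values are illustrative; this is not the chapter's parallel algorithm.

```python
import numpy as np

def depth_from_radial_flow(radii, flow_mag, t_z=1.0):
    """Relative depth from radial optical flow (illustrative sketch).

    For pure translation along the optical axis, |flow(r)| = t_z * r / Z,
    hence Z = t_z * r / |flow|.  Each ray from the focus of expansion can be
    processed independently, i.e. in parallel, one chain of pixels per ray.
    """
    radii = np.asarray(radii, dtype=float)
    flow_mag = np.asarray(flow_mag, dtype=float)
    return np.where(flow_mag > 1e-9,
                    t_z * radii / np.maximum(flow_mag, 1e-9),
                    np.inf)

# Synthetic check: three points on one ray at depths 2, 4 and 8 units.
r = np.array([10.0, 20.0, 30.0])        # pixel distance from the focus of expansion
true_z = np.array([2.0, 4.0, 8.0])
flow = 1.0 * r / true_z                 # flow generated with forward speed t_z = 1
print(depth_from_radial_flow(r, flow))  # -> [2. 4. 8.]
```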
2010 · Journal Article
Pugeault, Nicolas; Woergoetter, Florentin; Krueger, Norbert: "Visual Primitives: Local, Condensed, Semantically Rich Visual Descriptors and Their Applications in Robotics." International Journal of Humanoid Robotics 7(3) (2010), 379–405. DOI: 10.1142/S0219843610002209. URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/19072
Abstract: We present a novel representation of visual information, based on local symbolic descriptors, that we call visual primitives. These primitives (1) combine different visual modalities, (2) associate semantics with local scene information, and (3) reduce the bandwidth while increasing the predictability of the information exchanged across the system. This representation leads to the concept of early cognitive vision, which we define as an intermediate level between dense, signal-based early vision and high-level cognitive vision. The framework's potential is demonstrated in several applications, in particular in robotics and humanoid robotics, which are briefly outlined.
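The abstract above describes visual primitives as local, condensed, multi-modal symbolic descriptors. The sketch below is a purely hypothetical container for such a descriptor; the specific field names (orientation, phase, colour, optic flow) are assumptions suggested by the abstract's mention of combined modalities, not the paper's actual definition.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class VisualPrimitive:
    """Hypothetical container for a local, condensed, multi-modal descriptor."""
    position: np.ndarray     # local image (or scene) position
    orientation: float       # dominant local orientation [rad]
    phase: float             # local phase (edge- vs. line-like structure)
    color: np.ndarray        # colour information around the local structure
    optic_flow: np.ndarray   # local motion estimate
    confidence: float = 1.0  # reliability of the descriptor

def payload_bytes(primitives):
    """Rough size of a condensed, primitive-based encoding of a scene."""
    return sum(p.position.nbytes + p.color.nbytes + p.optic_flow.nbytes + 3 * 8
               for p in primitives)

# Example: one primitive for a vertical blue/white edge moving to the right.
p = VisualPrimitive(position=np.array([120.0, 64.0]),
                    orientation=np.pi / 2, phase=0.0,
                    color=np.array([0.2, 0.2, 0.8, 0.9, 0.9, 0.9]),
                    optic_flow=np.array([1.5, 0.0]))
print(payload_bytes([p]))   # a handful of bytes instead of a dense image patch
```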
2004 · Journal Article
Woergoetter, Florentin; Krüger, Norbert; Pugeault, Nicolas; Calow, Dirk; Lappe, Markus; Pauwels, Karl; van Hulle, Marc M.; Tan, Sovira; Johnston, Alan: "Early Cognitive Vision: Using Gestalt-Laws for Task-Dependent, Active Image-Processing." Natural Computing 3(3) (2004), 293–321. DOI: 10.1023/b:naco.0000036817.38320.fe. URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/8605
Abstract: The goal of this review is to discuss different strategies employed by the visual system to limit data flow and to focus data processing. These strategies can be hard-wired, like the eccentricity-dependent visual resolution, or dynamically changing, like the mechanisms of visual attention. We ask to what degree such strategies are also useful in a computer vision context. Specifically, we discuss how to adapt them to technical systems in which the substrate for the computations is vastly different from that in the brain. It becomes clear that most algorithmic principles employed by natural visual systems need to be reformulated to better fit modern computer architectures. In addition, we try to show that it is possible to employ multiple strategies in parallel to arrive at a flexible and robust computer vision system based on recurrent feedback loops and on information derived from the statistics of natural images.
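One of the hard-wired data-reduction strategies mentioned in the review above is eccentricity-dependent resolution. The following sketch shows a generic log-polar-like subsampling scheme in that spirit; it illustrates the general idea only, all parameters are chosen arbitrarily, and it is not taken from the paper.

```python
import numpy as np

def foveated_sample(image, cx, cy, n_rings=32, n_wedges=64, r_min=2.0):
    """Eccentricity-dependent (log-polar-like) subsampling of an image.

    Resolution is high near the fixation point (cx, cy) and falls off with
    eccentricity, drastically reducing the amount of data passed on; one of
    the 'hard-wired' strategies discussed in the review.  Purely illustrative.
    """
    h, w = image.shape[:2]
    r_max = np.hypot(max(cx, w - cx), max(cy, h - cy))
    radii = np.geomspace(r_min, r_max, n_rings)          # logarithmic ring spacing
    angles = np.linspace(0, 2 * np.pi, n_wedges, endpoint=False)
    xs = np.clip((cx + np.outer(radii, np.cos(angles))).astype(int), 0, w - 1)
    ys = np.clip((cy + np.outer(radii, np.sin(angles))).astype(int), 0, h - 1)
    return image[ys, xs]                                  # (n_rings, n_wedges) samples

# Example: a 512x512 image collapses to 32*64 = 2048 samples (~0.8 % of the pixels).
img = np.random.rand(512, 512)
print(foveated_sample(img, 256, 256).shape)
```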
2012 · Journal Article
Kulvicius, Tomas; Ning, KeJun; Tamosiunaite, Minija; Woergoetter, Florentin: "Joining Movement Sequences: Modified Dynamic Movement Primitives for Robotics Applications Exemplified on Handwriting." IEEE Transactions on Robotics 28(1) (2012), 145–157. DOI: 10.1109/TRO.2011.2163863. URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/27247
Abstract: The generation of complex movement patterns, in particular where trajectories must be joined smoothly, accurately, and dynamically, is an important problem in robotics. This paper presents a novel joining method based on a modification of the original dynamic movement primitive formulation. The new method reproduces the target trajectory with high accuracy in both position and velocity profile and produces smooth and natural transitions in position space as well as in velocity space. The properties of the method are demonstrated by applying it to simulated handwriting generation, also shown on a robot, where an adaptive algorithm is used to learn trajectories from human demonstration. These results demonstrate that the new method is a feasible alternative for joining movement sequences, with high potential for all robotics applications where trajectory joining is required.
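The paper above modifies the DMP formulation itself so that joined movement sequences remain continuous in position and velocity. As a loose illustration of that objective only (not of the paper's method), the sketch below cross-fades two smooth trajectory segments over an overlap window with a sigmoidal weight; the function name and parameters are made up for this example.

```python
import numpy as np

def join_segments(traj_a, traj_b, overlap=50, steepness=0.2):
    """Smoothly join two trajectory segments (illustrative, not the paper's method).

    The last `overlap` samples of traj_a are cross-faded into the first
    `overlap` samples of traj_b with a sigmoidal weight, giving a transition
    that is smooth in position and, for smooth inputs, in velocity as well.
    """
    t = np.arange(overlap)
    w = 1.0 / (1.0 + np.exp(-steepness * (t - overlap / 2)))   # 0 -> 1 blend weight
    blend = (1 - w) * traj_a[-overlap:] + w * traj_b[:overlap]
    return np.concatenate([traj_a[:-overlap], blend, traj_b[overlap:]])

# Example: join a rising ramp onto a sine segment without a jump at the seam.
a = np.linspace(0.0, 1.0, 200)
b = 1.0 + 0.2 * np.sin(np.linspace(0, np.pi, 200))
joined = join_segments(a, b)
print(joined.shape, joined[145:155].round(3))
```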
2003 · Journal Article
Porr, Bernd; von Ferber, Christian; Woergoetter, Florentin: "ISO Learning Approximates a Solution to the Inverse-Controller Problem in an Unsupervised Behavioral Paradigm." Neural Computation 15(4) (2003), 865–884. DOI: 10.1162/08997660360581930. URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/8591
Abstract: In "Isotropic Sequence Order Learning" (pp. 831–864 in this issue), we introduced a novel algorithm for temporal sequence learning (ISO learning). Here, we embed this algorithm into a formal nonevaluating (teacher-free) environment, which establishes a sensor-motor feedback. The system is initially guided by a fixed reflex reaction, which has the objective disadvantage that it can react only after a disturbance has occurred. ISO learning eliminates this disadvantage by replacing the reflex-loop reactions with earlier anticipatory actions. In this article, we analytically demonstrate that this process can be understood in terms of control theory, showing that the system learns the inverse controller of its own reflex. Thereby, this system is able to learn a simple form of feedforward motor control.
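ISO learning, referenced in the entry above, changes each weight in proportion to the correlation between its input and the temporal derivative of the output. The discrete-time sketch below illustrates that rule on smooth pulse inputs standing in for the band-pass-filtered signals of the original formulation; the setup (one fixed reflex input, one earlier predictive input) and all constants are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def iso_learning(u, mu=0.05):
    """Discrete-time sketch of the ISO learning rule: dw_j/dt = mu * u_j * dv/dt.

    u has shape (n_inputs, n_steps); the output is v = w @ u[:, t].
    Input 0 is treated as the fixed reflex pathway (its weight stays at 1),
    while all other inputs adapt their weights.
    """
    n_inputs, n_steps = u.shape
    w = np.zeros(n_inputs)
    w[0] = 1.0                              # fixed reflex weight
    v_prev = w @ u[:, 0]
    for t in range(1, n_steps):
        v = w @ u[:, t]
        dv = v - v_prev                     # temporal derivative of the output
        w[1:] += mu * u[1:, t] * dv         # ISO rule for the predictive inputs
        v_prev = v
    return w

# Example: the predictive input (index 1) peaks 20 steps before the reflex
# input (index 0); its weight grows, enabling an earlier, anticipatory response.
t = np.arange(600)
reflex     = np.exp(-0.5 * ((t - 340) / 15.0) ** 2)
predictive = np.exp(-0.5 * ((t - 320) / 15.0) ** 2)
print(iso_learning(np.vstack([reflex, predictive])))
```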
1998 · Journal Article
Porr, Bernd; Cozzi, Alex; Woergoetter, Florentin: "How to 'hear' visual disparities: real-time stereoscopic spatial depth analysis using temporal resonance." Biological Cybernetics 78(5) (1998), 329–336. DOI: 10.1007/s004220050437. URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/8608
Abstract: In a stereoscopic system, both eyes or cameras have a slightly different view. As a consequence, small variations between the projected images exist ("disparities"), which are spatially evaluated in order to retrieve depth information (Sanger 1988; Fleet et al. 1991). A strong similarity exists between the analysis of visual disparities and the determination of the azimuth of a sound source (Wagner and Frost 1993), where the direction of the sound is determined from the temporal delay between the left and right ear signals (Konishi and Sullivan 1986). Here we similarly transpose the spatially defined problem of disparity analysis into the temporal domain and use two resonators, implemented as causal (electronic) filters, to determine the disparity as local temporal phase differences between the left and right filter responses. This approach permits real-time analysis and can be solved analytically for a step-function contrast change, an important case in all real-world applications. The proposed theoretical framework for spatial depth retrieval thus directly uses a temporal algorithm borrowed from auditory signal analysis, so the suggested similarity between the visual and the auditory system in the brain (Wagner and Frost 1993) finds its analogy here at the algorithmic level. We compare the results of the temporal resonance algorithm with those of several other techniques, such as cross-correlation and spatial phase-based disparity estimation, and show that the novel algorithm achieves performance similar to the "classical" approaches while requiring far fewer computational resources.
(An illustrative sketch of the temporal-phase idea appears after the 1991 entry below.)

1991 · Journal Article
Wörgötter, Florentin; Holt, G.: "Spatio-temporal mechanisms in receptive fields of visual cortical simple cells: A model." Journal of Neurophysiology 65 (1991), 494–510. URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/10199
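Relating to the 1998 temporal-resonance entry above: the core idea is to read image rows out in time, filter left and right with causal resonators, and convert the local phase difference of the responses into a disparity. The sketch below mirrors that idea only loosely with a single damped complex resonator; the kernel, frequencies, and test signal are illustrative assumptions, not the paper's filters.

```python
import numpy as np

def disparity_by_temporal_phase(left_row, right_row, freq=0.15, decay=0.05):
    """Disparity as a temporal phase difference (illustrative sketch).

    Each image row is treated as a signal scanned in time and filtered with a
    causal, damped complex resonator; the local phase difference between the
    left and right responses, divided by the resonator frequency, gives an
    estimate of the horizontal shift.
    """
    n = len(left_row)
    t = np.arange(n)
    kernel = np.exp(-decay * t) * np.exp(1j * 2 * np.pi * freq * t)  # causal resonator
    resp_l = np.convolve(left_row, kernel)[:n]
    resp_r = np.convolve(right_row, kernel)[:n]
    dphi = np.angle(resp_l * np.conj(resp_r))            # local phase difference
    return dphi / (2 * np.pi * freq)                     # phase -> pixel shift

# Example: the right row is the left row shifted by 3 pixels.
x = np.arange(256)
left = np.sin(2 * np.pi * 0.15 * x)
right = np.sin(2 * np.pi * 0.15 * (x - 3))
print(np.round(np.median(disparity_by_temporal_phase(left, right)[50:]), 2))  # ~3 pixels
```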