Now showing 1 - 10 of 215
  • 2015 Journal Article
    Lyzwa, Dominika; Herrmann, J. Michael; Wörgötter, Florentin (2015). "Natural Vocalizations in the Mammalian Inferior Colliculus are Broadly Encoded by a Small Number of Independent Multi-Units." Frontiers in Neural Circuits 9:91, pp. 1–19. DOI: 10.3389/fncir.2015.00091. PMID: 26869890. License: CC BY 4.0. URL: https://resolver.sub.uni-goettingen.de/purl?gro-2/58597
    Abstract: How complex natural sounds are represented by the main converging center of the auditory midbrain, the central inferior colliculus, is an open question. We applied neural discrimination to determine the variation of detailed encoding of individual vocalizations across the best frequency gradient of the central inferior colliculus. The analysis was based on collective responses from several neurons. These multi-unit spike trains were recorded from guinea pigs exposed to a spectrotemporally rich set of eleven species-specific vocalizations. Spike trains of disparate units from the same recording were combined in order to investigate whether groups of multi-unit clusters represent the whole set of vocalizations more reliably than a single unit, and whether temporal response correlations between them facilitate an unambiguous neural representation of the vocalizations. We found a spatial distribution of the capability to accurately encode groups of vocalizations across the best frequency gradient: different vocalizations are optimally discriminated at different locations of the gradient. Furthermore, groups of a few multi-unit clusters yield improved discrimination between all tested vocalizations over a single multi-unit cluster. However, temporal response correlations between units do not yield better discrimination. Our study is based on a large set of simultaneously recorded responses from several guinea pigs and electrode insertion positions. Our findings suggest a broadly distributed code for behaviorally relevant vocalizations in the mammalian inferior colliculus: responses from a few non-interacting units are sufficient to faithfully represent the whole set of studied vocalizations with diverse spectrotemporal properties.
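    The neural-discrimination idea in the abstract above can be illustrated with a minimal sketch: bin each multi-unit spike train into a count vector and assign it to the vocalization whose mean-response template is nearest. This is a generic nearest-template discriminator with illustrative names (`bin_spike_train`, `discriminate`), not the authors' exact analysis pipeline.

```python
import numpy as np

def bin_spike_train(spike_times, duration, bin_size):
    """Convert a list of spike times (seconds) into a binned count vector."""
    n_bins = int(round(duration / bin_size))
    counts, _ = np.histogram(spike_times, bins=n_bins, range=(0.0, duration))
    return counts.astype(float)

def discriminate(response, templates):
    """Assign a binned response to the vocalization whose template is
    nearest in Euclidean distance; returns the template index."""
    dists = [np.linalg.norm(response - t) for t in templates]
    return int(np.argmin(dists))
```

    In a setting like the paper's, each template would be the average binned response of a multi-unit cluster to one vocalization, and discrimination accuracy would be measured over held-out trials.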
  • 2016 Journal Article
    Manoonpong, Poramate; Petersen, Dennis; Kovalev, Alexander; Wörgötter, Florentin; Gorb, Stanislav N.; Spinner, Marlene; Heepe, Lars (2016). "Enhanced Locomotion Efficiency of a Bio-inspired Walking Robot using Contact Surfaces with Frictional Anisotropy." Scientific Reports 6:39455. Nature Publishing Group. DOI: 10.1038/srep39455. PMID: 28008936. License: CC BY 4.0. URL: https://resolver.sub.uni-goettingen.de/purl?gro-2/38687
    Abstract: Based on the principles of morphological computation, we propose a novel approach that exploits the interaction between a passive anisotropic scale-like material (e.g., shark skin) and a non-smooth substrate to enhance the locomotion efficiency of a robot walking on inclines. Real robot experiments show that passive tribologically enhanced surfaces on the robot's belly or feet allow the robot to grip specific surfaces and move effectively with reduced energy consumption. Supplementing the robot experiments, we investigated the tribological properties of shark skin as well as its mechanical stability. The skin shows high frictional anisotropy due to an array of sloped denticles. The orientation of the denticles relative to the underlying collagenous material also strongly influences their mechanical interlocking with the substrate. This study not only opens up a new way of achieving energy-efficient legged robot locomotion but also provides a better understanding of the functionalities and mechanical properties of anisotropic surfaces. That understanding will assist in developing new types of materials for other real-world applications.
  • 2020 Journal Article
    Herzog, Sebastian; Tetzlaff, Christian; Wörgötter, Florentin (2020). "Evolving artificial neural networks with feedback." Neural Networks 123, pp. 153–162. DOI: 10.1016/j.neunet.2019.12.004. ISSN: 0893-6080. URL: https://resolver.sub.uni-goettingen.de/purl?gro-2/72672
  • 2006 Journal Article (Editorial)
    Krüger, Norbert; Wörgötter, Florentin; van Hulle, Marc M. (2006). "Editorial: ECOVISION: Challenges in Early-Cognitive Vision." International Journal of Computer Vision 72(1), pp. 5–7. DOI: 10.1007/s11263-006-8889-2. ISSN: 0920-5691. URL: https://resolver.sub.uni-goettingen.de/purl?gro-2/8597
  • 2011 Journal Article
    Tamosiunaite, Minija; Nemec, Bojan; Ude, Ales; Wörgötter, Florentin (2011). "Learning to pour with a robot arm combining goal and shape learning for dynamic movement primitives." Robotics and Autonomous Systems 59(11), pp. 910–922. Elsevier. DOI: 10.1016/j.robot.2011.07.004. URL: https://resolver.sub.uni-goettingen.de/purl?gro-2/21715
    Abstract: When describing robot motion with dynamic movement primitives (DMPs), goal (trajectory endpoint), shape, and temporal scaling parameters are used. In reinforcement learning with DMPs, the goal and temporal scaling parameters are usually predefined and only the weights for shaping the DMP are learned. In many tasks, however, the best goal position is not known a priori and must be learned as well. Thus, here we specifically address the question of how to combine goal and shape parameter learning. This is a difficult problem because learning both parameters simultaneously could easily interfere in a destructive way. We apply value function approximation techniques for goal learning and direct policy search methods for shape learning. Specifically, we use "policy improvement with path integrals" and "natural actor critic" for the policy search. We solve a learning-to-pour-liquid task in simulations as well as on a PA-10 robot arm. Results for learning from scratch, learning initialized by human demonstration, and modifying the tool for the learned DMPs are presented. We observe that the combination of goal and shape learning is stable and robust within large parameter regimes. Learning converges quickly even in the presence of disturbances, which makes this combined method suitable for robotic applications.
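    For readers unfamiliar with DMPs, the entry above is easier to follow with a minimal one-dimensional rollout in the common spring-damper formulation: the goal `g` is the parameter that goal learning tunes, while `weights` on the forcing term are what shape learning tunes. All names and parameter values here are illustrative, not those used in the paper.

```python
import numpy as np

def dmp_rollout(x0, g, weights, centers, widths,
                tau=1.0, dt=0.01, K=100.0, D=20.0, alpha=4.0):
    """Integrate a 1-D discrete DMP for one second of simulated time.

    The transformation system is a critically damped spring pulled toward
    the goal g, perturbed by a learned forcing term f(s) that fades out as
    the canonical phase s decays from 1 toward 0."""
    x, v, s = float(x0), 0.0, 1.0
    traj = [x]
    for _ in range(int(1.0 / dt)):
        psi = np.exp(-widths * (s - centers) ** 2)      # Gaussian basis functions
        f = s * psi.dot(weights) / (psi.sum() + 1e-10)  # normalized forcing term
        v += (K * (g - x) - D * v + (g - x0) * f) / tau * dt
        x += v / tau * dt
        s += -alpha * s / tau * dt                      # canonical system
        traj.append(x)
    return np.array(traj)
```

    With zero weights the rollout simply converges to `g`; reinforcement learning then perturbs the weights (and, in the paper's setting, the goal) to improve a task-specific cost.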
  • 2008 Journal Article
    Kulvicius, Tomas; Tamosiunaite, Minija; Ainge, James A.; Dudchenko, Paul A.; Wörgötter, Florentin (2008). "Odor supported place cell model and goal navigation in rodents." Journal of Computational Neuroscience 25(3), pp. 481–500. Springer. DOI: 10.1007/s10827-008-0090-x. PMID: 18431616. URL: https://resolver.sub.uni-goettingen.de/purl?gro-2/52823
    Abstract: Experiments with rodents demonstrate that visual cues play an important role in the control of hippocampal place cells and spatial navigation. Nevertheless, rats may also rely on auditory, olfactory, and somatosensory stimuli for orientation. It is also known that rats can track odors or self-generated scent marks to find a food source. Here we model odor-supported place cells using a simple feed-forward network and analyze the impact of olfactory cues on place cell formation and spatial navigation. The obtained place cells are used to solve a goal navigation task by a novel mechanism based on self-marking by odor patches combined with a Q-learning algorithm. We also analyze the impact of place cell remapping on goal-directed behavior when switching between two environments. We emphasize the importance of olfactory cues in place cell formation and show that the utility of environmental and self-generated olfactory cues, together with a mixed navigation strategy, improves goal-directed navigation.
    Sponsorship: Biotechnology and Biological Sciences Research Council [BB/C516079/1].
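    The Q-learning component of the navigation model above can be sketched on a toy problem: tabular Q-learning on a one-dimensional corridor with a single reward at the goal state. This is a generic illustration of the algorithm, not the paper's place-cell-based implementation; all names are illustrative.

```python
import random

def train_corridor(n_states=5, episodes=500, alpha=0.5, gamma=0.9,
                   eps=0.2, seed=0):
    """Tabular Q-learning on a 1-D corridor: the agent starts at state 0
    and receives reward 1 on reaching the rightmost state."""
    rng = random.Random(seed)
    Q = {s: {-1: 0.0, +1: 0.0} for s in range(n_states)}
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.choice([-1, +1])
            else:
                a = max(Q[s], key=Q[s].get)
            s2 = min(max(s + a, 0), n_states - 1)       # clamp to corridor
            r = 1.0 if s2 == n_states - 1 else 0.0
            # one-step Q-learning update toward r + gamma * max_a' Q(s', a')
            Q[s][a] += alpha * (r + gamma * max(Q[s2].values()) - Q[s][a])
            s = s2
    return Q
```

    After training, the greedy policy moves right from every state; in the paper's setting the discrete states would be replaced by the activity of the learned place cells.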
  • 2012 Book Chapter
    Vogelgesang, Jens; Cozzi, Alex; Wörgötter, Florentin. "A parallel algorithm for depth perception from radial optical flow fields." In: von der Malsburg, Christoph; von Seelen, Werner; Vorbrüggen, Jan C.; Sendhoff, Bernhard (eds.), Artificial Neural Networks — ICANN 96, Lecture Notes in Computer Science, Springer, Berlin, Heidelberg, pp. 721–725. DOI: 10.1007/3-540-61510-5_122. ISBN: 978-3-540-61510-1. URL: https://resolver.sub.uni-goettingen.de/purl?gro-2/8621
    Abstract: While optical flow has often been proposed for guiding a moving robot, its computational complexity has mostly prevented its actual use in real applications. We describe a restricted form of optical flow algorithm, which can be parallelized on chain-like neuronal structures, combining simplicity and speed. In addition, this algorithm makes use of predicted motion trajectories in order to remove noise from the input images.
  • 2010 Journal Article
    Pugeault, Nicolas; Wörgötter, Florentin; Krüger, Norbert (2010). "Visual Primitives: Local, Condensed, Semantically Rich Visual Descriptors and Their Applications in Robotics." International Journal of Humanoid Robotics 7(3), pp. 379–405. World Scientific. DOI: 10.1142/S0219843610002209. URL: https://resolver.sub.uni-goettingen.de/purl?gro-2/19072
    Abstract: We present a novel representation of visual information, based on local symbolic descriptors, that we call visual primitives. These primitives: (1) combine different visual modalities, (2) associate semantics with local scene information, and (3) reduce the bandwidth while increasing the predictability of the information exchanged across the system. This representation leads to the concept of early cognitive vision, which we define as an intermediate level between dense, signal-based early vision and high-level cognitive vision. The framework's potential is demonstrated in several applications, in particular in the area of robotics and humanoid robotics, which are briefly outlined.
  • 2004 Journal Article
    Wörgötter, Florentin; Krüger, Norbert; Pugeault, Nicolas; Calow, Dirk; Lappe, Markus; Pauwels, Karl; van Hulle, Marc M.; Tan, Sovira; Johnston, Alan (2004). "Early Cognitive Vision: Using Gestalt-Laws for Task-Dependent, Active Image-Processing." Natural Computing 3(3), pp. 293–321. DOI: 10.1023/b:naco.0000036817.38320.fe. ISSN: 1567-7818. URL: https://resolver.sub.uni-goettingen.de/purl?gro-2/8605
    Abstract: The goal of this review is to discuss different strategies employed by the visual system to limit data flow and to focus data processing. These strategies can be hard-wired, like the eccentricity-dependent visual resolution, or dynamically changing, like mechanisms of visual attention. We ask to what degree such strategies are also useful in a computer vision context. Specifically, we discuss how to adapt them to technical systems where the substrate for the computations is vastly different from that in the brain. It will become clear that most algorithmic principles employed by natural visual systems need to be reformulated to better fit modern computer architectures. In addition, we try to show that it is possible to employ multiple strategies in parallel to arrive at a flexible and robust computer vision system based on recurrent feedback loops and using information derived from the statistics of natural images.
  • 2012 Journal Article
    Kulvicius, Tomas; Ning, KeJun; Tamosiunaite, Minija; Wörgötter, Florentin (2012). "Joining Movement Sequences: Modified Dynamic Movement Primitives for Robotics Applications Exemplified on Handwriting." IEEE Transactions on Robotics 28(1), pp. 145–157. IEEE. DOI: 10.1109/TRO.2011.2163863. URL: https://resolver.sub.uni-goettingen.de/purl?gro-2/27247
    Abstract: The generation of complex movement patterns, in particular in cases where one needs to smoothly and accurately join trajectories in a dynamic way, is an important problem in robotics. This paper presents a novel joining method based on a modification of the original dynamic movement primitive formulation. The new method can reproduce the target trajectory with high accuracy regarding both the position and the velocity profile, and produces smooth and natural transitions in position space as well as in velocity space. The properties of the method are demonstrated by its application to simulated handwriting generation, which is also shown on a robot, where an adaptive algorithm is used to learn trajectories from human demonstration. These results demonstrate that the new method is a feasible alternative for joining movement sequences, with high potential for all robotics applications where trajectory joining is required.
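    The joining problem addressed above can be made concrete with a deliberately simple sketch: a sigmoid cross-fade over an overlap window keeps position continuous at the seam between two trajectory segments. The paper's method instead modifies the DMP formulation itself to also match velocity profiles; this sketch only illustrates the smoothness requirement, and all names are illustrative.

```python
import numpy as np

def join_trajectories(traj_a, traj_b, overlap=20):
    """Blend the end of traj_a into the start of traj_b using a smooth
    sigmoid weight that rises from ~0 to ~1 across the overlap window,
    giving a continuous position profile across the joint."""
    w = 1.0 / (1.0 + np.exp(-np.linspace(-6.0, 6.0, overlap)))  # 0 -> 1
    blend = (1.0 - w) * traj_a[-overlap:] + w * traj_b[:overlap]
    return np.concatenate([traj_a[:-overlap], blend, traj_b[overlap:]])
```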