Now showing 1 - 4 of 4
  • 2015, Journal Article
    Dasgupta, Sakyasingha; Goldschmidt, Dennis; Woergoetter, Florentin; Manoonpong, Poramate (2015). "Distributed recurrent neural forward models with synaptic adaptation and CPG-based control for complex behaviors of walking robots." Frontiers in Neurorobotics, vol. 9, article 10. Frontiers Media SA. ISSN 1662-5218. DOI: 10.3389/fnbot.2015.00010. PMID: 26441629. License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0). URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/35924
    Abstract: Walking animals such as stick insects, cockroaches, and ants demonstrate a fascinating range of locomotive abilities and complex behaviors. Their locomotion comprises a variety of walking patterns together with adaptations that allow them to cope with changing environmental conditions such as uneven terrain, gaps, and obstacles. Biological studies have revealed that such complex behaviors arise from a combination of biomechanics and neural mechanisms, thus representing truly embodied interactions. While the biomechanics maintains flexibility and sustains a variety of movements, the neural mechanisms generate movements while making the predictions that are crucial for adaptation. Such prediction, or planning ahead, can be achieved through internal models that are grounded in the overall behavior of the animal. Inspired by these findings, we present an artificial bio-inspired walking system that combines biomechanics (the body and leg structures) with the underlying neural mechanisms. The neural mechanisms consist of (1) central pattern generator (CPG) based control for generating basic rhythmic patterns and coordinated movements, (2) distributed (per-leg) recurrent neural network based adaptive forward models with efference copies, serving as internal models for sensory prediction and instantaneous state estimation, and (3) searching and elevation control for adapting the movement of an individual leg to different environmental conditions. Using simulations, we show that this bio-inspired approach with adaptive internal models allows the walking robot to perform complex locomotive behaviors observed in insects, including walking on undulating terrain, crossing large gaps, adapting to leg damage, and climbing over high obstacles. Furthermore, we demonstrate that the newly developed recurrent-network-based approach to online forward models outperforms adaptive neuron forward models, hitherto the state of the art, on a subset of similar walking behaviors. (An illustrative sketch of the CPG-plus-forward-model idea follows this entry.)
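    The abstract above describes a CPG that generates rhythmic motor commands and per-leg recurrent forward models that predict sensory feedback from efference copies, with the prediction error used to trigger searching and elevation reflexes. The following Python sketch only illustrates that idea under simplifying assumptions (a generic two-neuron oscillator, a single adaptive recurrent unit per leg, and a toy foot-contact signal); it is not the paper's implementation, and all parameter values are illustrative.

      # Minimal sketch, not the paper's code: CPG drives one leg, a per-leg
      # forward model learns online to predict the foot-contact signal from an
      # efference copy of the motor command. Values are illustrative assumptions.
      import numpy as np

      class TwoNeuronCPG:
          """Generic recurrent two-neuron oscillator producing a rhythmic output."""
          def __init__(self, w_self=1.4, w_cross=0.18):
              self.o = np.array([0.1, 0.1])            # neuron outputs
              self.W = np.array([[w_self,  w_cross],
                                 [-w_cross, w_self]])
          def step(self):
              self.o = np.tanh(self.W @ self.o)
              return self.o[0]                         # neuron 1 as motor command

      class LegForwardModel:
          """Single adaptive unit with a recurrent self-connection that predicts
          the foot-contact sensor from the efference copy of the motor command."""
          def __init__(self, lr=0.05):
              self.w_in, self.w_rec, self.pred, self.lr = 0.5, 0.1, 0.0, lr
          def step(self, efference_copy, contact_sensor):
              prev_pred = self.pred
              self.pred = np.tanh(self.w_in * efference_copy + self.w_rec * prev_pred)
              error = contact_sensor - self.pred       # sensory prediction error
              # simple delta-rule style online adaptation of both weights
              self.w_in += self.lr * error * efference_copy
              self.w_rec += self.lr * error * prev_pred
              return error

      cpg, fm = TwoNeuronCPG(), LegForwardModel()
      for t in range(300):
          motor = cpg.step()
          contact = 1.0 if motor < 0.0 else 0.0        # toy stance/swing ground contact
          err = fm.step(motor, contact)
          if t > 250 and abs(err) > 0.5:
              # stand-in for the paper's searching/elevation reflex trigger
              print(f"t={t}: large prediction error {err:+.2f}")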
  • 2017, Journal Article
    Goldschmidt, Dennis; Manoonpong, Poramate; Dasgupta, Sakyasingha (2017). "A Neurocomputational Model of Goal-Directed Navigation in Insect-Inspired Artificial Agents." Frontiers in Neurorobotics, vol. 11, article 20. Frontiers Media S.A. ISSN 1662-5218. DOI: 10.3389/fnbot.2017.00020. PMID: 28446872. Funding: EU H2020 grant 732266 (Plan4Act). License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0). URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/59123
    Abstract: Despite their small size, insect brains produce robust and efficient navigation in complex environments. In social insects such as ants and bees, these navigational capabilities are guided by orientation vectors generated by a process called path integration. During this process, the insects integrate compass and odometric cues to estimate their current location as a vector, the home vector, which guides them back to the nest on a straight path. They further acquire and retrieve path-integration-based vector memories, either globally with respect to the nest or relative to visual landmarks. Although existing computational models reproduce similar behaviors, a neurocomputational model of vector navigation that includes the acquisition of vector representations has not been described before. Here we present a model of the neural mechanisms in a modular closed loop that enables vector navigation in artificial agents. The model consists of a path integration mechanism, reward-modulated global learning, random search, and action selection. The path integration mechanism integrates compass and odometric cues to compute a vectorial representation of the agent's current location as neural activity patterns in circular arrays. A reward-modulated learning rule enables the acquisition of vector memories by associating the local food reward with the path integration state. A motor output is computed from the combination of vector memories and random exploration. In simulation, we show that these neural mechanisms enable robust homing and localization, even in the presence of external sensory noise. The proposed learning rules lead to goal-directed navigation and route formation under realistic conditions. We thereby provide a novel approach to vector learning and navigation in a simulated, situated agent, linking behavioral observations to their possible underlying neural substrates. (A path-integration sketch follows this entry.)
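    As described above, the model encodes the agent's location as activity patterns in circular arrays built from compass and odometric cues. Below is a minimal Python sketch of such a path integrator with a population-vector readout of the home vector; the array size, rectified cosine tuning, and the toy outbound trip are illustrative assumptions and do not reproduce the paper's implementation.

      # Minimal sketch, illustrative assumptions only: path integration on a
      # circular array of direction-tuned units and a population-vector readout.
      import numpy as np

      N = 18                                     # units in the circular array (assumed)
      theta = 2.0 * np.pi * np.arange(N) / N     # preferred direction of each unit

      def heading_activity(heading):
          """Rectified cosine tuning of the compass input."""
          return np.maximum(np.cos(heading - theta), 0.0)

      def integrate_path(headings, speeds):
          """Accumulate odometry-weighted compass activity (the PI state)."""
          pi_state = np.zeros(N)
          for h, s in zip(headings, speeds):
              pi_state += s * heading_activity(h)
          return pi_state

      def decode_vector(pi_state):
          """Population-vector readout: direction and (relative) length."""
          x = np.sum(pi_state * np.cos(theta))
          y = np.sum(pi_state * np.sin(theta))
          return np.arctan2(y, x), np.hypot(x, y)

      # Toy outbound trip: 10 steps east, then 10 steps north.
      headings = [0.0] * 10 + [np.pi / 2] * 10
      speeds = [1.0] * 20
      angle, length = decode_vector(integrate_path(headings, speeds))
      # The PI state points from nest to agent; the home direction is opposite.
      print(f"home direction: {np.degrees(angle) + 180.0:.1f} deg, relative length {length:.1f}")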
  • 2014, Journal Article
    Dasgupta, Sakyasingha; Woergoetter, Florentin; Manoonpong, Poramate (2014). "Neuromodulatory adaptive combination of correlation-based learning in cerebellum and reward-based learning in basal ganglia for goal-directed behavior control." Frontiers in Neural Circuits, vol. 8, article 126. Frontiers Research Foundation. ISSN 1662-5110. DOI: 10.3389/fncir.2014.00126. PMID: 25389391. Sponsorship: Open-Access-Publikationsfonds 2014. Organisational unit: Fakultät für Physik. License: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0). URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/31952
    Abstract: Goal-directed decision making in biological systems is broadly based on associations between conditional and unconditional stimuli. These can be further classified into classical conditioning (correlation-based learning) and operant conditioning (reward-based learning). A number of computational and experimental studies have established the role of the basal ganglia in reward-based learning, whereas the cerebellum plays an important role in developing specific conditioned responses. Although viewed as distinct learning systems, recent animal experiments point toward their complementary roles in behavioral learning and show substantial two-way communication between these two brain structures. Based on this notion of cooperative learning, we hypothesize that the basal ganglia and cerebellar learning systems work in parallel and interact with each other. We envision that this interaction is influenced by a reward-modulated heterosynaptic plasticity (RMHP) rule at the thalamus, guiding the overall goal-directed behavior. Using a recurrent neural network actor-critic model of the basal ganglia and a feed-forward correlation-based learning model of the cerebellum, we demonstrate that the RMHP rule can effectively balance the outcomes of the two learning systems. This is tested in simulated environments of increasing complexity, with a four-wheeled robot performing a foraging task in both static and dynamic configurations. Although modeled at a simplified level of biological abstraction, we clearly demonstrate that such an RMHP-induced combinatorial learning mechanism leads to more stable and faster learning of goal-directed behaviors than either system alone. We thus provide a computational model for the adaptive combination of the basal ganglia and cerebellum learning systems, by way of neuromodulated plasticity, for goal-directed decision making in biological and biomimetic organisms. (A sketch of a reward-modulated combination rule follows this entry.)
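    The abstract describes combining an actor-critic (basal ganglia) output with a correlation-based (cerebellar) output through a reward-modulated heterosynaptic plasticity (RMHP) rule at the thalamus. The Python sketch below shows one plausible reading of such a combination stage; the specific update rule, reward signal, and learner outputs are illustrative assumptions and do not reproduce the paper's equations.

      # Minimal sketch, illustrative assumptions only: two learner outputs are
      # combined by weights adapted with a reward-modulated heterosynaptic rule.
      import numpy as np

      rng = np.random.default_rng(0)
      w = np.array([0.5, 0.5])          # combination weights at the "thalamus" stage
      lr = 0.01

      def rmhp_update(w, reward, inputs, combined):
          """Reward-modulated heterosynaptic update: each weight changes in
          proportion to the reward, its own input, and the combined output."""
          return w + lr * reward * inputs * combined

      for step in range(1000):
          # Hypothetical outputs of the two learning systems for the current state.
          o_ac = rng.normal(0.0, 0.5)          # actor-critic-like (basal ganglia) output
          o_ico = rng.normal(0.3, 0.2)         # correlation-learning-like (cerebellar) output
          inputs = np.array([o_ac, o_ico])
          combined = float(w @ inputs)         # steering command sent to the robot
          reward = 1.0 - abs(combined - 0.3)   # toy reward favoring commands near 0.3
          w = np.clip(rmhp_update(w, reward, inputs, combined), 0.0, 2.0)

      print("final combination weights (actor-critic, correlation):", np.round(w, 2))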
  • 2015, Journal Article
    Ren, Guanjiao; Chen, Weihai; Dasgupta, Sakyasingha; Kolodziejski, Christoph; Woergoetter, Florentin; Manoonpong, Poramate (2015). "Multiple chaotic central pattern generators with learning for legged locomotion and malfunction compensation." Information Sciences, vol. 294, pp. 666-682. Elsevier Science Inc. ISSN 0020-0255 (print), 1872-6291 (online). DOI: 10.1016/j.ins.2014.05.001. URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/37925
    Abstract: An originally chaotic system can be controlled into various periodic dynamics. When implemented in a legged robot's locomotion control as a central pattern generator (CPG), sophisticated gait patterns arise, allowing the robot to perform various walking behaviors. However, such a single chaotic CPG controller has difficulty dealing with leg malfunction: in the scenarios presented here, the robot's movement permanently deviates from the desired trajectory. To address this problem, we extend the single chaotic CPG to multiple CPGs with learning. The learning mechanism is based on a simulated annealing algorithm. In a normal situation, the CPGs synchronize and their dynamics are identical. With leg malfunction or disability, the CPGs lose synchronization, leading to independent dynamics. In this case, the learning mechanism automatically adjusts the oscillation frequencies of the remaining legs so that the robot adapts its locomotion to the malfunction. As a consequence, the trajectory produced by the multiple chaotic CPGs resembles the original trajectory far better than the one produced by a single CPG. The performance of the system is evaluated first in physical simulations of a quadruped and a hexapod robot, and finally on a real six-legged walking machine called AMOSII. The experimental results reveal that multiple CPGs with learning are an effective approach to adaptive locomotion generation where, for instance, different body parts have to perform independent movements to compensate for malfunctions. (A simulated-annealing sketch follows this entry.)
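    The learning mechanism described above adapts the remaining legs' oscillation frequencies by simulated annealing after a malfunction. The Python sketch below shows a generic simulated-annealing loop over per-leg frequencies with a toy surrogate cost standing in for the robot's trajectory deviation; the broken-leg index, cost function, and cooling schedule are illustrative assumptions, not the paper's implementation.

      # Minimal sketch, illustrative assumptions only: simulated annealing over
      # per-leg oscillation frequencies after a hypothetical leg malfunction.
      import math
      import random

      random.seed(1)
      NUM_LEGS = 6
      BROKEN_LEG = 3                       # hypothetical malfunctioning leg (excluded)

      def trajectory_deviation(freqs):
          """Toy surrogate cost: smallest when the working legs share a common
          frequency near a nominal value (stand-in for a physics simulation)."""
          working = [f for i, f in enumerate(freqs) if i != BROKEN_LEG]
          mean_f = sum(working) / len(working)
          spread = sum((f - mean_f) ** 2 for f in working)
          return spread + (mean_f - 1.0) ** 2

      freqs = [1.5 + random.uniform(-0.3, 0.3) for _ in range(NUM_LEGS)]
      cost = trajectory_deviation(freqs)
      temperature = 1.0

      for step in range(2000):
          candidate = list(freqs)
          leg = random.choice([i for i in range(NUM_LEGS) if i != BROKEN_LEG])
          candidate[leg] += random.gauss(0.0, 0.05)    # perturb one leg's frequency
          new_cost = trajectory_deviation(candidate)
          # Metropolis acceptance: always take improvements, sometimes take worse moves.
          if new_cost < cost or random.random() < math.exp((cost - new_cost) / temperature):
              freqs, cost = candidate, new_cost
          temperature *= 0.995                         # geometric cooling schedule

      print("adapted frequencies:", [round(f, 2) for f in freqs], "cost:", round(cost, 4))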