Now showing 1 - 5 of 5
  • 2010 · Journal Article
    Kolodziejski, Christoph; Tetzlaff, Christian; Wörgötter, Florentin (2010). Closed-form treatment of the interactions between neuronal activity and timing-dependent plasticity in networks of linear neurons. Frontiers in Computational Neuroscience 4: 134. DOI: 10.3389/fncom.2010.00134. PMID: 21152348. ISSN: 1662-5188. https://resolver.sub.uni-goettingen.de/purl?gro-2/18773
    Abstract: Network activity and network connectivity mutually influence each other. Especially for fast processes such as spike-timing-dependent plasticity (STDP), which depends on the interaction of only two signals, the question arises how these interactions continuously alter the behavior and structure of the network. Addressing this question requires a time-continuous treatment of plasticity, which is currently not possible even in simple recurrent network structures. Here we therefore develop, for a linear differential Hebbian learning system, a method by which we can analytically investigate the dynamics and stability of the connections in recurrent networks. We use noisy periodic external input signals, which through the recurrent connections lead to complex ongoing actual inputs, and we observe that large stable ranges emerge in these networks without boundaries or weight normalization. Somewhat counter-intuitively, about 40% of these stable cases are obtained with a long-term-potentiation-dominated STDP curve. Noise can reduce stability in some cases, but generally it does not; instead, stable domains are often enlarged. This study is a first step toward a better understanding of the ongoing interaction between activity and plasticity in recurrent networks under STDP. The results suggest that stability of (sub-)networks should generically also be present in larger structures.
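    The setting the abstract describes can be sketched numerically. Below is a minimal toy version (all parameters illustrative, not taken from the paper) of a linear differential Hebbian rule, weight change proportional to presynaptic activity times the change of the postsynaptic output, acting on two recurrently coupled linear neurons driven by noisy periodic input, with no weight bounds or normalization:

        import numpy as np

        rng = np.random.default_rng(0)
        mu, steps = 5e-4, 20000          # learning rate and duration (assumed)

        w = np.array([0.3, 0.3])         # plastic weights: w[0] is 2->1, w[1] is 1->2
        v = np.zeros(2)                  # outputs of the two linear neurons

        for t in range(steps):
            # Noisy periodic external drive, as in the paper's setting.
            x = np.sin(2 * np.pi * t / 200) + 0.1 * rng.standard_normal(2)
            v_new = x + np.array([w[0] * v[1], w[1] * v[0]])   # linear recurrence
            # Differential Hebbian update: pre-activity * d(post), with no
            # clipping and no weight normalization applied.
            w[0] += mu * v[1] * (v_new[0] - v[0])
            w[1] += mu * v[0] * (v_new[1] - v[1])
            v = v_new

        print("final weights:", w.round(4))  # do they settle despite the absence of bounds?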
  • 2008 · Journal Article
    Kolodziejski, Christoph; Porr, Bernd; Wörgötter, Florentin (2008). On the Asymptotic Equivalence Between Differential Hebbian and Temporal Difference Learning. Neural Computation 21(4): 1173-1202. DOI: 10.1162/neco.2008.04-08-750. ISSN: 0899-7667. https://resolver.sub.uni-goettingen.de/purl?gro-2/8632
    Abstract: In this theoretical contribution, we provide a mathematical proof that two of the most important classes of network learning, correlation-based differential Hebbian learning and reward-based temporal difference learning, are asymptotically equivalent when learning is timed by a modulatory signal. This opens the opportunity to consistently reformulate most of the abstract reinforcement learning framework from a correlation-based perspective that is more closely related to the biophysics of neurons.
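    The correspondence can be illustrated on a toy chain task. The sketch below is mine, not the paper's proof: the modulatory gating is implicit in updating only the currently visited state, and the differential Hebbian update is written as presynaptic input times a discrete temporal derivative of the output-plus-reward trace, which here coincides with the undiscounted TD(0) error:

        import numpy as np

        # A 5-state chain with reward 1 on reaching the terminal state.
        n, alpha, episodes = 5, 0.1, 500
        w_td = np.zeros(n)      # tabular TD(0) values, gamma = 1
        w_dh = np.zeros(n)      # weights learned by the differential Hebbian rule

        for _ in range(episodes):
            for s in range(n):
                r = 1.0 if s == n - 1 else 0.0
                v_next = 0.0 if s == n - 1 else w_td[s + 1]
                # TD(0), undiscounted: delta = r + V(s') - V(s).
                w_td[s] += alpha * (r + v_next - w_td[s])

                # Differential Hebbian form: dw_i ~ u_i * d/dt(output + reward).
                u = np.eye(n)[s]                      # one-hot presynaptic input
                v_now = w_dh @ u
                v_nxt = 0.0 if s == n - 1 else w_dh[s + 1]
                dv = (v_nxt + r) - v_now              # discrete temporal derivative
                w_dh += alpha * u * dv                # Hebb: pre * d(post)/dt

        print(np.round(w_td, 3))  # both converge to the same values (all ones)
        print(np.round(w_dh, 3))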
  • 2011 · Journal Article
    Porr, Bernd; McCabe, Lynsey; di Prodi, Paolo; Kolodziejski, Christoph; Wörgötter, Florentin (2011). How feedback inhibition shapes spike-timing-dependent plasticity and its implications for recent Schizophrenia models. Neural Networks 24(6): 560-567. DOI: 10.1016/j.neunet.2011.03.004. PMID: 21477988. ISSN: 0893-6080. https://resolver.sub.uni-goettingen.de/purl?gro-2/22600
    Abstract: It has been shown that plasticity is not a fixed property but changes depending on the location of the synapse on the neuron and/or on biophysical parameters. Here, we investigate how plasticity is shaped by feedback inhibition in a cortical microcircuit. We use a differential Hebbian learning rule to model spike-timing-dependent plasticity and show analytically that feedback inhibition shortens the time window for LTD during spike-timing-dependent plasticity but not for LTP. We then use a realistic GENESIS model to test two hypotheses about interneuron hypofunction and conclude that a reduction in GAD67 is the most likely cause of the hypofrontality observed in schizophrenia.
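    The analytical result (inhibition shortens the LTD window, leaves LTP unchanged) can be visualized with a standard double-exponential STDP curve, where shrinking only the LTD time constant stands in for the effect of feedback inhibition. All parameters are illustrative, not taken from the paper or its GENESIS model:

        import numpy as np

        def stdp_window(dt_ms, a_plus=1.0, a_minus=0.5, tau_plus=17.0, tau_minus=34.0):
            """Double-exponential STDP curve: pre-before-post (dt > 0) gives LTP."""
            return np.where(dt_ms > 0,
                            a_plus * np.exp(-dt_ms / tau_plus),
                            -a_minus * np.exp(dt_ms / tau_minus))

        dts = np.linspace(-80, 80, 9)
        # Feedback inhibition modeled (illustratively) as a shorter LTD time
        # constant, with the LTP branch left untouched.
        no_inhibition = stdp_window(dts, tau_minus=34.0)
        with_inhibition = stdp_window(dts, tau_minus=12.0)

        for d, a, b in zip(dts, no_inhibition, with_inhibition):
            print(f"dt={d:+6.1f} ms  no-inh={a:+.3f}  inh={b:+.3f}")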
  • 2015 · Journal Article
    Ren, Guanjiao; Chen, Weihai; Dasgupta, Sakyasingha; Kolodziejski, Christoph; Wörgötter, Florentin; Manoonpong, Poramate (2015). Multiple chaotic central pattern generators with learning for legged locomotion and malfunction compensation. Information Sciences 294: 666-682. DOI: 10.1016/j.ins.2014.05.001. ISSN: 0020-0255 (print), 1872-6291 (online). https://resolver.sub.uni-goettingen.de/purl?gro-2/37925
    Abstract: An originally chaotic system can be controlled into various periodic dynamics. When it is implemented in a legged robot's locomotion control as a central pattern generator (CPG), sophisticated gait patterns arise, allowing the robot to perform various walking behaviors. However, such a single chaotic CPG controller has difficulties dealing with leg malfunction: in the scenarios presented here, its movement permanently deviates from the desired trajectory. To address this problem, we extend the single chaotic CPG to multiple CPGs with learning. The learning mechanism is based on a simulated annealing algorithm. In a normal situation, the CPGs synchronize and their dynamics are identical. With leg malfunction or disability, the CPGs lose synchronization, leading to independent dynamics. In this case, the learning mechanism automatically adjusts the remaining legs' oscillation frequencies so that the robot adapts its locomotion to the malfunction. As a consequence, the trajectory produced by the multiple chaotic CPGs resembles the original trajectory far better than the one produced by a single CPG. The performance of the system is evaluated first in physical simulations of a quadruped and a hexapod robot and finally on a real six-legged walking machine called AMOSII. The experimental results show that multiple CPGs with learning are an effective approach to adaptive locomotion generation where, for instance, different body parts have to perform independent movements to compensate for a malfunction.
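    The learning mechanism the abstract names is plain simulated annealing over the remaining legs' oscillation frequencies. A minimal sketch follows; the cost function below is a hypothetical stand-in for the measured trajectory deviation (the real system evaluates the robot's walking path):

        import math
        import random

        random.seed(1)

        def trajectory_error(freqs):
            # Hypothetical cost: penalize leg frequencies that deviate from a
            # 1.0 Hz reference gait (stand-in for measured path deviation).
            return sum((f - 1.0) ** 2 for f in freqs)

        # Anneal the frequencies of the 5 working legs (one leg assumed
        # malfunctioning and excluded), as in the paper's scheme.
        freqs = [random.uniform(0.5, 1.5) for _ in range(5)]
        cost = trajectory_error(freqs)
        temp = 1.0
        while temp > 1e-3:
            i = random.randrange(len(freqs))
            cand = freqs.copy()
            cand[i] += random.gauss(0, 0.1)        # perturb one leg's frequency
            c = trajectory_error(cand)
            # Metropolis acceptance: take improvements, sometimes accept worse.
            if c < cost or random.random() < math.exp((cost - c) / temp):
                freqs, cost = cand, c
            temp *= 0.995                          # geometric cooling schedule

        print([round(f, 3) for f in freqs], round(cost, 6))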
  • 2013 · Journal Article
    Manoonpong, Poramate; Kolodziejski, Christoph; Wörgötter, Florentin; Morimoto, Jun (2013). Combining correlation-based and reward-based learning in neural control for policy improvement. Advances in Complex Systems 16(2-3): 1350015. DOI: 10.1142/S021952591350015X. ISSN: 0219-5259. https://resolver.sub.uni-goettingen.de/purl?gro-2/29939
    Abstract: Classical conditioning (conventionally modeled as correlation-based learning) and operant conditioning (conventionally modeled as reinforcement or reward-based learning) have both been found in biological systems, and evidence shows that the two mechanisms strongly involve learning about associations. Based on these biological findings, we propose a new learning model for achieving successful control policies in artificial systems. The model combines correlation-based learning, using input correlation (ICO) learning, with reward-based learning, using continuous actor-critic reinforcement learning, thereby working as a dual-learner system. Its performance is evaluated in simulations of a cart-pole system, as a dynamic motion control problem, and of a mobile robot, as a goal-directed behavior control problem. The results show that the model strongly improves the pole-balancing control policy: the controller learns to stabilize the pole over a larger domain of initial conditions than with a single learning mechanism. The model also finds a successful control policy for goal-directed behavior: the robot learns to approach a given goal more effectively than with either of its individual components. The study thus sharpens our understanding of how two different learning mechanisms can be combined and complement each other for solving complex tasks.
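    The dual-learner structure can be sketched as two weight updates driving one motor command: an ICO pathway correlating predictive inputs with the derivative of a late reflex signal, and a continuous actor-critic pathway driven by a TD error. Everything below (the 2-D toy plant, reward, and parameters) is illustrative, not the paper's cart-pole or robot setup:

        import numpy as np

        rng = np.random.default_rng(0)
        mu_ico, alpha, gamma = 0.01, 0.05, 0.95
        w_ico, w_actor, w_critic = np.zeros(2), np.zeros(2), np.zeros(2)

        x = rng.standard_normal(2)       # current sensory state (toy 2-D signal)
        reflex = 0.0
        for step in range(2000):
            noise = 0.1 * rng.standard_normal()
            u = (w_ico + w_actor) @ x + noise        # combined motor command

            # Toy plant: state drifts with the command; a late "reflex" signal
            # fires when the state leaves a safe band (illustrative).
            x_next = 0.9 * x + 0.1 * u + 0.1 * rng.standard_normal(2)
            reflex_next = max(0.0, abs(x_next[0]) - 1.0)

            # ICO pathway: correlate predictive inputs with the reflex
            # derivative; learning stops once the reflex no longer occurs.
            w_ico += mu_ico * x * (reflex_next - reflex)

            # Actor-critic pathway: TD error trains both critic and actor.
            r = -reflex_next
            delta = r + gamma * (w_critic @ x_next) - (w_critic @ x)
            w_critic += alpha * delta * x
            w_actor += alpha * delta * noise * x     # exploration-weighted update

            x, reflex = x_next, reflex_next

        print("ICO weights:", w_ico.round(3), "actor weights:", w_actor.round(3))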