Now showing 1 - 7 of 7
  • 2010 · Journal Article
    [["dc.bibliographiccitation.firstpage","255"],["dc.bibliographiccitation.issue","4"],["dc.bibliographiccitation.journal","Biological Cybernetics"],["dc.bibliographiccitation.lastpage","271"],["dc.bibliographiccitation.volume","103"],["dc.contributor.author","Kulvicius, Tomas"],["dc.contributor.author","Kolodziejski, Christoph"],["dc.contributor.author","Tamosiunaite, Minija"],["dc.contributor.author","Porr, Bernd"],["dc.contributor.author","Woergoetter, Florentin"],["dc.date.accessioned","2018-11-07T08:38:17Z"],["dc.date.available","2018-11-07T08:38:17Z"],["dc.date.issued","2010"],["dc.description.abstract","Understanding closed loop behavioral systems is a non-trivial problem, especially when they change during learning. Descriptions of closed loop systems in terms of information theory date back to the 1950s, however, there have been only a few attempts which take into account learning, mostly measuring information of inputs. In this study we analyze a specific type of closed loop system by looking at the input as well as the output space. For this, we investigate simulated agents that perform differential Hebbian learning (STDP). In the first part we show that analytical solutions can be found for the temporal development of such systems for relatively simple cases. In the second part of this study we try to answer the following question: How can we predict which system from a given class would be the best for a particular scenario? This question is addressed using energy, input/output ratio and entropy measures and investigating their development during learning. 
This way we can show that within well-specified scenarios there are indeed agents which are optimal with respect to their structure and adaptive properties."],["dc.identifier.doi","10.1007/s00422-010-0396-4"],["dc.identifier.isi","000281667700001"],["dc.identifier.pmid","20556620"],["dc.identifier.purl","https://resolver.sub.uni-goettingen.de/purl?gs-1/5158"],["dc.identifier.uri","https://resolver.sub.uni-goettingen.de/purl?gro-2/18732"],["dc.notes.intern","Merged from goescholar"],["dc.notes.status","zu prüfen"],["dc.notes.submitter","Najko"],["dc.publisher","Springer"],["dc.relation.issn","0340-1200"],["dc.rights","Goescholar"],["dc.rights.uri","https://goescholar.uni-goettingen.de/licenses"],["dc.title","Behavioral analysis of differential hebbian learning in closed-loop systems"],["dc.type","journal_article"],["dc.type.internalPublication","yes"],["dc.type.peerReviewed","yes"],["dc.type.status","published"],["dc.type.version","published_version"],["dspace.entity.type","Publication"]]
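The abstract above concerns agents that perform differential Hebbian learning (STDP-like). As a minimal sketch of such a rule (my own illustration, not code from the paper; the Gaussian toy signals and the learning rate `mu` are invented for demonstration), the weight change correlates presynaptic activity with the temporal derivative of the postsynaptic output:

```python
import numpy as np

def differential_hebb_trace(x, v, mu=0.01):
    """Differential Hebbian rule: the weight change is proportional to
    presynaptic activity times the temporal derivative of the output."""
    dv = np.diff(v, prepend=v[0])      # discrete derivative of the output
    return mu * np.cumsum(x * dv)      # accumulated weight over time

# Toy signals: the output peaks slightly after the input, as in a
# predictive (far-sense before near-sense) pairing.
t = np.arange(100)
x = np.exp(-0.5 * ((t - 30) / 5.0) ** 2)   # presynaptic (early) signal
v = np.exp(-0.5 * ((t - 35) / 5.0) ** 2)   # postsynaptic output, delayed
w = differential_hebb_trace(x, v)           # net weight ends up positive
```

Because the output's rising flank overlaps the strong part of the input, the correlation with the derivative is net positive and the weight grows; with the temporal order reversed it would shrink.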
  • 2010 · Journal Article
    [["dc.bibliographiccitation.issue","3"],["dc.bibliographiccitation.journal","Journal of Computational Neuroscience"],["dc.bibliographiccitation.volume","28"],["dc.contributor.author","Tamosiunaite, Minija"],["dc.contributor.author","Ainge, James A."],["dc.contributor.author","Kulvicius, Tomas"],["dc.contributor.author","Porr, Bernd"],["dc.contributor.author","Dudchenko, Paul A."],["dc.contributor.author","Woergoetter, Florentin"],["dc.date.accessioned","2018-11-07T08:42:48Z"],["dc.date.available","2018-11-07T08:42:48Z"],["dc.date.issued","2010"],["dc.format.extent","619"],["dc.identifier.doi","10.1007/s10827-010-0217-8"],["dc.identifier.isi","000278406500019"],["dc.identifier.purl","https://resolver.sub.uni-goettingen.de/purl?gs-1/6800"],["dc.identifier.uri","https://resolver.sub.uni-goettingen.de/purl?gro-2/19788"],["dc.notes.intern","Merged from goescholar"],["dc.notes.status","zu prüfen"],["dc.notes.submitter","Najko"],["dc.publisher","Springer"],["dc.relation.issn","0929-5313"],["dc.rights","Goescholar"],["dc.rights.uri","https://goescholar.uni-goettingen.de/licenses"],["dc.title","Path-finding in real and simulated rats: assessing the influence of path characteristics on navigation learning (vol 25, pg 562, 2008)"],["dc.type","journal_article"],["dc.type.internalPublication","yes"],["dc.type.peerReviewed","yes"],["dc.type.status","published"],["dc.type.version","published_version"],["dspace.entity.type","Publication"]]
  • 2007 · Conference Paper
    [["dc.bibliographiccitation.firstpage","2005"],["dc.bibliographiccitation.issue","10-12"],["dc.bibliographiccitation.journal","Neurocomputing"],["dc.bibliographiccitation.lastpage","2008"],["dc.bibliographiccitation.volume","70"],["dc.contributor.author","Porr, Bernd"],["dc.contributor.author","Kulvicius, Tomas"],["dc.contributor.author","Woergoetter, Florentin"],["dc.date.accessioned","2018-11-07T11:02:06Z"],["dc.date.available","2018-11-07T11:02:06Z"],["dc.date.issued","2007"],["dc.description.abstract","Donald Hebb postulated that if neurons fire together they wire together. However, Hebbian learning is inherently unstable because synaptic weights will self-amplify themselves: the more a synapse drives a postsynaptic cell the more the synaptic weight will grow. We present a new biologically realistic way of showing how to stabilise synaptic weights by introducing a third factor which switches learning on or off so that self-amplification is minimised. The third factor can be identified by the activity of dopaminergic neurons in ventral tegmental area which leads to a new interpretation of the dopamine signal which goes beyond the classical prediction error hypothesis. (c) 2006 Elsevier B.V. All rights reserved."],["dc.identifier.doi","10.1016/j.neucom.2006.10.137"],["dc.identifier.isi","000247215300077"],["dc.identifier.uri","https://resolver.sub.uni-goettingen.de/purl?gro-2/51298"],["dc.notes.status","zu prüfen"],["dc.notes.submitter","Najko"],["dc.publisher","Elsevier Science Bv"],["dc.publisher.place","Amsterdam"],["dc.relation.conference","15th Annual Computational Neuroscience Meeting"],["dc.relation.eventlocation","Edinburgh, SCOTLAND"],["dc.relation.issn","0925-2312"],["dc.title","Improved stability and convergence with three factor learning"],["dc.type","conference_paper"],["dc.type.internalPublication","yes"],["dc.type.peerReviewed","yes"],["dc.type.status","published"],["dspace.entity.type","Publication"]]
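The abstract above argues that a third, dopamine-like factor which switches learning on or off can stabilise otherwise self-amplifying Hebbian weights. A hedged toy sketch of that idea (my own construction; the gate schedule, rates, and the assumption that the output is driven by the weight itself are illustrative only):

```python
def three_factor_update(w, pre, post, gate, mu=0.05):
    """Hebbian term (pre * post) is applied only while the third factor
    'gate' (a dopamine-like relevance signal) is switched on."""
    return w + mu * gate * pre * post

w_plain, w_gated = 1.0, 1.0
for step in range(200):
    # Plain Hebb: the output is driven by the weight itself, so the
    # weight self-amplifies without bound (the instability described).
    w_plain += 0.05 * 1.0 * w_plain
    # Three-factor variant: the gate switches learning off after 20
    # steps, so the weight settles instead of exploding.
    gate = 1.0 if step < 20 else 0.0
    w_gated = three_factor_update(w_gated, 1.0, w_gated, gate)
```

After 200 steps the ungated weight has grown by orders of magnitude, while the gated weight froze at its value when the third factor switched off.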
  • 2007 · Conference Paper
    [["dc.bibliographiccitation.firstpage","2046"],["dc.bibliographiccitation.issue","10-12"],["dc.bibliographiccitation.journal","Neurocomputing"],["dc.bibliographiccitation.lastpage","2049"],["dc.bibliographiccitation.volume","70"],["dc.contributor.author","Kulvicius, Tomas"],["dc.contributor.author","Porr, Bernd"],["dc.contributor.author","Woergoetter, Florentin"],["dc.date.accessioned","2018-11-07T11:02:06Z"],["dc.date.available","2018-11-07T11:02:06Z"],["dc.date.issued","2007"],["dc.description.abstract","Recently it has been pointed out that in simple animals like flies a motor neuron can have a visual receptive field [H.G. Krapp, S.J. Huston, Encoding self-motion: From visual receptive fields to motor neuron response maps, in: H. Zimmermann, K. Kriegistein (Eds.), Proceedings of the sixth Meeting of the German Neuroscience Society/30th Gottingen Neurobiology Conference 2005, Gottingen, 2005, p. S16-3] [4]. Such receptive fields directly generate behaviour which, through closing the perception-action loop, will feed back to the sensors again. In more complex animals an increasingly complex hierarchy of visual receptive fields exists from early to higher visual areas, where visual input becomes more and more indirect. Here we will show that it is possible to develop receptive fields in simple behavioural systems by ways of a temporal sequence learning algorithm. The main goal is to demonstrate that learning generates stable behaviour and that the resulting receptive fields are also stable as soon as the newly learnt behaviour is successful. (c) 2006 Elsevier B.V. 
All rights reserved."],["dc.identifier.doi","10.1016/j.neucom.2006.10.132"],["dc.identifier.isi","000247215300085"],["dc.identifier.uri","https://resolver.sub.uni-goettingen.de/purl?gro-2/51299"],["dc.notes.status","zu prüfen"],["dc.notes.submitter","Najko"],["dc.publisher","Elsevier Science Bv"],["dc.publisher.place","Amsterdam"],["dc.relation.conference","15th Annual Computational Neuroscience Meeting"],["dc.relation.eventlocation","Edinburgh, SCOTLAND"],["dc.relation.issn","0925-2312"],["dc.title","Development of receptive fields in a closed-loop behavioural system"],["dc.type","conference_paper"],["dc.type.internalPublication","yes"],["dc.type.peerReviewed","yes"],["dc.type.status","published"],["dspace.entity.type","Publication"]]
  • 2007 · Journal Article
    [["dc.bibliographiccitation.firstpage","363"],["dc.bibliographiccitation.issue","5-6"],["dc.bibliographiccitation.journal","Biological Cybernetics"],["dc.bibliographiccitation.lastpage","378"],["dc.bibliographiccitation.volume","97"],["dc.contributor.author","Kulvicius, Tomas"],["dc.contributor.author","Porr, Bernd"],["dc.contributor.author","Woergoetter, Florentin"],["dc.date.accessioned","2018-11-07T10:46:32Z"],["dc.date.available","2018-11-07T10:46:32Z"],["dc.date.issued","2007"],["dc.description.abstract","Objective Living creatures can learn or improve their behaviour by temporally correlating sensor cues where near-senses (e.g., touch, taste) follow after far-senses (vision, smell). Such type of learning is related to classical and/or operant conditioning. Algorithmically all these approaches are very simple and consist of single learning unit. The cut-rent study is trying to solve this problem focusing Oil chained learning architectures in a simple closed-loop behavioural context. Methods We applied temporal sequence learning (Porr B and Worgotter F 2006) in a closed-loop behavioural system where a driving robot learns to follow a line. Here for the first time we introduced two types of chained learning architectures named linear chain and honeycomb chain. We analyzed such architectures in an open and closed-loop context and compared them to the simple learning unit. Conclusions By implementing two types of simple chained learning architectures we have demonstrated that stable behaviour can also be obtained in such architectures. 
Results also Suggest that chained architectures can be employed and better behavioural performance can be obtained compared to simple architectures in cases where we have sparse inputs in time and learning normally fails because of weak correlations."],["dc.identifier.doi","10.1007/s00422-007-0176-y"],["dc.identifier.isi","000252041700004"],["dc.identifier.pmid","17912544"],["dc.identifier.uri","https://resolver.sub.uni-goettingen.de/purl?gro-2/47768"],["dc.notes.status","zu prüfen"],["dc.notes.submitter","Najko"],["dc.publisher","Springer"],["dc.relation.issn","0340-1200"],["dc.title","Chained learning architectures in a simple closed-loop behavioural context"],["dc.type","journal_article"],["dc.type.internalPublication","yes"],["dc.type.peerReviewed","yes"],["dc.type.status","published"],["dspace.entity.type","Publication"]]
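The temporal sequence learning applied in the line-following study above changes the weight of an early (far-sense) input in proportion to that input times the derivative of the later reflex input. A minimal sketch of this idea (my own illustration under that assumption; function names, pulse shapes, and the rate `mu` are invented):

```python
import numpy as np

def sequence_learning_update(w, x_far, x_reflex, mu=0.1):
    """Weight change for a far-sense input: correlate it with the
    discrete derivative of the later-arriving reflex signal."""
    dx0 = np.diff(x_reflex, prepend=x_reflex[0])
    return w + mu * float(np.sum(x_far * dx0))

t = np.arange(60)
pulse = lambda c: np.exp(-0.5 * ((t - c) / 5.0) ** 2)

# Predictive ordering (far signal precedes the reflex): weight grows.
w_pos = sequence_learning_update(0.0, pulse(20), pulse(30))
# Reversed ordering: weight shrinks, so only predictive inputs are kept.
w_neg = sequence_learning_update(0.0, pulse(30), pulse(20))
```

This sign dependence on temporal order is what lets a chained architecture recruit progressively earlier cues along a sequence.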
  • 2007 · Journal Article
    [["dc.bibliographiccitation.artnumber","e134"],["dc.bibliographiccitation.firstpage","1305"],["dc.bibliographiccitation.issue","7"],["dc.bibliographiccitation.journal","PLoS Computational Biology"],["dc.bibliographiccitation.lastpage","1320"],["dc.bibliographiccitation.volume","3"],["dc.contributor.author","Manoonpong, Poramate"],["dc.contributor.author","Geng, Tao"],["dc.contributor.author","Kulvicius, Tomas"],["dc.contributor.author","Porr, Bernd"],["dc.contributor.author","Woergoetter, Florentin"],["dc.date.accessioned","2018-11-07T11:01:18Z"],["dc.date.available","2018-11-07T11:01:18Z"],["dc.date.issued","2007"],["dc.description.abstract","Human walking is a dynamic, partly self-stabilizing process relying on the interaction of the biomechanical design with its neuronal control. The coordination of this process is a very difficult problem, and it has been suggested that it involves a hierarchy of levels, where the lower ones, e. g., interactions between muscles and the spinal cord, are largely autonomous, and where higher level control ( e. g., cortical) arises only pointwise, as needed. This requires an architecture of several nested, sensori-motor loops where the walking process provides feedback signals to the walker's sensory systems, which can be used to coordinate its movements. To complicate the situation, at a maximal walking speed of more than four leg-lengths per second, the cycle period available to coordinate all these loops is rather short. In this study we present a planar biped robot, which uses the design principle of nested loops to combine the self-stabilizing properties of its biomechanical design with several levels of neuronal control. Specifically, we show how to adapt control by including online learning mechanisms based on simulated synaptic plasticity. This robot can walk with a high speed (> 3.0 leg length/s), self-adapting to minor disturbances, and reacting in a robust way to abruptly induced gait changes. 
At the same time, it can learn walking on different terrains, requiring only few learning experiences. This study shows that the tight coupling of physical with neuronal control, guided by sensory feedback from the walking pattern itself, combined with synaptic learning may be a way forward to better understand and solve coordination problems in other complex motor tasks."],["dc.identifier.doi","10.1371/journal.pcbi.0030134"],["dc.identifier.isi","000249106000013"],["dc.identifier.pmid","17630828"],["dc.identifier.purl","https://resolver.sub.uni-goettingen.de/purl?gs-1/8442"],["dc.identifier.uri","https://resolver.sub.uni-goettingen.de/purl?gro-2/51119"],["dc.notes.intern","Merged from goescholar"],["dc.notes.status","zu prüfen"],["dc.notes.submitter","Najko"],["dc.publisher","Public Library Science"],["dc.relation.issn","1553-7358"],["dc.relation.issn","1553-734X"],["dc.rights","CC BY 2.5"],["dc.rights.uri","https://creativecommons.org/licenses/by/2.5"],["dc.title","Adaptive, fast walking in a biped robot under neuronal control and learning"],["dc.type","journal_article"],["dc.type.internalPublication","yes"],["dc.type.peerReviewed","yes"],["dc.type.status","published"],["dc.type.version","published_version"],["dspace.entity.type","Publication"]]
  • 2008 · Journal Article
    [["dc.bibliographiccitation.firstpage","562"],["dc.bibliographiccitation.issue","3"],["dc.bibliographiccitation.journal","Journal of Computational Neuroscience"],["dc.bibliographiccitation.lastpage","582"],["dc.bibliographiccitation.volume","25"],["dc.contributor.author","Tamosiunaite, Minija"],["dc.contributor.author","Ainge, James A."],["dc.contributor.author","Kulvicius, Tomas"],["dc.contributor.author","Porr, Bernd"],["dc.contributor.author","Dudchenko, Paul A."],["dc.contributor.author","Woergoetter, Florentin"],["dc.date.accessioned","2018-11-07T11:08:37Z"],["dc.date.available","2018-11-07T11:08:37Z"],["dc.date.issued","2008"],["dc.description.abstract","A large body of experimental evidence suggests that the hippocampal place field system is involved in reward based navigation learning in rodents. Reinforcement learning (RL) mechanisms have been used to model this, associating the state space in an RL-algorithm to the place-field map in a rat. The convergence properties of RL-algorithms are affected by the exploration patterns of the learner. Therefore, we first analyzed the path characteristics of freely exploring rats in a test arena. We found that straight path segments with mean length 23 cm up to a maximal length of 80 cm take up a significant proportion of the total paths. Thus, rat paths are biased as compared to random exploration. Next we designed a RL system that reproduces these specific path characteristics. Our model arena is covered by overlapping, probabilistically firing place fields (PF) of realistic size and coverage. Because convergence of RL-algorithms is also influenced by the state space characteristics, different PF-sizes and densities, leading to a different degree of overlap, were also investigated. The model rat learns finding a reward opposite to its starting point. We observed that the combination of biased straight exploration, overlapping coverage and probabilistic firing will strongly impair the convergence of learning. 
When the degree of randomness in the exploration is increased, convergence improves, but the distribution of straight path segments becomes unrealistic and paths become 'wiggly'. To mend this situation without affecting the path characteristics, two additional mechanisms are implemented: a gradual drop of the learned weights (weight decay) and path length limitation, which prevents learning if the reward is not found after some expected time. Both mechanisms limit the memory of the system and thereby counteract the effects of getting trapped on a wrong path. When using these strategies individually, divergent cases are substantially reduced, and for some parameter settings no divergence was found at all. Using weight decay and path length limitation at the same time, convergence is not much improved; instead, time to convergence increases as the memory-limiting effect becomes too strong. The degree of improvement also relies on the size and degree of overlap (coverage density) in the place field system. The chosen combination of these two parameters leads to a trade-off between convergence and speed to convergence. 
Thus, this study suggests that the role of the PF-system in navigation learning cannot be considered independently from the animals' exploration pattern."],["dc.description.sponsorship","Biotechnology and Biological Sciences Research Council [BB/C516079/1]"],["dc.identifier.doi","10.1007/s10827-008-0094-6"],["dc.identifier.isi","000259438100009"],["dc.identifier.pmid","18446432"],["dc.identifier.purl","https://resolver.sub.uni-goettingen.de/purl?goescholar/3066"],["dc.identifier.uri","https://resolver.sub.uni-goettingen.de/purl?gro-2/52824"],["dc.notes.intern","Merged from goescholar"],["dc.notes.status","zu prüfen"],["dc.notes.submitter","Najko"],["dc.publisher","Springer"],["dc.relation.issn","1573-6873"],["dc.relation.issn","0929-5313"],["dc.rights","Goescholar"],["dc.rights.uri","https://goescholar.uni-goettingen.de/licenses"],["dc.title","Path-finding in real and simulated rats: assessing the influence of path characteristics on navigation learning"],["dc.type","journal_article"],["dc.type.internalPublication","yes"],["dc.type.peerReviewed","yes"],["dc.type.status","published"],["dc.type.version","published_version"],["dspace.entity.type","Publication"]]
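The model above covers the arena with overlapping, probabilistically firing place fields and stabilises reinforcement learning with weight decay (and path length limitation). A hedged sketch of those two ingredients (my own construction, not the paper's model: the grid of field centres, `sigma`, the TD(0) value update, and all rates are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def place_activity(pos, centers, sigma=0.1):
    """Probabilistic place-field population: each cell fires with a
    probability that falls off with distance from its field centre."""
    d2 = np.sum((centers - pos) ** 2, axis=1)
    p = np.exp(-d2 / (2 * sigma ** 2))
    return (rng.random(len(p)) < p).astype(float)

def td_step(w, phi, phi_next, reward, alpha=0.1, gamma=0.9, decay=1e-3):
    """TD(0) value update on place-cell features, followed by the
    gradual weight decay used as a memory-limiting mechanism."""
    delta = reward + gamma * w @ phi_next - w @ phi
    w = w + alpha * delta * phi
    return w * (1.0 - decay)   # decay counteracts getting trapped on a wrong path

# Toy arena: overlapping place-field centres on a grid in the unit square.
centers = np.array([(i / 4, j / 4) for i in range(5) for j in range(5)])
w = np.zeros(len(centers))
phi = place_activity(np.array([0.1, 0.1]), centers)
phi_next = place_activity(np.array([0.15, 0.1]), centers)
w = td_step(w, phi, phi_next, reward=0.0)
```

The path length limitation would sit in the episode loop: if the reward is not found within some expected number of steps, the episode's updates are simply discarded.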