Kulvicius, Tomas
Preferred name
Kulvicius, Tomas
Official Name
Kulvicius, Tomas
Alternative Name
Kulvicius, T.
Publications (3 results)
1. Porr, B., Kulvicius, T., and Woergoetter, F. (2007). "Improved stability and convergence with three factor learning." Conference paper, Neurocomputing 70(10-12): 2005-2008 (15th Annual Computational Neuroscience Meeting, Edinburgh, Scotland). DOI: 10.1016/j.neucom.2006.10.137. URL: https://resolver.sub.uni-goettingen.de/purl?gro-2/51298

Abstract: Donald Hebb postulated that if neurons fire together they wire together. However, Hebbian learning is inherently unstable because synaptic weights self-amplify: the more a synapse drives a postsynaptic cell, the more the synaptic weight grows. We present a new, biologically realistic way of stabilising synaptic weights by introducing a third factor which switches learning on or off, so that self-amplification is minimised. The third factor can be identified with the activity of dopaminergic neurons in the ventral tegmental area, which leads to a new interpretation of the dopamine signal that goes beyond the classical prediction-error hypothesis.

2. Kulvicius, T., Porr, B., and Woergoetter, F. (2007). "Development of receptive fields in a closed-loop behavioural system." Conference paper, Neurocomputing 70(10-12): 2046-2049 (15th Annual Computational Neuroscience Meeting, Edinburgh, Scotland). DOI: 10.1016/j.neucom.2006.10.132. URL: https://resolver.sub.uni-goettingen.de/purl?gro-2/51299

Abstract: Recently it has been pointed out that in simple animals like flies, a motor neuron can have a visual receptive field [H.G. Krapp, S.J. Huston, "Encoding self-motion: from visual receptive fields to motor neuron response maps", in: H. Zimmermann, K. Krieglstein (Eds.), Proceedings of the Sixth Meeting of the German Neuroscience Society / 30th Göttingen Neurobiology Conference, Göttingen, 2005, p. S16-3]. Such receptive fields directly generate behaviour which, by closing the perception-action loop, feeds back to the sensors again. In more complex animals, an increasingly complex hierarchy of visual receptive fields exists from early to higher visual areas, where visual input becomes more and more indirect. Here we show that it is possible to develop receptive fields in simple behavioural systems by way of a temporal sequence learning algorithm. The main goal is to demonstrate that learning generates stable behaviour, and that the resulting receptive fields are also stable as soon as the newly learnt behaviour is successful.

3. Kulvicius, T., Porr, B., and Woergoetter, F. (2007). "Chained learning architectures in a simple closed-loop behavioural context." Journal article, Biological Cybernetics 97(5-6): 363-378. DOI: 10.1007/s00422-007-0176-y. PMID: 17912544. URL: https://resolver.sub.uni-goettingen.de/purl?gro-2/47768

Abstract: Objective: Living creatures can learn or improve their behaviour by temporally correlating sensor cues, where near-senses (e.g., touch, taste) follow after far-senses (vision, smell). This type of learning is related to classical and/or operant conditioning. Algorithmically, all these approaches are very simple and consist of a single learning unit. The current study addresses this problem by focusing on chained learning architectures in a simple closed-loop behavioural context. Methods: We applied temporal sequence learning (Porr and Woergoetter, 2006) in a closed-loop behavioural system in which a driving robot learns to follow a line. Here, for the first time, we introduce two types of chained learning architectures, named linear chain and honeycomb chain. We analyse these architectures in an open- and closed-loop context and compare them to the simple learning unit. Conclusions: By implementing two types of simple chained learning architectures, we demonstrate that stable behaviour can also be obtained in such architectures. The results also suggest that, in cases where inputs are sparse in time and learning normally fails because of weak correlations, chained architectures can be employed and better behavioural performance obtained than with simple architectures.
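The stabilisation idea in the first abstract (a third factor gating an otherwise self-amplifying Hebbian update) can be illustrated with a minimal one-synapse rate model. This is an illustrative sketch only, not the paper's model; the gating schedule and all parameter values are assumptions.

```python
def simulate(gated: bool, steps: int = 200, eta: float = 0.1) -> float:
    """Run a one-synapse rate model and return the final weight.

    Plain Hebbian learning (gated=False) self-amplifies: the weight
    drives the postsynaptic rate, which in turn grows the weight.
    With a third factor (gated=True) the update is switched on only
    during brief windows (here, an assumed periodic schedule),
    which limits the self-amplification.
    """
    w = 0.5
    for t in range(steps):
        pre = 1.0                       # constant presynaptic rate
        post = w * pre                  # postsynaptic rate driven by the synapse
        third = (1.0 if t % 20 == 0 else 0.0) if gated else 1.0
        w += eta * third * pre * post   # three-factor Hebbian update
    return w

print(f"plain Hebbian final weight: {simulate(gated=False):.2f}")
print(f"three-factor final weight:  {simulate(gated=True):.2f}")
```

With gating off, the weight grows geometrically at every step and explodes; with the third factor on, only a handful of updates occur and the weight stays bounded over the same horizon.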
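The second and third abstracts build on temporal sequence learning (Porr and Woergoetter, 2006), where an early "far-sense" cue is correlated with a later reflex signal. The rough sketch below assumes a simplified input-correlation rule (cue weight changes in proportion to the cue times the temporal derivative of the reflex input); pulse shapes, timings, and the learning rate are all illustrative, not taken from the papers.

```python
import numpy as np

def ico_learning(delay: int = 5, trials: int = 30, mu: float = 0.05) -> float:
    """Toy temporal sequence learning in the style of an
    input-correlation rule (illustrative sketch only).

    A predictive input x1 fires `delay` steps before the reflex
    input x0. The weight of x1 changes in proportion to x1 times
    the temporal derivative of x0, so cues that reliably precede
    the reflex are strengthened.
    """
    T = 40
    w1 = 0.0
    for _ in range(trials):
        x1 = np.zeros(T); x1[10] = 1.0            # early (far-sense) cue
        x0 = np.zeros(T); x0[10 + delay] = 1.0    # late (near-sense) reflex
        # smooth the pulses so the cue overlaps the reflex onset
        kernel = np.exp(-np.arange(20) / 4.0)
        x1 = np.convolve(x1, kernel)[:T]
        x0 = np.convolve(x0, kernel)[:T]
        dx0 = np.gradient(x0)
        w1 += mu * np.sum(x1 * dx0)               # correlate cue with reflex onset
    return w1

print(f"learned predictive weight: {ico_learning():.3f}")
```

When the cue precedes the reflex, the correlation with the reflex onset is positive and the predictive weight grows; reversing the temporal order drives it the other way, which is the closed-loop stability property the abstracts emphasise.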