Wörgötter, Florentin Andreas
Preferred Name
Wörgötter, Florentin Andreas
Official Name
Wörgötter, Florentin Andreas
Alternative Name
Wörgötter, Florentin A.
Worgotter, Florentin
Wörgötter, Florentin
Wörgötter, F.
Woergoetter, Florentin Andreas
Worgotter, Florentin A.
Worgotter, F. A.
Wörgötter, F. A.
Woergoetter, Florentin A.
Woergoetter, F. A.
Woergoetter, Florentin
Woergoetter, F.
Worgotter, F.
Worgotter, Florentin Andreas
Publications (showing 1–10 of 35)
2003, Journal Article
Porr, Bernd; Ferber, Christian von; Woergoetter, Florentin
ISO Learning Approximates a Solution to the Inverse-Controller Problem in an Unsupervised Behavioral Paradigm
Neural Computation 15(4), 865–884. DOI: 10.1162/08997660360581930
Abstract: In "Isotropic Sequence Order Learning" (pp. 831–864 in this issue), we introduced a novel algorithm for temporal sequence learning (ISO learning). Here, we embed this algorithm in a formal, non-evaluating (teacher-free) environment that establishes a sensor-motor feedback loop. The system is initially guided by a fixed reflex reaction, which has the objective disadvantage that it can react only after a disturbance has occurred. ISO learning eliminates this disadvantage by replacing the reflex-loop reactions with earlier anticipatory actions. In this article, we demonstrate analytically that this process can be understood in terms of control theory, showing that the system learns the inverse controller of its own reflex. Thereby, the system is able to learn a simple form of feedforward motor control.

1998, Journal Article
Porr, Bernd; Cozzi, Alex; Woergoetter, Florentin
How to "hear" visual disparities: real-time stereoscopic spatial depth analysis using temporal resonance
Biological Cybernetics 78(5), 329–336. DOI: 10.1007/s004220050437
Abstract: In a stereoscopic system, both eyes or cameras have a slightly different view. As a consequence, small variations between the projected images exist ("disparities"), which are evaluated spatially in order to retrieve depth information (Sanger 1988; Fleet et al. 1991). A strong similarity exists between the analysis of visual disparities and the determination of the azimuth of a sound source (Wagner and Frost 1993), where the direction of the sound is determined from the temporal delay between the left- and right-ear signals (Konishi and Sullivan 1986). Similarly, here we transpose the spatially defined problem of disparity analysis into the temporal domain and use two resonators, implemented as causal (electronic) filters, to determine the disparity as local temporal phase differences between the left and right filter responses. This approach permits real-time analysis and can be solved analytically for a step-function contrast change, an important case in all real-world applications. The proposed theoretical framework for spatial depth retrieval thus directly uses a temporal algorithm borrowed from auditory signal analysis, and the suggested similarity between the visual and auditory systems in the brain (Wagner and Frost 1993) finds its analogy here at the algorithmic level. We compare the results of the temporal resonance algorithm with those of several other techniques, such as cross-correlation and spatial phase-based disparity estimation, and show that the novel algorithm achieves similar performance using much lower computational resources.

2005, Journal Article
Saudargiene, Ausra; Porr, Bernd; Woergoetter, Florentin
Synaptic modifications depend on synapse location and activity: a biophysical model of STDP
Biosystems 79(1-3), 3–10. DOI: 10.1016/j.biosystems.2004.09.010. PMID: 15649584
Abstract: In spike-timing-dependent plasticity (STDP), synapses are potentiated or depressed depending on the temporal order and temporal difference of the pre- and postsynaptic signals. We present a biophysical model of STDP which assumes that not only the timing but also the shapes of these signals influence the synaptic modifications. The model is based on a Hebbian learning rule that correlates the NMDA synaptic conductance, as the presynaptic quantity, with the postsynaptic signal at the synapse location, as the postsynaptic quantity. Compared to a previous paper [Saudargiene, A., Porr, B., Woergoetter, F., 2004. How the shape of pre- and post-synaptic signals can influence STDP: a biophysical model. Neural Comp.], here we show that this rule reproduces the generic STDP weight-change curve using real neuronal input signals and combinations of more than two (pre- and postsynaptic) spikes. We demonstrate that the shape of the STDP curve strongly depends on the shape of the depolarizing membrane potentials that induce learning. As these potentials vary at different locations of the dendritic tree, the model predicts that synaptic changes are location dependent. The model is extended to account for patterns of more than two spikes of the pre- and postsynaptic cells. The results show that the STDP weight-change curve is also activity dependent.

2007, Conference Paper (6th International Workshop on Neural Coding, Marburg, Germany)
Porr, Bernd; Woergoetter, Florentin
Fast heterosynaptic learning in a robot food retrieval task inspired by the limbic system
Biosystems 89(1-3), 294–299. DOI: 10.1016/j.biosystems.2006.04.026. PMID: 17292537
Abstract: Hebbian learning is the most prominent paradigm in correlation-based learning: if pre- and postsynaptic activity coincide, the weight of the synapse is strengthened. Hebbian learning, however, is not stable, because an autocorrelation term causes the weights to grow exponentially. The standard solution is to compensate for the autocorrelation term. In this work, by contrast, we present a heterosynaptic learning rule which has no autocorrelation term and therefore does not show the instability of Hebbian learning. Consequently, our heterosynaptic learning is much more stable than classical Hebbian learning. The performance of the learning rule is demonstrated in a model inspired by the limbic system, in which an agent has to retrieve food.

2007, Journal Article
Porr, Bernd; Woergoetter, Florentin
Learning with "relevance": Using a third factor to stabilize Hebbian learning
Neural Computation 19(10), 2694–2719. DOI: 10.1162/neco.2007.19.10.2694. PMID: 17716008
Abstract: It is well known that Hebbian learning is inherently unstable because of its self-amplifying terms: the more a synapse grows, the stronger the postsynaptic activity, and therefore the faster the synaptic growth. This unwanted weight growth is driven by the autocorrelation term of Hebbian learning, where the same synapse drives its own growth. The cross-correlation term, on the other hand, performs the actual learning, correlating different inputs with each other. Consequently, we would like to minimize the autocorrelation and maximize the cross-correlation. Here we show that this can be achieved with a third factor that switches learning on when the autocorrelation is minimal or zero and the cross-correlation is maximal. The biological counterpart of such a third factor is a neuromodulator that switches learning on at a certain moment in time. We show in a behavioral experiment that our three-factor learning clearly outperforms classical Hebbian learning.

2010, Journal Article
Kulvicius, Tomas; Kolodziejski, Christoph; Tamosiunaite, Minija; Porr, Bernd; Woergoetter, Florentin
Behavioral analysis of differential Hebbian learning in closed-loop systems
Biological Cybernetics 103(4), 255–271. DOI: 10.1007/s00422-010-0396-4. PMID: 20556620
Abstract: Understanding closed-loop behavioral systems is a non-trivial problem, especially when they change during learning. Descriptions of closed-loop systems in terms of information theory date back to the 1950s; however, there have been only a few attempts that take learning into account, mostly by measuring the information of the inputs. In this study we analyze a specific type of closed-loop system by looking at the input as well as the output space. To this end, we investigate simulated agents that perform differential Hebbian learning (STDP). In the first part we show that analytical solutions can be found for the temporal development of such systems in relatively simple cases. In the second part we try to answer the following question: how can we predict which system from a given class would be the best for a particular scenario? This question is addressed using energy, input/output-ratio, and entropy measures and by investigating their development during learning. In this way we show that, within well-specified scenarios, there are indeed agents that are optimal with respect to their structure and adaptive properties.

2008, Journal Article
Kolodziejski, Christoph; Porr, Bernd; Woergoetter, Florentin
Mathematical properties of neuronal TD-rules and differential Hebbian learning: a comparison
Biological Cybernetics 98(3), 259–272. DOI: 10.1007/s00422-007-0209-6. PMID: 18196266
Abstract: A confusingly wide variety of temporally asymmetric learning rules exists, related to reinforcement learning and/or to spike-timing-dependent plasticity, many of which look exceedingly similar while displaying strongly different behavior. These rules often find their use in control tasks, for example in robotics, where rigorous convergence and numerical stability are required. The goal of this article is to review and compare these rules to provide a better overview of their different properties. Two main classes are discussed, temporal difference (TD) rules and correlation-based (differential Hebbian) rules, along with some transition cases. In general we focus on neuronal implementations with changeable synaptic weights and a time-continuous representation of activity. In a machine-learning (non-neuronal) context, a solid mathematical theory for TD learning has existed for several years; this can partly be transferred to a neuronal framework, too. For differential Hebbian rules, on the other hand, a more complete theory has only now emerged. In general, the rules differ in their convergence conditions and their numerical stability, which can lead to very undesirable behavior when applying them. For TD, convergence can be enforced with an output condition assuring that the delta error drops to zero on average (output control). Correlation-based rules, on the other hand, converge when one input drops to zero (input control). Temporally asymmetric learning rules treat situations where incoming stimuli follow each other in time; it is therefore necessary to remember the first stimulus in order to relate it to the second, later-occurring one. To this end, the two types of rules use different kinds of so-called eligibility traces, which again leads to different properties of TD and differential Hebbian learning, as discussed here. Thus this paper, while also presenting several novel mathematical results, is mainly meant to provide a road map through the different neuronally emulated, temporally asymmetric learning rules and their behavior, and to give some guidance for possible applications.

2002, Journal Article
Porr, Bernd; Woergoetter, Florentin
Predictive learning in rate-coded neuronal networks: a theoretical approach towards classical conditioning
Neurocomputing 44–46, 585–590. DOI: 10.1016/s0925-2312(02)00444-7
Abstract: A novel approach to the learning of temporally extended, continuous signals is developed within the framework of rate-coded neurons. A new temporal Hebb-like learning rule is devised which exploits the predictive capabilities of bandpass-filtered signals by using the derivative of the output to modify the weights. The initial development of the weights is calculated analytically by applying signal theory, and simulation results demonstrate the performance of this approach. In addition, we show that only a few units suffice to process multiple inputs with long temporal delays.

2004, Journal Article
Saudargiene, Ausra; Porr, Bernd; Woergoetter, Florentin
How the Shape of Pre- and Postsynaptic Signals Can Influence STDP: A Biophysical Model
Neural Computation 16(3), 595–625. DOI: 10.1162/089976604772744929. PMID: 15006093
Abstract: Spike-timing-dependent plasticity (STDP) is described by long-term potentiation (LTP) when a presynaptic event precedes a postsynaptic event, and by long-term depression (LTD) when the temporal order is reversed. In this article, we present a biophysical model of STDP based on a differential Hebbian learning rule (ISO learning). This rule correlates the NMDA channel conductance, as the presynaptic signal, with the derivative of the membrane potential at the synapse, as the postsynaptic signal. The model is able to reproduce the generic STDP weight-change characteristic. We find that (1) the actual shape of the weight-change curve strongly depends on the NMDA channel characteristics and on the shape of the membrane potential at the synapse; (2) the typical antisymmetric STDP curve (LTD and LTP) can become similar to a standard Hebbian characteristic (LTP only) without having to change the learning rule, which occurs if the membrane depolarization has a shallow onset and is long lasting; and (3) since the membrane potential varies along the dendrite, as a result of the active or passive backpropagation of somatic spikes or because of local dendritic processes, our model predicts that learning properties will be different at different locations on the dendritic tree. In conclusion, such site-specific synaptic plasticity would provide a neuron with powerful learning capabilities.

2008, Journal Article
Kolodziejski, Christoph; Porr, Bernd; Wörgötter, Florentin
On the Asymptotic Equivalence Between Differential Hebbian and Temporal Difference Learning
Neural Computation 21(4), 1173–1202. DOI: 10.1162/neco.2008.04-08-750
Abstract: In this theoretical contribution, we provide mathematical proof that two of the most important classes of network learning, correlation-based differential Hebbian learning and reward-based temporal difference learning, are asymptotically equivalent when learning is timed by a modulatory signal. This opens the opportunity to consistently reformulate most of the abstract reinforcement-learning framework from a correlation-based perspective that is more closely related to the biophysics of neurons.
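Several of the abstracts above (the 2007 heterosynaptic-learning paper and the 2007 "relevance" paper) rest on the same observation: plain Hebbian learning is unstable because the autocorrelation term lets a synapse drive its own growth, and a modulatory third factor can gate the update. A minimal sketch of that instability and of third-factor gating; all rates, weights, and signal shapes here are illustrative placeholders, not values from the papers:

```python
def hebbian(u, steps, mu=0.1, w0=0.5):
    """Plain Hebbian rule dw = mu * u * v with v = w * u.
    The autocorrelation term (mu * u^2 * w) makes w grow exponentially."""
    w = w0
    for _ in range(steps):
        v = w * u           # postsynaptic activity driven by this same synapse
        w += mu * u * v     # w *= (1 + mu * u^2): unbounded exponential growth
    return w

def gated_hebbian(u, relevance, mu=0.1, w0=0.5):
    """Three-factor sketch: the same update, multiplied by a modulatory
    'relevance' signal r(t); the weight only changes while r is switched on."""
    w = w0
    for r in relevance:
        v = w * u
        w += mu * u * v * r  # r = 0 most of the time keeps growth in check
    return w
```

With a constant input of 1.0, `hebbian` multiplies the weight by 1.1 every step, whereas `gated_hebbian` with a sparse relevance signal changes it only at the gated steps.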
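The differential Hebbian (ISO-style) rule described in the 2003, 2002, and 2008 asymptotic-equivalence abstracts correlates a filtered early input with the derivative of the output. A toy discrete-time sketch of that idea, in which a plain exponential low-pass filter stands in for the papers' resonator (bandpass) filters and all constants are assumed for illustration:

```python
import math

def iso_weight(x_early, x_late, mu=0.05, tau=5.0):
    """Differential Hebbian sketch: a plastic weight w1 on an early
    'predictive' input changes with the derivative of the output v,
    dw1 = mu * u1_filtered * dv/dt, while a fixed reflex weight w0
    handles the later input."""
    w0, w1 = 1.0, 0.0
    a = math.exp(-1.0 / tau)                 # discrete low-pass coefficient
    trace, v_prev = 0.0, 0.0
    for u1, u0 in zip(x_early, x_late):
        trace = a * trace + (1.0 - a) * u1   # filtered early input
        v = w0 * u0 + w1 * u1                # output of the two-input unit
        w1 += mu * trace * (v - v_prev)      # correlate trace with dv/dt
        v_prev = v
    return w1

# When the predictive input precedes the reflex input, w1 grows,
# i.e. the unit learns to anticipate the reflex.
early = [0, 0, 1, 0, 0, 0, 0, 0, 0, 0]
late  = [0, 0, 0, 0, 0, 1, 0, 0, 0, 0]
```

Reversing the temporal order (reflex before the predictive input) yields no potentiation, which is the sequence sensitivity the abstracts describe.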
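The STDP modeling papers (2004, 2005) reproduce the generic antisymmetric weight-change curve. For orientation, the textbook exponential form of that curve looks like the sketch below; the amplitudes and time constants are generic placeholders, not values fitted by the biophysical models above:

```python
import math

def stdp_window(dt, a_plus=1.0, a_minus=0.5, tau_plus=17.0, tau_minus=34.0):
    """Generic antisymmetric STDP curve: potentiation (LTP) when the
    presynaptic spike leads the postsynaptic one (dt = t_post - t_pre > 0),
    depression (LTD) otherwise, each decaying exponentially with |dt|."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau_plus)    # LTP branch
    return -a_minus * math.exp(dt / tau_minus)      # LTD branch
```

The papers' point is precisely that this fixed shape is an idealization: in their model the curve's shape varies with membrane-potential waveform, dendritic location, and activity.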