Wörgötter, Florentin Andreas
Preferred name
Wörgötter, Florentin Andreas
Official name
Wörgötter, Florentin Andreas
Alternative names
Wörgötter, Florentin A.
Worgotter, Florentin
Wörgötter, Florentin
Wörgötter, F.
Woergoetter, Florentin Andreas
Worgotter, Florentin A.
Worgotter, F. A.
Wörgötter, F. A.
Woergoetter, Florentin A.
Woergoetter, F. A.
Woergoetter, Florentin
Woergoetter, F.
Worgotter, F.
Worgotter, Florentin Andreas
Publications (showing 1-10 of 13)
2010, Journal Article
Kolodziejski, Christoph; Tetzlaff, Christian; Wörgötter, Florentin
Closed-form treatment of the interactions between neuronal activity and timing-dependent plasticity in networks of linear neurons
Frontiers in Computational Neuroscience 4, article 134. DOI: 10.3389/fncom.2010.00134
Abstract: Network activity and network connectivity mutually influence each other. Especially for fast processes such as spike-timing-dependent plasticity (STDP), which depends on the interaction of only a few (two) signals, the question arises how these interactions continuously alter the behavior and structure of the network. Addressing this question requires a time-continuous treatment of plasticity, which is currently not possible even in simple recurrent network structures. Here we therefore develop, for a linear differential Hebbian learning system, a method by which the dynamics and stability of connections in recurrent networks can be investigated analytically. We use noisy periodic external input signals, which through the recurrent connections lead to complex ongoing actual inputs, and observe that large stable ranges emerge in these networks without boundaries or weight normalization. Somewhat counter-intuitively, we find that about 40% of these cases are obtained with a long-term-potentiation-dominated STDP curve. Noise can reduce stability in some cases, but generally this does not occur; instead, stable domains are often enlarged. This study is a first step toward a better understanding of the ongoing interactions between activity and plasticity in recurrent networks using STDP. The results suggest that stability of (sub-)networks should generically be present also in larger structures.
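The linear differential Hebbian rule analyzed above (weight change proportional to the presynaptic activity times the derivative of the postsynaptic output) can be illustrated with a minimal numerical sketch. The network size, time constants, learning rate, and input signals below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Minimal sketch of linear differential Hebbian learning in a tiny
# recurrent network of linear rate neurons. All parameters and signals
# are illustrative assumptions, not values from the paper.
dt, tau, mu = 0.01, 1.0, 0.005   # time step, neuron time constant, learning rate
W = np.array([[0.0, 0.3],        # recurrent weights, 2 linear neurons
              [0.2, 0.0]])
v = np.zeros(2)                  # neuron activities

rng = np.random.default_rng(0)
for k in range(2000):
    t = k * dt
    # noisy periodic external input, as used conceptually in the paper
    u = np.array([np.sin(np.pi * t), np.cos(np.pi * t)])
    u += 0.05 * rng.standard_normal(2)
    v_new = v + dt / tau * (-v + W @ v + u)   # linear neuron dynamics
    dv = (v_new - v) / dt                     # output derivative
    # differential Hebb: dW_ij/dt = mu * (dv_i/dt) * v_j, i.e. the
    # postsynaptic derivative times the presynaptic activity
    W += dt * mu * np.outer(dv, v)
    v = v_new

print("final weights:\n", np.round(W, 4))
```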
2012, Journal Article
Tetzlaff, Christian; Kolodziejski, Christoph; Timme, Marc; Wörgötter, Florentin
Analysis of synaptic scaling in combination with Hebbian plasticity in several simple networks
Frontiers in Computational Neuroscience 6, article 36. DOI: 10.3389/fncom.2012.00036
Abstract: Conventional synaptic plasticity in combination with synaptic scaling is a biologically plausible plasticity rule that guides the development of synapses toward stability. Here we analyze the development of synaptic connections and the resulting activity patterns in different feed-forward and recurrent neural networks with plasticity and scaling. We show under which constraints an external input given to a feed-forward network forms an input trace similar to a cell assembly (Hebb, 1949) by enhancing synaptic weights to larger stable values than in the rest of the network. For instance, a weak input creates a weaker representation in the network than a strong input, which produces a trace along large parts of the network. These processes are strongly influenced by the underlying connectivity. For example, embedding recurrent structures (excitatory rings, etc.) into a feed-forward network extends the input trace into more distant layers, while inhibition shortens it. These findings provide a better understanding of the dynamics of generic network structures in which plasticity is combined with scaling, and they make it possible to use this rule for constructing an artificial network with certain desired storage properties.
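The combination of Hebbian plasticity with synaptic scaling can be sketched generically: a correlation-based growth term is paired with a scaling term that multiplicatively pulls the postsynaptic rate toward a homeostatic target. The rule below is a generic form for illustration; the paper's exact formulation and all constants here are assumptions.

```python
import numpy as np

# Sketch: Hebbian plasticity combined with synaptic scaling along a
# feed-forward chain of rate neurons. Generic rule, illustrative only.
mu, gamma = 0.01, 0.1       # Hebbian and scaling rates (assumed)
v_target = 0.5              # homeostatic target rate (assumed)
n_layers = 5
w = np.full(n_layers, 0.5)  # one weight per layer-to-layer connection

for _ in range(2000):
    v = 1.0                                  # external input to layer 0
    for i in range(n_layers):
        u, v = v, np.tanh(w[i] * v)          # pre- and postsynaptic rates
        hebb = mu * u * v                    # correlation-based growth
        scale = gamma * (v_target - v) * w[i]  # multiplicative scaling
        w[i] += hebb + scale

print("stable weights along the chain:", np.round(w, 3))
```

Weights deeper in the chain settle at smaller stable values because the input trace weakens layer by layer, which is the qualitative effect the paper analyzes.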
2010, Journal Article
Kulvicius, Tomas; Kolodziejski, Christoph; Tamosiunaite, Minija; Porr, Bernd; Wörgötter, Florentin
Behavioral analysis of differential Hebbian learning in closed-loop systems
Biological Cybernetics 103(4): 255-271. DOI: 10.1007/s00422-010-0396-4
Abstract: Understanding closed-loop behavioral systems is a non-trivial problem, especially when they change during learning. Descriptions of closed-loop systems in terms of information theory date back to the 1950s; however, only a few attempts have taken learning into account, mostly by measuring the information of the inputs. In this study we analyze a specific type of closed-loop system by looking at the input as well as the output space. For this, we investigate simulated agents that perform differential Hebbian learning (STDP). In the first part we show that analytical solutions can be found for the temporal development of such systems in relatively simple cases. In the second part we try to answer the following question: how can we predict which system from a given class would be the best for a particular scenario? This question is addressed using energy, input/output-ratio, and entropy measures, and by investigating their development during learning. In this way we can show that within well-specified scenarios there are indeed agents that are optimal with respect to their structure and adaptive properties.
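One of the measures named above, the entropy of an agent's input or output signals, can be estimated from a fixed-bin histogram of the recorded time series. A minimal sketch; the signals and the binning are arbitrary assumptions standing in for recorded sensor and motor traces.

```python
import numpy as np

# Shannon entropy of a recorded 1-D signal, estimated over fixed bins
# so that signals of different spread are comparable.
edges = np.linspace(-4.0, 4.0, 33)

def signal_entropy(x):
    """Shannon entropy (bits) of a signal via a fixed-bin histogram."""
    counts, _ = np.histogram(x, bins=edges)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
input_before = rng.standard_normal(10_000)        # broad signal
input_after = 0.2 * rng.standard_normal(10_000)   # narrowed after learning
print(signal_entropy(input_before), signal_entropy(input_after))
```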
2008, Journal Article
Kolodziejski, Christoph; Porr, Bernd; Wörgötter, Florentin
Mathematical properties of neuronal TD-rules and differential Hebbian learning: a comparison
Biological Cybernetics 98(3): 259-272. DOI: 10.1007/s00422-007-0209-6
Abstract: A confusingly wide variety of temporally asymmetric learning rules exists, related to reinforcement learning and/or to spike-timing-dependent plasticity; many of them look exceedingly similar while displaying strongly different behavior. These rules are often used in control tasks, for example in robotics, where rigorous convergence and numerical stability are required. The goal of this article is to review and compare these rules to provide a better overview of their different properties. Two main classes are discussed: temporal-difference (TD) rules and correlation-based (differential Hebbian) rules, along with some transition cases. In general we focus on neuronal implementations with changeable synaptic weights and a time-continuous representation of activity. In a machine-learning (non-neuronal) context, a solid mathematical theory for TD learning has existed for several years; this can partly be transferred to a neuronal framework as well. A more complete theory for differential Hebbian rules, on the other hand, has only now emerged. In general, rules differ in their convergence conditions and their numerical stability, which can lead to very undesirable behavior when one wants to apply them. For TD, convergence can be enforced with a certain output condition ensuring that the delta-error drops on average to zero (output control). Correlation-based rules, on the other hand, converge when one input drops to zero (input control). Temporally asymmetric learning rules treat situations where incoming stimuli follow each other in time; thus, it is necessary to remember the first stimulus in order to relate it to the later-occurring second one. To this end, different types of so-called eligibility traces are used by these two types of rules. This aspect again leads to different properties of TD and differential Hebbian learning, as discussed here. This paper, while also presenting several novel mathematical results, is thus mainly meant to provide a road map through the different neuronally emulated temporally asymmetric learning rules and their behavior, offering some guidance for possible applications.
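The TD class compared above can be summarized by the standard textbook TD(lambda) update with an eligibility trace; the differential Hebbian class replaces the delta-error by the derivative of the output. The sketch below shows the textbook TD(lambda) form as a reference point, with a toy state loop and parameters that are assumptions, not the paper's neuronal implementation.

```python
import numpy as np

# Textbook TD(lambda) value update with an eligibility trace. The
# environment (a 5-state loop with reward on completing it) is a toy
# assumption used only to exercise the rule.
alpha, gamma, lam = 0.1, 0.9, 0.8
n_states = 5
V = np.zeros(n_states)   # state values
e = np.zeros(n_states)   # eligibility trace (the rule's "memory")

s = 0
for _ in range(5000):
    s_next = (s + 1) % n_states
    r = 1.0 if s_next == 0 else 0.0        # reward on completing the loop
    delta = r + gamma * V[s_next] - V[s]   # TD delta-error
    e *= gamma * lam                       # decay all traces
    e[s] += 1.0                            # mark the visited state
    V += alpha * delta * e                 # output-controlled update
    s = s_next

print("learned values:", np.round(V, 3))
```

Convergence here is "output controlled" in the paper's terminology: learning stops when the delta-error averages to zero, regardless of the inputs.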
2008, Journal Article
Kolodziejski, Christoph; Porr, Bernd; Wörgötter, Florentin
On the Asymptotic Equivalence Between Differential Hebbian and Temporal Difference Learning
Neural Computation 21(4): 1173-1202. DOI: 10.1162/neco.2008.04-08-750
Abstract: In this theoretical contribution, we provide mathematical proof that two of the most important classes of network learning, correlation-based differential Hebbian learning and reward-based temporal difference learning, are asymptotically equivalent when timing the learning with a modulatory signal. This opens the opportunity to consistently reformulate most of the abstract reinforcement learning framework from a correlation-based perspective more closely related to the biophysics of neurons.
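The structural idea behind the equivalence can be caricatured in a few lines: a differential Hebbian update gated by a third, modulatory factor has the same form as a TD-style update in which the modulator supplies the timing. This is only a schematic of the gated update, not the paper's proof; every signal and constant below is assumed.

```python
import numpy as np

# Schematic three-factor update: differential Hebbian learning gated by
# a modulatory signal M(t). With M timed appropriately, the update takes
# the same structural form as a TD update (the paper's formal result).
dt = 0.01
t = np.arange(0.0, 10.0, dt)
u = np.exp(-((t - 2.0) ** 2) / 0.1)        # presynaptic trace (assumed)
v = np.exp(-((t - 2.5) ** 2) / 0.1)        # postsynaptic response (assumed)
M = (np.abs(t - 2.5) < 0.5).astype(float)  # modulatory gating window
dv = np.gradient(v, dt)                    # output derivative

w, mu = 0.0, 0.1
for k in range(len(t)):
    w += dt * mu * M[k] * u[k] * dv[k]     # gated differential Hebb
print("net weight change:", round(w, 5))
```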
2013, Journal Article
Tetzlaff, Christian; Kolodziejski, Christoph; Timme, Marc; Tsodyks, Misha; Wörgötter, Florentin
Synaptic Scaling Enables Dynamically Distinct Short- and Long-Term Memory Formation
PLoS Computational Biology 9(10): e1003307. DOI: 10.1371/journal.pcbi.1003307
Abstract: Memory storage in the brain relies on mechanisms acting on time scales from minutes, for long-term synaptic potentiation, to days, for memory consolidation. During such processes, neural circuits distinguish synapses relevant for forming long-term storage, which are consolidated, from synapses of short-term storage, which fade. How time-scale integration and synaptic differentiation are simultaneously achieved remains unclear. Here we show that synaptic scaling, a slow process usually associated with the maintenance of activity homeostasis, combined with synaptic plasticity may simultaneously achieve both, thereby providing a natural separation of short- from long-term storage. The interaction between plasticity and scaling also provides an explanation for an established paradox where memory consolidation critically depends on the exact order of learning and recall. These results indicate that scaling may be fundamental for stabilizing memories, providing a dynamic link between early and late memory formation processes.

2011, Journal Article
Tetzlaff, Christian; Kolodziejski, Christoph; Timme, Marc; Wörgötter, Florentin
Synaptic scaling generically stabilizes circuit connectivity
BMC Neuroscience 12(Suppl 1): P372. DOI: 10.1186/1471-2202-12-S1-P372
2011, Journal Article
Porr, Bernd; McCabe, Lynsey; di Prodi, Paolo; Kolodziejski, Christoph; Wörgötter, Florentin
How feedback inhibition shapes spike-timing-dependent plasticity and its implications for recent Schizophrenia models
Neural Networks 24(6): 560-567. DOI: 10.1016/j.neunet.2011.03.004
Abstract: It has been shown that plasticity is not a fixed property but changes depending on the location of the synapse on the neuron and/or changes of biophysical parameters. Here, we investigate how plasticity is shaped by feedback inhibition in a cortical microcircuit. We use a differential Hebbian learning rule to model spike-timing-dependent plasticity and show analytically that feedback inhibition shortens the time window for LTD during spike-timing-dependent plasticity but not for LTP. We then use a realistic GENESIS model to test two hypotheses about interneuron hypofunction and conclude that a reduction in GAD67 is the most likely candidate as the cause of the hypofrontality observed in schizophrenia.
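The analytical result above concerns the shape of the STDP window. A standard double-exponential window, with the LTD time constant shortened to mimic the qualitative effect of feedback inhibition, can be sketched as follows; the functional form and all constants are generic textbook assumptions, not the kernel derived in the paper.

```python
import numpy as np

# Generic double-exponential STDP window. Shortening tau_minus mimics,
# qualitatively, the paper's result that feedback inhibition shortens
# the LTD side of the window while leaving the LTP side unchanged.
def stdp_window(dt_spike, a_plus=1.0, a_minus=0.5,
                tau_plus=17.0, tau_minus=34.0):
    """Weight change for a post-minus-pre spike-time difference (ms)."""
    return np.where(dt_spike >= 0,
                    a_plus * np.exp(-dt_spike / tau_plus),
                    -a_minus * np.exp(dt_spike / tau_minus))

dts = np.linspace(-100, 100, 9)
print("without inhibition:", np.round(stdp_window(dts), 3))
print("with inhibition:   ",
      np.round(stdp_window(dts, tau_minus=10.0), 3))  # shortened LTD side
```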
2015, Journal Article
Ren, Guanjiao; Chen, Weihai; Dasgupta, Sakyasingha; Kolodziejski, Christoph; Wörgötter, Florentin; Manoonpong, Poramate
Multiple chaotic central pattern generators with learning for legged locomotion and malfunction compensation
Information Sciences 294: 666-682. DOI: 10.1016/j.ins.2014.05.001
Abstract: An originally chaotic system can be controlled into various periodic dynamics. When it is implemented in a legged robot's locomotion control as a central pattern generator (CPG), sophisticated gait patterns arise, allowing the robot to perform various walking behaviors. However, such a single chaotic CPG controller has difficulties dealing with leg malfunction: in the scenarios presented here, its movement permanently deviates from the desired trajectory. To address this problem, we extend the single chaotic CPG to multiple CPGs with learning. The learning mechanism is based on a simulated annealing algorithm. In a normal situation, the CPGs synchronize and their dynamics are identical. With leg malfunction or disability, the CPGs lose synchronization, leading to independent dynamics. In this case, the learning mechanism is applied to automatically adjust the remaining legs' oscillation frequencies so that the robot adapts its locomotion to deal with the malfunction. As a consequence, the trajectory produced by the multiple chaotic CPGs resembles the original trajectory far better than the one produced by a single CPG. The performance of the system is evaluated first in physical simulations of a quadruped and a hexapod robot, and finally on a real six-legged walking machine called AMOSII. The experimental results reveal that using multiple CPGs with learning is an effective approach for adaptive locomotion generation where, for instance, different body parts have to perform independent movements to compensate for malfunctions.
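The learning mechanism above is described as simulated annealing over the remaining legs' oscillation frequencies. A generic annealing loop of that shape might look as follows; the cost function, frequency encoding, and cooling schedule are placeholders, not the controller used on AMOSII, which evaluates the robot's actual walking trajectory.

```python
import random, math

# Generic simulated annealing over per-leg oscillation frequencies.
# The cost function is a stand-in that prefers frequencies close to an
# (assumed) reference gait; the real system scores the robot's trajectory.
def cost(freqs, reference=1.5):
    return sum((f - reference) ** 2 for f in freqs)

random.seed(3)
freqs = [1.0] * 5            # 5 remaining legs of a hexapod (assumed)
temp, cooling = 1.0, 0.995   # annealing schedule (assumed)
best = list(freqs)

for _ in range(2000):
    cand = [f + random.gauss(0, 0.05) for f in freqs]  # perturb frequencies
    d = cost(cand) - cost(freqs)
    if d < 0 or random.random() < math.exp(-d / temp):
        freqs = cand                     # accept, possibly a worse move
        if cost(freqs) < cost(best):
            best = list(freqs)
    temp *= cooling                      # cool down over time

print("adapted leg frequencies:", [round(f, 3) for f in best])
```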
2013, Journal Article
Faghihi, Faramarz; Kolodziejski, Christoph; Fiala, André; Wörgötter, Florentin; Tetzlaff, Christian
An information theoretic model of information processing in the Drosophila olfactory system: the role of inhibitory neurons for system efficiency
Frontiers in Computational Neuroscience 7, article 183. DOI: 10.3389/fncom.2013.00183
Abstract: Fruit flies (Drosophila melanogaster) rely on their olfactory system to process environmental information. This information has to be transmitted by the olfactory system, without system-relevant loss, to deeper brain areas for learning. Here we study how several parameters of the fly's olfactory system and of the environment influence olfactory information transmission. We designed an abstract model of the antennal lobe, the mushroom body, and the inhibitory circuitry. Mutual information between the olfactory environment, simulated in terms of different odor concentrations, and a sub-population of intrinsic mushroom body neurons (Kenyon cells) was calculated to quantify the efficiency of information transmission. With this method we study, on the one hand, the effect of different connectivity rates between olfactory projection neurons and firing thresholds of Kenyon cells; on the other hand, we analyze the influence of inhibition on the mutual information between environment and mushroom body. Our simulations show the expected linear relation between the connectivity rate from the antennal lobe to the mushroom body and the firing threshold of the Kenyon cells needed to obtain maximum mutual information for both low and high odor concentrations. However, contradicting everyday experience, high odor concentrations cause a drastic, and unrealistic, decrease in mutual information for all connectivity rates compared to low concentrations. When inhibition of the mushroom body is included, however, mutual information remains at high levels independent of the other system parameters. This finding points to a pivotal role of inhibition in fly information processing, without which system efficiency would be substantially reduced.
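The efficiency measure used above, mutual information between the odor environment and Kenyon-cell responses, can be estimated from a joint histogram of stimulus and response. A toy sketch; the stimulus set, the binary population responses, and their names are assumptions, not the paper's model.

```python
import numpy as np

def mutual_information(x, y):
    """MI (bits) between two discrete sequences, via the joint histogram."""
    xs, ys = np.unique(x), np.unique(y)
    joint = np.zeros((len(xs), len(ys)))
    for xi, yi in zip(x, y):
        joint[np.searchsorted(xs, xi), np.searchsorted(ys, yi)] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)   # marginal over stimuli
    py = joint.sum(axis=0, keepdims=True)   # marginal over responses
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

# Toy "odor concentrations" and binary Kenyon-cell population responses:
# a response tracking the stimulus carries high MI, a saturated one none.
rng = np.random.default_rng(4)
odor = rng.integers(0, 4, 20_000)             # 4 concentration levels
kc_tracking = (odor >= 2).astype(int)         # threshold-like response
kc_saturated = np.ones_like(odor)             # responds to everything
print(mutual_information(odor, kc_tracking))  # ~1 bit
print(mutual_information(odor, kc_saturated)) # ~0 bits
```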