Tamosiunaite, Minija
Preferred name: Tamosiunaite, Minija
Official Name: Tamosiunaite, Minija
Alternative Names: Tamosiunaite, M.; Tamošiunaite, Minija; Tamošiunaite, M.
Main Affiliation:
Publications (showing 1 - 10 of 32)
2011Journal Article [["dc.bibliographiccitation.firstpage","910"],["dc.bibliographiccitation.issue","11"],["dc.bibliographiccitation.journal","Robotics and Autonomous Systems"],["dc.bibliographiccitation.lastpage","922"],["dc.bibliographiccitation.volume","59"],["dc.contributor.author","Tamosiunaite, Minija"],["dc.contributor.author","Nemec, Bojan"],["dc.contributor.author","Ude, Ales"],["dc.contributor.author","Woergoetter, Florentin"],["dc.date.accessioned","2018-11-07T08:50:32Z"],["dc.date.available","2018-11-07T08:50:32Z"],["dc.date.issued","2011"],["dc.description.abstract","When describing robot motion with dynamic movement primitives (DMPs), goal (trajectory endpoint), shape and temporal scaling parameters are used. In reinforcement learning with DMPs, usually goals and temporal scaling parameters are predefined and only the weights for shaping a DMP are learned. Many tasks, however, exist where the best goal position is not a priori known, requiring to learn it. Thus, here we specifically address the question of how to simultaneously combine goal and shape parameter learning. This is a difficult problem because learning of both parameters could easily interfere in a destructive way. We apply value function approximation techniques for goal learning and direct policy search methods for shape learning. Specifically, we use \"policy improvement with path integrals\" and \"natural actor critic\" for the policy search. We solve a learning-to-pour-liquid task in simulations as well as using a Pa10 robot arm. Results for learning from scratch, learning initialized by human demonstration, as well as for modifying the tool for the learned DMPs are presented. We observe that the combination of goal and shape learning is stable and robust within large parameter regimes. Learning converges quickly even in the presence of disturbances, which makes this combined method suitable for robotic applications. (C) 2011 Elsevier B.V. All rights reserved."],["dc.identifier.doi","10.1016/j.robot.2011.07.004"],["dc.identifier.isi","000295912100005"],["dc.identifier.uri","https://resolver.sub.uni-goettingen.de/purl?gro-2/21715"],["dc.notes.status","zu prüfen"],["dc.notes.submitter","Najko"],["dc.publisher","Elsevier Science Bv"],["dc.relation.issn","1872-793X"],["dc.relation.issn","0921-8890"],["dc.title","Learning to pour with a robot arm combining goal and shape learning for dynamic movement primitives"],["dc.type","journal_article"],["dc.type.internalPublication","yes"],["dc.type.peerReviewed","yes"],["dc.type.status","published"],["dspace.entity.type","Publication"]]Details DOI WOS2008Journal Article [["dc.bibliographiccitation.firstpage","481"],["dc.bibliographiccitation.issue","3"],["dc.bibliographiccitation.journal","Journal of Computational Neuroscience"],["dc.bibliographiccitation.lastpage","500"],["dc.bibliographiccitation.volume","25"],["dc.contributor.author","Kulvicius, Tomas"],["dc.contributor.author","Tamosiunaite, Minija"],["dc.contributor.author","Ainge, James A."],["dc.contributor.author","Dudchenko, Paul A."],["dc.contributor.author","Woergoetter, Florentin"],["dc.date.accessioned","2018-11-07T11:08:36Z"],["dc.date.available","2018-11-07T11:08:36Z"],["dc.date.issued","2008"],["dc.description.abstract","Experiments with rodents demonstrate that visual cues play an important role in the control of hippocampal place cells and spatial navigation. Nevertheless, rats may also rely on auditory, olfactory and somatosensory stimuli for orientation. 
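For readers unfamiliar with the parameters named in this abstract, the sketch below gives the standard discrete DMP formulation from the literature (one transformation system per dimension). It is background material, not reproduced from the paper: the goal g and the shape weights w_i are the quantities learned by value-function approximation and policy search, respectively, and tau is the temporal scaling parameter.

```latex
\[
\tau \dot{z} = \alpha_z \bigl( \beta_z (g - y) - z \bigr) + f(x), \qquad
\tau \dot{y} = z, \qquad
\tau \dot{x} = -\alpha_x x,
\]
\[
f(x) = \frac{\sum_i \psi_i(x)\, w_i}{\sum_i \psi_i(x)}\; x \,(g - y_0),
\qquad \psi_i(x) = \exp\!\bigl(-h_i (x - c_i)^2\bigr).
\]
```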
2008, Journal Article
Kulvicius, Tomas; Tamosiunaite, Minija; Ainge, James A.; Dudchenko, Paul A.; Woergoetter, Florentin
Odor supported place cell model and goal navigation in rodents
Journal of Computational Neuroscience 25(3), 481-500. DOI: 10.1007/s10827-008-0090-x. PMID: 18431616
Abstract: Experiments with rodents demonstrate that visual cues play an important role in the control of hippocampal place cells and spatial navigation. Nevertheless, rats may also rely on auditory, olfactory, and somatosensory stimuli for orientation. It is also known that rats can track odors or self-generated scent marks to find a food source. Here we model odor-supported place cells using a simple feed-forward network and analyze the impact of olfactory cues on place cell formation and spatial navigation. The obtained place cells are used to solve a goal navigation task by a novel mechanism based on self-marking by odor patches combined with a Q-learning algorithm. We also analyze the impact of place cell remapping on goal-directed behavior when switching between two environments. We emphasize the importance of olfactory cues in place cell formation and show that the use of environmental and self-generated olfactory cues, together with a mixed navigation strategy, improves goal-directed navigation.
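The navigation mechanism in this abstract is driven by a Q-learning algorithm. As a reminder of what that update looks like, here is a minimal tabular sketch; the grid size, parameter values, and state encoding are illustrative assumptions, whereas the paper derives states from place-cell activity rather than from a raw grid index.

```python
import numpy as np

# Minimal tabular Q-learning sketch (illustrative only; the paper drives
# Q-learning from place-cell activations, not from a raw grid index).
n_states, n_actions = 25, 4          # hypothetical 5x5 grid, 4 moves
alpha, gamma, eps = 0.1, 0.9, 0.1    # assumed learning parameters
Q = np.zeros((n_states, n_actions))
rng = np.random.default_rng(0)

def q_update(s, a, r, s_next):
    """One-step Q-learning: Q(s,a) <- Q(s,a) + alpha*(r + gamma*max_a' Q(s',a') - Q(s,a))."""
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

def choose_action(s):
    """Epsilon-greedy action selection over the current Q estimates."""
    if rng.random() < eps:
        return int(rng.integers(n_actions))
    return int(Q[s].argmax())
```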
2012, Journal Article
Kulvicius, Tomas; Ning, KeJun; Tamosiunaite, Minija; Woergoetter, Florentin
Joining Movement Sequences: Modified Dynamic Movement Primitives for Robotics Applications Exemplified on Handwriting
IEEE Transactions on Robotics 28(1), 145-157. DOI: 10.1109/TRO.2011.2163863
Abstract: The generation of complex movement patterns, in particular in cases where one needs to smoothly and accurately join trajectories in a dynamic way, is an important problem in robotics. This paper presents a novel joining method based on a modification of the original dynamic movement primitive formulation. The new method reproduces the target trajectory with high accuracy with respect to both the position and the velocity profile, and produces smooth and natural transitions in position as well as in velocity space. The properties of the method are demonstrated by its application to simulated handwriting generation, which is also shown on a robot, where an adaptive algorithm is used to learn trajectories from human demonstration. These results demonstrate that the new method is a feasible alternative for joining movement sequences, with high potential for all robotics applications where trajectory joining is required.
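To make the joining problem concrete, the sketch below naively chains two plain point-attractor primitives (no forcing term): the second primitive simply starts where the first one ended. This is not the paper's modified DMP formulation; it only illustrates the baseline setup whose shortcomings (velocity jumps or uncontrolled velocity profiles at the junction) the modified formulation addresses. All gains and the time step are assumed values.

```python
import numpy as np

# Naive chaining of two plain point-attractor primitives (no forcing term).
# NOT the paper's modified DMP formulation; it only illustrates the baseline
# joining problem. Gains and time step are assumed values.
alpha, beta, tau, dt = 25.0, 6.25, 1.0, 0.001

def run_primitive(y0, v0, g, steps):
    """Euler-integrate tau*dv/dt = alpha*(beta*(g - y) - v), tau*dy/dt = v."""
    y, v, traj = y0, v0, []
    for _ in range(steps):
        v += dt / tau * alpha * (beta * (g - y) - v)
        y += dt / tau * v
        traj.append((y, v))
    return np.array(traj)

first = run_primitive(y0=0.0, v0=0.0, g=1.0, steps=1000)
# Join by starting the next primitive where the previous one ended. Restarting
# from rest (v0=0.0) makes the velocity jump at the junction; handing over the
# final velocity avoids the jump but still gives no control over the shape of
# the velocity profile across the junction.
second = run_primitive(y0=first[-1, 0], v0=first[-1, 1], g=2.0, steps=1000)
```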
2015, Journal Article
Tamosiunaite, Minija; Stein, Simon C.; Sutterlütti, Rahel M.; Wörgötter, Florentin
Perceptual influence of elementary three-dimensional geometry: (2) fundamental object parts
Frontiers in Psychology 6, article 1427. DOI: 10.3389/fpsyg.2015.01427. PMID: 26441797. License: CC BY 4.0
Abstract: Objects usually consist of parts, and the question arises whether there are perceptual features that allow breaking an object down into its fundamental parts without any additional (e.g., functional) information. As in the first paper of this sequence, we focus on the division of our world along convex-to-concave surface transitions. Here we use machine vision to produce convex segments from 3D scenes. We assume that a fundamental part is one that we can easily name while at the same time no natural subdivision into smaller parts is possible. Hence, in this experiment we presented the computer-vision-generated segments to our participants and asked whether they could identify and name them. Additionally, we controlled for segmentation reliability and found a clear trend that reliable convex segments have a high degree of nameability. We also observed that other image-segmentation methods do not yield nameable entities. This indicates that convex-concave surface transitions may indeed form the basis for dividing objects into meaningful entities. Other or further subdivisions do not appear to carry such a strong semantic link to our everyday language, as there are no names for them.
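The machine-vision step referred to in the abstract above segments a scene at convex-to-concave surface transitions. The snippet below shows one common way such a local convexity decision between two adjacent surface patches (each given by a centroid and an outward normal) can be made; the exact criterion, tolerances, and pipeline used in the paper may differ, so treat this purely as an illustration of the geometric idea.

```python
import numpy as np

# Illustrative local convexity test between two adjacent surface patches,
# each given by a centroid and an outward unit normal. Patches on a convex
# part "bend away" from each other; across a concave transition they bend
# towards each other. The paper's actual segmentation pipeline may use a
# different criterion and additional thresholds.
def is_convex_connection(c1, n1, c2, n2, angle_tol_deg=5.0):
    d = c2 - c1
    d = d / np.linalg.norm(d)
    # Convex if normal 1 leans along d no more than normal 2 does,
    # i.e. (n1 - n2) . d stays below a small tolerance.
    tol = np.sin(np.deg2rad(angle_tol_deg))
    return float(np.dot(n1 - n2, d)) <= tol

# Example: an outward ridge (convex) vs. a valley (concave).
c1, c2 = np.array([-1.0, 0.0, 1.0]), np.array([1.0, 0.0, 1.0])
ridge = is_convex_connection(c1, np.array([-1, 0, 1]) / np.sqrt(2),
                             c2, np.array([1, 0, 1]) / np.sqrt(2))    # True
valley = is_convex_connection(c1, np.array([1, 0, 1]) / np.sqrt(2),
                              c2, np.array([-1, 0, 1]) / np.sqrt(2))  # False
```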
2021, Journal Article
Pomp, Jennifer; Heins, Nina; Trempler, Ima; Kulvicius, Tomas; Tamosiunaite, Minija; Mecklenbrauck, Falko; Wurm, Moritz F.; Wörgötter, Florentin; Schubotz, Ricarda I.
Touching events predict human action segmentation in brain and behavior
NeuroImage 243, 118534. DOI: 10.1016/j.neuroimage.2021.118534

2013, Journal Article
Woergoetter, Florentin; Aksoy, Eren Erdal; Krueger, Norbert; Piater, Justus; Ude, Ales; Tamosiunaite, Minija
A Simple Ontology of Manipulation Actions Based on Hand-Object Relations
IEEE Transactions on Autonomous Mental Development 5(2), 117-134. DOI: 10.1109/TAMD.2012.2232291
Abstract: Humans can perform a multitude of different actions with their hands (manipulations). In spite of this, there have so far been only a few attempts to represent manipulation types in a way that reveals the underlying principles. Here we first discuss how manipulation actions are structured in space and time. For this we use as temporal anchor points those moments at which two objects (or hand and object) touch or un-touch each other during a manipulation. We show that this allows one to define a relatively small, tree-like manipulation ontology, with fewer than 30 fundamental manipulations. The temporal anchors also indicate when to pay attention to additional important information, for example when to consider trajectory shapes and relative poses between objects. As a consequence, a highly condensed representation emerges by which different manipulations can be recognized and encoded. Examples of manipulation recognition and execution by a robot based on this representation are given at the end of this study.
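A minimal sketch of the anchoring idea from the 2013 abstract: reduce a sequence of frame-wise contact relations between tracked objects to the frames at which a relation appears (touch) or disappears (un-touch). The data structure and the example frames are hypothetical; the paper's full representation and ontology contain considerably more than this.

```python
# Minimal sketch: use touch / un-touch moments as temporal anchor points.
# Each frame holds the set of object pairs currently in contact (hypothetical
# tracker output); anchors are the frames where this set changes.
def contact_anchors(frames):
    """Return (frame_index, touched_pairs, untouched_pairs) for every contact change."""
    anchors, prev = [], set()
    for t, contacts in enumerate(frames):
        contacts = set(contacts)
        if contacts != prev:
            anchors.append((t, contacts - prev, prev - contacts))
        prev = contacts
    return anchors

# Example (hypothetical): hand grasps cup, then the cup is lifted off the table.
frames = [
    [("cup", "table")],
    [("cup", "table"), ("hand", "cup")],   # touch: hand-cup
    [("hand", "cup")],                     # un-touch: cup-table
]
print(contact_anchors(frames))
```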
2010, Journal Article
Kulvicius, Tomas; Kolodziejski, Christoph; Tamosiunaite, Minija; Porr, Bernd; Woergoetter, Florentin
Behavioral analysis of differential Hebbian learning in closed-loop systems
Biological Cybernetics 103(4), 255-271. DOI: 10.1007/s00422-010-0396-4. PMID: 20556620
Abstract: Understanding closed-loop behavioral systems is a non-trivial problem, especially when they change during learning. Descriptions of closed-loop systems in terms of information theory date back to the 1950s; however, there have been only a few attempts that take learning into account, and these mostly measure the information of the inputs. In this study we analyze a specific type of closed-loop system by looking at the input as well as the output space. For this, we investigate simulated agents that perform differential Hebbian learning (STDP). In the first part we show that analytical solutions can be found for the temporal development of such systems in relatively simple cases. In the second part we try to answer the following question: how can we predict which system from a given class would be best for a particular scenario? This question is addressed using energy, input/output ratio, and entropy measures, and by investigating their development during learning. In this way we can show that, within well-specified scenarios, there are indeed agents that are optimal with respect to their structure and adaptive properties.
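For orientation, the sketch below implements the basic discrete-time differential Hebbian update, in which a plastic weight changes in proportion to its input times the temporal derivative of the neuron's output. The two Gaussian input signals, the learning rate, and the fixed "reflex" pathway are illustrative assumptions, not the closed-loop agents analyzed in the paper.

```python
import numpy as np

# Minimal discrete-time differential Hebbian update: a plastic weight changes
# in proportion to its input times the temporal derivative of the neuron's
# output. Signals and parameters are illustrative, not the paper's agent model.
mu, dt = 0.01, 0.01                      # assumed learning rate and time step
w = np.array([0.0, 1.0])                 # plastic weight, fixed "reflex" weight
v_prev = 0.0
for t in range(1000):
    u = np.array([np.exp(-(((t - 400) * dt) / 0.5) ** 2),   # early input (plastic)
                  np.exp(-(((t - 450) * dt) / 0.5) ** 2)])   # later input (reflex)
    v = float(w @ u)                                         # linear neuron output
    w[0] += mu * u[0] * (v - v_prev) / dt                    # dw = mu * u * dv/dt
    v_prev = v
```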
2010, Journal Article (erratum)
Kulvicius, Tomas; Tamosiunaite, Minija; Ainge, James A.; Dudchenko, Paul A.; Woergoetter, Florentin
Odour supported place cell model and goal navigation in rodents (vol 25, pg 481, 2008)
Journal of Computational Neuroscience 28(3), 617. DOI: 10.1007/s10827-010-0216-9

2012, Journal Article
Ainge, James A.; Tamosiunaite, Minija; Woergoetter, Florentin; Dudchenko, Paul A.
Hippocampal place cells encode intended destination, and not a discriminative stimulus, in a conditional T-maze task
Hippocampus 22(3), 534-543. DOI: 10.1002/hipo.20919. PMID: 21365712
Abstract: The firing of hippocampal place cells encodes instantaneous location but can also reflect where the animal is heading (prospective firing) or where it has just come from (retrospective firing). The current experiment sought to explicitly control the prospective firing of place cells with visual discriminanda in a T-maze. Rats were trained to associate a specific visual stimulus (e.g., a flashing light) with the occurrence of reward in a specific location (e.g., the left arm of the T). A different visual stimulus (e.g., a constant light) signaled the availability of reward in the opposite arm of the T. After this discrimination had been acquired, rats were implanted with electrodes in the CA1 layer of the hippocampus. Place cells were then identified and recorded as the animals performed the discrimination task while the presentation of the visual stimulus was manipulated. A subset of CA1 place cells fired at different rates on the central stem of the T depending on the animal's intended destination, but this conditional or prospective firing was independent of the visual discriminative stimulus. The firing rate of some place cells was, however, modulated by changes in the timing of the presentation of the visual stimulus. Thus, place cells fired prospectively, but this firing did not appear to be controlled directly by a salient visual stimulus that controlled behavior.

2010, Journal Article (erratum)
Tamosiunaite, Minija; Ainge, James A.; Kulvicius, Tomas; Porr, Bernd; Dudchenko, Paul A.; Woergoetter, Florentin
Path-finding in real and simulated rats: assessing the influence of path characteristics on navigation learning (vol 25, pg 562, 2008)
Journal of Computational Neuroscience 28(3), 619. DOI: 10.1007/s10827-010-0217-8