Lüddecke, Timo
Preferred Name: Lüddecke, Timo
Official Name: Lüddecke, Timo
Alternative Names: Lüddecke, T.; Lueddecke, Timo; Lueddecke, T.; Luddecke, Timo; Luddecke, T.
Main Affiliation:
Publications (5 of 5)
2019, Journal Article
Langenberg, Tristan; Lüddecke, Timo; Wörgötter, Florentin (2019). "Deep Metadata Fusion for Traffic Light to Lane Assignment." IEEE Robotics and Automation Letters 4(2): 973-980. DOI: 10.1109/LRA.2019.2893446. URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/111055

2019, Journal Article
Lüddecke, Timo; Kulvicius, Tomas; Wörgötter, Florentin Andreas (2019). "Context-based affordance segmentation from 2D images for robot actions." Robotics and Autonomous Systems 119: 92-107. DOI: 10.1016/j.robot.2019.05.005. URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/61837

2019, Journal Article
Lüddecke, Timo; Agostini, Alejandro; Fauth, Michael; Tamosiunaite, Minija; Wörgötter, Florentin (2019). "Distributional semantics of objects in visual scenes in comparison to text." Artificial Intelligence 274: 44-65. DOI: 10.1016/j.artint.2018.12.009. URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/61490. License: CC BY-NC-ND 4.0. Funding: EU H2020 grant 731761 (IMAGINE).
Abstract: The distributional hypothesis states that the meaning of a concept is defined through the contexts it occurs in. In practice, word co-occurrence and proximity are often analyzed in text corpora to obtain a real-valued semantic vector for a given word, which is taken to (at least partially) encode that word's meaning. Here we transfer this idea from text to images, where pre-assigned labels of other objects or activations of convolutional neural networks serve as context. We propose a simple algorithm that extracts and processes object contexts from an image database and yields semantic vectors for objects. We show empirically that these representations perform on par with state-of-the-art distributional models over a set of conventional objects. For this we employ well-known word benchmarks in addition to a newly proposed object-centric benchmark.
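The abstract above describes deriving semantic vectors for objects from the contexts they co-occur in across an image database. What follows is a minimal sketch of that general idea in Python, assuming toy data, PPMI weighting, and a simplified count normalization; it illustrates the distributional principle, not the paper's exact pipeline.

# Illustrative sketch (assumptions: toy data, PPMI weighting, simplified
# normalization): build semantic vectors for object classes from per-image
# label co-occurrence, transferring distributional semantics from text to images.
from collections import Counter
from itertools import permutations
import math

# Hypothetical toy "image database": each image is the set of object
# labels annotated in it.
images = [
    {"cup", "table", "chair"},
    {"cup", "table", "plate"},
    {"car", "road", "traffic_light"},
    {"car", "road", "person"},
]

vocab = sorted(set().union(*images))

# Count single-object occurrences and within-image co-occurrences.
cooc = Counter()
occ = Counter()
for labels in images:
    for w in labels:
        occ[w] += 1
    for a, b in permutations(labels, 2):
        cooc[(a, b)] += 1

total = sum(occ.values())

def ppmi(a, b):
    """Positive pointwise mutual information of two object labels
    (simplified: single-count normalizer shared by both terms)."""
    joint = cooc[(a, b)]
    if joint == 0:
        return 0.0
    return max(math.log((joint * total) / (occ[a] * occ[b])), 0.0)

# Each object's semantic vector: PPMI with every other object as context.
vectors = {w: [ppmi(w, c) for c in vocab] for w in vocab}

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Objects that share contexts ("cup" and "plate") come out similar;
# objects from disjoint scenes ("cup" and "road") do not.
print(cosine(vectors["cup"], vectors["plate"]))
print(cosine(vectors["cup"], vectors["road"]))

The paper additionally considers CNN activations as context and evaluates against word benchmarks; this sketch covers only the label-co-occurrence case.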
2021, Journal Article
Kulvicius, Tomas; Herzog, Sebastian; Lüddecke, Timo; Tamosiunaite, Minija; Wörgötter, Florentin (2021). "One-Shot Multi-Path Planning Using Fully Convolutional Networks in a Comparison to Other Algorithms." Frontiers in Neurorobotics 14. Frontiers Media S.A. DOI: 10.3389/fnbot.2020.600984. URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/83006. License: CC BY 4.0.
Abstract: Path planning plays a crucial role in many robotics applications, for example in planning an arm movement or in navigation. Most existing approaches are iterative: a path is generated by predicting the next state from the current state. Moreover, in multi-agent systems, paths are usually planned for each agent separately (decentralized approach). In centralized approaches, paths are computed for all agents simultaneously by solving a complex optimization problem, which does not scale well as the number of agents increases. In contrast, we propose a novel method using a homogeneous convolutional neural network that generates complete paths, even for more than one agent, in one shot, i.e., with a single prediction step. First we consider single-path planning in 2D and 3D mazes. Here, we show that our method successfully generates optimal or close-to-optimal (in most cases <10% longer) paths in more than 99.5% of the cases. Next we analyze multi-paths, either from a single source to multiple end-points or vice versa. Although the model was never trained on multiple paths, it also generates optimal or near-optimal (<22% longer) paths in 96.4% and 83.9% of the cases when generating two and three paths, respectively. Performance is also compared to several state-of-the-art algorithms.
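The entry above describes predicting a complete path with a single forward pass of a fully convolutional network. Below is a minimal PyTorch sketch of that one-shot idea; the layer sizes, the two-channel input encoding (obstacles plus start/goal markers), and the per-cell output head are illustrative assumptions, not the architecture from the paper, and the network would still need training on grids paired with optimal paths before its output means anything.

# Sketch of one-shot path prediction with a fully convolutional network.
# Architecture details are assumed for illustration, not taken from the paper.
import torch
import torch.nn as nn

class OneShotPlanner(nn.Module):
    def __init__(self, channels=32, depth=6):
        super().__init__()
        layers = [nn.Conv2d(2, channels, 3, padding=1), nn.ReLU()]
        for _ in range(depth - 1):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU()]
        layers += [nn.Conv2d(channels, 1, 1)]  # per-cell path logit
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        # x: (batch, 2, H, W) -- channel 0: obstacle map, channel 1: start/goal markers
        return torch.sigmoid(self.net(x))  # (batch, 1, H, W) predicted path map

# Toy 2D maze: a wall in the obstacle channel, start and goal in the second channel.
grid = torch.zeros(1, 2, 32, 32)
grid[0, 0, 10:22, 16] = 1.0   # vertical wall segment
grid[0, 1, 2, 2] = 1.0        # start
grid[0, 1, 29, 29] = 1.0      # goal

model = OneShotPlanner()      # untrained: output is only shape-correct here
path_map = model(grid)        # whole path map from a single prediction step
print(path_map.shape)         # torch.Size([1, 1, 32, 32])

Because every layer is convolutional, the same network applies to any grid size, and marking several start or goal cells in the second channel is what makes the multi-path case possible without retraining, as the abstract reports.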
2020, Journal Article
Lüddecke, Timo; Wörgötter, Florentin (2020). "Fine-grained action plausibility rating." Robotics and Autonomous Systems 129: 103511. DOI: 10.1016/j.robot.2020.103511. URI: https://resolver.sub.uni-goettingen.de/purl?gro-2/81762