Herbold, Steffen
Preferred name: Herbold, Steffen
Official Name: Herbold, Steffen
Alternative Name: Herbold, S.
Main Affiliation
Publications (6)
2017, Journal Article
Herbold, Steffen; Harms, Patrick; Grabowski, Jens: "Combining usage-based and model-based testing for service-oriented architectures in the industrial practice". International Journal on Software Tools for Technology Transfer 19(3), pp. 309-324. DOI: 10.1007/s10009-016-0437-y. URL: https://resolver.sub.uni-goettingen.de/purl?gro-2/42473
Abstract: Usage-based testing focuses quality assurance on highly used parts of the software. The basis for this are usage profiles from which test cases are generated. There are two fundamental approaches in usage-based testing for deriving usage profiles: either the system under test (SUT) is observed during its operation and a usage profile is automatically inferred from the obtained usage data, or a usage profile is modeled by hand within a model-based testing (MBT) approach. In this article, we propose a third, combined approach: we automatically infer a usage profile and create a test data repository from usage data, and then create representations of the generated tests and test data in the test model of an MBT approach. The test model enables us to generate executable Testing and Test Control Notation version 3 (TTCN-3) code and thereby allows us to automate the test execution. Together with industrial partners, we adopted this approach in two pilot studies. Our findings show that usage-based testing can be applied in practice and greatly helps with the automation of tests. Moreover, we found that even if usage-based testing is not of interest, the incorporation of usage data can ease the application of MBT.

2013, Conference Paper
Herbold, Steffen; Harms, Patrick: "AutoQUEST - Automated Quality Engineering of Event-driven Software". In: Proceedings of the IEEE Sixth International Conference on Software Testing, Verification and Validation Workshops (ICSTW 2013), Luxembourg, March 18-22, 2013. IEEE, pp. 134-139. DOI: 10.1109/ICSTW.2013.23. URL: https://resolver.sub.uni-goettingen.de/purl?gro-2/62029
Abstract: In this paper, we present AutoQUEST, a testing platform for Event-Driven Software (EDS) that decouples the implementation of testing techniques from the concrete platform they are applied to. AutoQUEST provides the means to define testing techniques against an abstract Application Programming Interface (API) and supplies plug-ins to port the testing techniques to distinct platforms. The requirements on plug-in implementations for AutoQUEST are kept low to keep the porting effort low. We implemented several testing techniques on top of AutoQUEST and provide five plug-ins for concrete software platforms, which demonstrates the capabilities of our approach.

2014, Journal Article
Harms, Patrick; Herbold, Steffen; Grabowski, Jens: "Extended Trace-based Task Tree Generation". International Journal on Advances in Intelligent Systems 7(3-4), pp. 450-467. URL: https://resolver.sub.uni-goettingen.de/purl?gro-2/62030
Abstract: Task trees are a well-known way for the manual modeling of user interactions. They provide an ideal basis for software analysis, including usability evaluations, if they are generated from usage traces. In this paper, we present an approach for the automated generation of task trees based on traces of user interactions. For this, we utilize usage monitors to record all events caused by users. These events are written into log files, from which we generate task trees. The generation mechanism covers the detection of iterations and of common usage sequences, as well as the merging of similar variants of semantically equal sequences. We validate our method in two case studies and show that it is able to generate task trees representing actual user behavior.

2019, Journal Article
Herbold, Steffen; Trautsch, Fabian; Harms, Patrick; Herbold, Verena; Grabowski, Jens: "Experiences With Replicable Experiments and Replication Kits for Software Engineering Research". Advances in Computers 113, pp. 315-344. DOI: 10.1016/bs.adcom.2018.10.003. URL: https://resolver.sub.uni-goettingen.de/purl?gro-2/62024
Abstract: Replications and replicable research are currently gaining traction in the software engineering research community. Our research group made an effort in recent years to make our own research accessible for other researchers through the provision of replication kits that allow rerunning our experiments. Within this chapter, we present our experiences with replication kits. We first had to learn which contents are required, how to structure them, how to document them, and how best to share them with other researchers. While this sounds straightforward, there are many small potential mistakes which may have a strong negative impact on the usefulness and long-term availability of replication kits. We derive best practices for the content and the sharing of replication kits based on our experiences. Moreover, we outline how platforms for replicable research may further help our community, especially with problems related to the external validity of results. Finally, we discuss the lack of integration of replication kits into most current review processes at conferences and journals, and give one example of a review process into which replication kits were well integrated. Altogether, this chapter demonstrates that making research replicable is a challenging task and that there is a long road ahead until our community has a generally accepted and enforced standard of replicability.

2015, Conference Paper
Herbold, Steffen; De Francesco, Alberto; Grabowski, Jens; Harms, Patrick; Hillah, Lom M.; Kordon, Fabrice; Maesano, Ariele-Paolo; Maesano, Libero; Di Napoli, Claudia; De Rosa, Fabio; Schneider, Martin A.; Tonellotto, Nicola; Wendland, Marc-Florian; Wuillemin, Pierre-Henri: "The MIDAS Cloud Platform for Testing SOA Applications". In: 2015 IEEE 8th International Conference on Software Testing, Verification and Validation (ICST), Graz, Austria, April 13-17, 2015. IEEE, pp. 1-8. DOI: 10.1109/ICST.2015.7102636. URL: https://resolver.sub.uni-goettingen.de/purl?gro-2/62027
Abstract: While Service Oriented Architectures (SOAs) are largely deployed online, today often in a cloud, the testing of these systems still happens mostly locally. In this paper, we present the MIDAS Testing as a Service (TaaS), a cloud platform for the testing of SOAs. We focus on the testing of whole SOA orchestrations, a complex task due to the number of potential service interactions and the increasing complexity with each service that joins an orchestration. Since traditional testing does not scale well with such a complex setup, we employ a Model-based Testing (MBT) approach based on the Unified Modeling Language (UML) and the UML Testing Profile (UTP) within MIDAS. Through this, we provide methods for functional testing, security testing, and usage-based testing of service orchestrations. By harnessing the computational power of the cloud, MIDAS is able to generate and execute complex test scenarios that would be infeasible to run in a local environment.

2014, Conference Paper
Harms, Patrick; Herbold, Steffen; Grabowski, Jens: "Trace-based Task Tree Generation". In: Miller, Leslie; Culén, Alma Leora (eds.): ACHI 2014, The Seventh International Conference on Advances in Computer-Human Interactions, Barcelona, Spain, March 23-27, 2014. IARIA, pp. 337-342. URL: https://resolver.sub.uni-goettingen.de/purl?gro-2/62246
Abstract: Task trees are a well-known way for the manual modeling of user interactions. They provide an ideal basis for software analysis, including usability evaluations, if they are generated from usage traces. In this paper, we present a method for the automated generation of task trees based on traces of user interactions. For this, we utilize usage monitors to record all events caused by users. These events are written into log files, from which we generate task trees. We validate our method in three case studies.
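The usage-profile inference mentioned in the 2017 article on combining usage-based and model-based testing can be illustrated with a minimal sketch: infer a first-order Markov usage profile from recorded traces, then sample test cases from it so that frequently used paths are tested more often. This is not the article's actual implementation (which generates executable TTCN-3 from a test model); all event names, function names, and the choice of a first-order model are illustrative assumptions.

```python
import random
from collections import defaultdict

# Sentinel events marking the start and end of a user session.
START, END = "<start>", "<end>"

def infer_profile(traces):
    """Infer a first-order Markov usage profile from observed traces:
    count transitions between consecutive events, then normalize the
    counts into transition probabilities."""
    counts = defaultdict(lambda: defaultdict(int))
    for trace in traces:
        events = [START] + list(trace) + [END]
        for a, b in zip(events, events[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(succ.values()) for b, n in succ.items()}
            for a, succ in counts.items()}

def generate_test(profile, rng=random.Random(0), max_len=50):
    """Generate one test case as a weighted random walk through the
    profile; highly used paths are drawn more often."""
    event, test = START, []
    while event != END and len(test) < max_len:
        succ = profile[event]
        event = rng.choices(list(succ), weights=list(succ.values()))[0]
        if event != END:
            test.append(event)
    return test

# Illustrative traces of recorded user events.
traces = [["login", "search", "open", "logout"],
          ["login", "search", "search", "logout"],
          ["login", "open", "logout"]]
profile = infer_profile(traces)
print(generate_test(profile))
```

In the combined approach the article describes, such generated tests would then be represented in an MBT test model rather than executed directly.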
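The decoupling idea in the AutoQUEST abstract, where testing techniques are written against an abstract event API and thin per-platform plug-ins do the porting, can be sketched as follows. Class and method names here are assumptions for illustration only; they are not AutoQUEST's actual API.

```python
from abc import ABC, abstractmethod

class Event:
    """Platform-independent event; testing techniques see only this type."""
    def __init__(self, kind, target):
        self.kind, self.target = kind, target

class PlatformPlugin(ABC):
    """A plug-in only has to translate raw platform logs into Events,
    which keeps the porting effort per platform low."""
    @abstractmethod
    def parse(self, raw_log):
        ...

class WebPlugin(PlatformPlugin):
    """Hypothetical plug-in for a web platform."""
    def parse(self, raw_log):
        # e.g. "click #submit" -> Event("click", "#submit")
        return [Event(*line.split(maxsplit=1)) for line in raw_log]

def coverage_of_kinds(events):
    """A 'testing technique' written only against the abstract API:
    it works unchanged for any platform with a plug-in."""
    return {e.kind for e in events}

events = WebPlugin().parse(["click #submit", "type #name", "click #ok"])
print(coverage_of_kinds(events))
```

The point of the design is that `coverage_of_kinds` (standing in for a real testing technique) never touches platform specifics, so supporting a new platform means writing only a new `parse`.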
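Two of the steps named in the task-tree papers' abstracts, detecting iterations and detecting common usage sequences in event logs, can be illustrated with a small sketch. The tuple-based node representation and function names are assumptions; the published approach builds complete task trees and additionally merges similar variants of semantically equal sequences.

```python
from collections import Counter

def fold_iterations(trace):
    """Replace consecutive repetitions of an event with a single
    iteration node, e.g. a, a, a -> ("iteration", a)."""
    folded = []
    for event in trace:
        if folded and folded[-1] == ("iteration", event):
            continue  # already folded into an iteration node
        if folded and folded[-1] == event:
            folded[-1] = ("iteration", event)
        else:
            folded.append(event)
    return folded

def fold_most_common_pair(traces):
    """Wrap the most frequent adjacent event pair across all traces
    into a sequence node, one step toward a task tree."""
    pairs = Counter(p for t in traces for p in zip(t, t[1:]))
    if not pairs:
        return traces
    target = pairs.most_common(1)[0][0]
    result = []
    for t in traces:
        out, i = [], 0
        while i < len(t):
            if i + 1 < len(t) and (t[i], t[i + 1]) == target:
                out.append(("sequence",) + target)
                i += 2
            else:
                out.append(t[i])
                i += 1
        result.append(out)
    return result

trace = ["click_a", "click_a", "click_a", "scroll", "click_b"]
print(fold_iterations(trace))
```

Applied repeatedly to monitored event logs, such folding steps yield a hierarchy of iteration and sequence nodes, which is the task-tree structure the papers evaluate in their case studies.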