

Talks and Poster Presentations (with Proceedings-Entry):

R. Sabou, D. Winkler, S. Petrovic:
"Expert Sourcing to support the Identification of Model Elements in System Descriptions";
Talk: 10th Software Quality Days, Vienna; 2018-01-16 - 2018-01-19; in: "Software Quality. Methods and Tools for better Software and Systems. Proceedings of the 10th Software Quality Days, Scientific Program, Lecture Notes in Business Information Processing, LNBIP, Volume 302", Springer International Publishing, 302 (2018), ISBN: 978-3-319-71439-4; 83 - 99.



English abstract:
Context. Expert sourcing is a novel approach to support quality assurance: it relies on methods and tooling from crowdsourcing research to split model quality assurance tasks and parallelize task execution across several expert users. Typical quality assurance tasks focus on checking an inspection object, e.g., a model, against a reference document, e.g., a requirements specification, that is considered to be correct. For example, given a text-based system description and a corresponding model such as an Extended Entity Relationship (EER) diagram, experts are guided towards inspecting the model based on so-called expected model elements (EMEs). EMEs are entities, attributes, and relations that appear in the text and are reflected in the corresponding model. In common inspection tasks, EMEs are not explicitly expressed but only implicitly available via textual descriptions. Thus, a main improvement is to make EMEs explicit by using crowdsourcing mechanisms to drive model quality assurance among experts.

Objective and Method. In this paper, we investigate the effectiveness of identifying EMEs through expert sourcing. To that end, we perform a feasibility study in which we compare EMEs identified through expert sourcing with EMEs provided by a task owner who has deep knowledge of the entire system specification text.

Conclusions. Results of the data analysis show that the effectiveness of the crowdsourcing-style EME acquisition is influenced by the complexity of these EMEs: entity EMEs can be harvested with high recall and precision, but the lexical and semantic variations of attribute EMEs hamper their automatic aggregation and reaching consensus (these EMEs are harvested with high precision but limited recall). Based on these lessons learned, we propose a new task design for expert sourcing of EMEs.

Keywords:
Review, Models, Model quality assurance, Model elements, Empirical study, Feasibility study, Crowdsourcing, Task design


"Official" electronic version of the publication (accessed through its Digital Object Identifier - DOI)
http://dx.doi.org/10.1007/978-3-319-71440-0_5


Created from the Publication Database of the Vienna University of Technology.