

Contributions to Proceedings:

G. Kellner, J. Grünauer:
"Algorithms for the Verification of the Semantic Relation Between a Compound and a Given Lexeme";
in: "i-KNOW '12: Proceedings of the 12th International Conference on Knowledge Management and Knowledge Technologies", issued by: ACM; ACM Press, New York, 2012, ISBN: 978-1-4503-1242-4, Paper ID 5, 8 pages.



English abstract:
Text mining on a lexical basis is quite well developed for the English language. In compounding languages, however, lexicalized words are often a combination of two or more semantic units: new words can be built easily by concatenating existing ones, without any white space in between.
This poses a problem for existing search algorithms: such compounds can be highly relevant to a search request, but how can one verify that a compound comprises a given lexeme? A string match can be taken as an indication, but it does not prove a semantic relation. The same problem arises in lexicon-based approaches, where signal words are defined only as lexemes and need to be identified in all of their surface forms, hence also as components of compounds. This paper explores the characteristics of compounds and their constituent elements for German, and compares seven algorithms with regard to runtime and error rates. The results of this study are relevant to query analysis and term weighting approaches in information retrieval system design.

Keywords:
Information Retrieval, Compound, Stemming, Semantic relation


"Official" electronic version of the publication (accessed through its Digital Object Identifier - DOI)
http://dx.doi.org/10.1145/2362456.2362463

Electronic version of the publication:
http://publik.tuwien.ac.at/files/PubDat_211988.pdf


Created from the Publication Database of the Vienna University of Technology.