Algorithms for the Verification of the Semantic Relation Between a Compound and a Given Lexeme

Gudrun Kellner
Johannes Grünauer
Proceedings contribution on CD
Text mining on a lexical basis is quite well developed for the English language. In compounding languages, however, lexicalized words are often a combination of two or more semantic units: new words can be built easily by concatenating existing ones, without inserting any white space in between.

This poses a problem for existing search algorithms: such compounds can be highly relevant to a search request, but how can one determine whether a compound comprises a given lexeme? A string match can be considered an indication, but does not prove a semantic relation. The same problem arises in lexicon-based approaches, where signal words are defined as lexemes only and need to be identified in all forms of appearance, and hence also as components of compounds. This paper explores the characteristics of compounds and their constituent elements for German, and compares seven algorithms with regard to runtime and error rates. The results of this study are relevant to query analysis and term-weighting approaches in information retrieval system design.
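The gap between a string match and a true semantic relation can be illustrated with a minimal sketch (not one of the paper's seven algorithms; the toy lexicon and function names are invented for illustration): a plain substring test flags "Matrose" (sailor) as containing the lexeme "Rose", even though the word is not a compound of "Rose", while a crude boundary check that requires the remainder to be a known word avoids that false positive.

```python
# Illustrative sketch, not the authors' method: why a substring match
# is only an indication of a semantic relation, not proof of one.

def naive_match(compound: str, lexeme: str) -> bool:
    """Plain string match: necessary but not sufficient."""
    return lexeme.lower() in compound.lower()

# Toy lexicon of known German constituents (a real system would need
# full morphological decomposition and linking elements).
LEXICON = {"bahn", "hof", "schaden", "ersatz"}

def boundary_match(compound: str, lexeme: str) -> bool:
    """Accept only if the lexeme is the final constituent and the
    remaining head is itself a known word (or empty)."""
    c, l = compound.lower(), lexeme.lower()
    if not c.endswith(l):
        return False
    head = c[: -len(l)]
    return head == "" or head in LEXICON

# True positive: "Bahnhof" (train station) = Bahn + Hof
assert naive_match("Bahnhof", "Hof")
assert boundary_match("Bahnhof", "Hof")

# False positive: "Matrose" (sailor) merely contains "Rose"
assert naive_match("Matrose", "Rose")
assert not boundary_match("Matrose", "Rose")
```

Even this refinement only handles tail constituents and ignores linking elements such as the "-s-" in "Schadensersatz", which is part of what makes the problem non-trivial for German.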
TU Focus: 
Computational Science and Engineering

G. Kellner, J. Grünauer:
"Algorithms for the Verification of the Semantic Relation Between a Compound and a Given Lexeme";
in: "i-KNOW '12: Proceedings of the 12th International Conference on Knowledge Management and Knowledge Technologies", edited by: ACM; ACM Press, New York, 2012, ISBN: 978-1-4503-1242-4, Paper No. 5, 8 pages.

Additional Information

Last changed: 
05.12.2012 12:11:36
Department Focus: 
Business Informatics
Author List: 
G. Kellner, J. Grünauer