Research Article, June 2011

QUANTIFYING THE FINEST SIMILARITY FOR CASE BASED REASONING TO IMPLEMENT WORD SENSE DISAMBIGUATION USING DIFFERENT LEARNING CLASSIFIERS

Abstract

In case-based reasoning, the solution for the current situation is derived from already existing similar cases. In this paper, a case-based approach using bigram features, namely the pre-bigram and the post-bigram, is applied to identify the sense of a word in English. Because bigram features are used, the feature size of the input and of the stored cases is two. A major task in the disambiguation process is text feature vectorization: instead of being handled as text, the features are represented in vector form. To collect cases similar to that of the ambiguous word, different distance-measuring functions are used. To select the best case from those collected, that is, to perform the disambiguation, three techniques are compared: the k-nearest neighbor method, Bayes' method, and an artificial neural network. Among these, Bayes' method produced the best performance, with a disambiguation accuracy of 84.76% using the pre-bigram.
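The case-retrieval step described above can be sketched as follows. This is a minimal, illustrative Python sketch, not the paper's actual implementation: the toy hashing vectorizer, the Euclidean distance choice, the case base, and all function names are assumptions made for demonstration. It shows how a (pre-bigram, post-bigram) context can be vectorized into a two-dimensional feature vector, compared against stored cases with a distance function, and resolved to a sense by majority vote among the k nearest cases.

```python
# Hypothetical sketch of case-based WSD retrieval with bigram features.
# The hashing vectorizer below is a toy stand-in for real corpus-derived
# features; the stored cases and senses are invented for illustration.
import math

def vectorize(pre_word, post_word):
    """Map a bigram context to a 2-dimensional numeric vector (feature size two)."""
    return (hash(pre_word) % 1000, hash(post_word) % 1000)

def euclidean(a, b):
    """One possible distance-measuring function over case vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def disambiguate(case_base, pre_word, post_word, k=3):
    """Return the majority sense among the k nearest stored cases (k-NN)."""
    query = vectorize(pre_word, post_word)
    nearest = sorted(case_base,
                     key=lambda case: euclidean(vectorize(*case[0]), query))[:k]
    senses = [sense for _, sense in nearest]
    return max(set(senses), key=senses.count)

# Toy case base for the ambiguous word "bank": ((pre, post), sense) pairs.
cases = [
    (("river", "flooded"), "shore"),
    (("the", "account"), "finance"),
    (("savings", "loan"), "finance"),
]
print(disambiguate(cases, "the", "account", k=1))  # prints "finance"
```

With k=1 this reduces to nearest-case selection; the paper's Bayes' method and neural-network selectors could replace the majority vote in `disambiguate` while reusing the same vectorized case base.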

Keywords

word sense disambiguation; pre-bigram; post-bigram; similarity function; case-based reasoning (CBR); part-of-speech (POS); k-nearest neighbor method (KNN); artificial neural network (ANN); Bayes' method
Details
Volume 1
Issue 1
Pages 49-56
ISSN 2248-9312