QUANTIFYING THE FINEST SIMILARITY FOR CASE BASED REASONING TO IMPLEMENT WORD SENSE DISAMBIGUATION USING DIFFERENT LEARNING CLASSIFIERS
Abstract
In case-based reasoning, the solution to the current situation is derived from similar, already existing cases. In this paper, a case-based approach using bigram features, namely the pre-bigram and post-bigram of the target word, is applied to identify the sense of a word in the English language. Because bigram features are used, the feature size of both the input and the stored cases is two. A major task in the disambiguation process is text feature vectorization: instead of treating the features as raw text, they are represented in vector form. Different distance-measuring functions are used to retrieve cases similar to that of the ambiguous word. To select the best case from the retrieved cases, that is, to perform disambiguation, three techniques are compared: the K-nearest neighbour method, the Bayes method, and an artificial neural network. Among these, the Bayes method produced outstanding performance, with a disambiguation accuracy of 84.76% using pre-bigram features.
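To make the described pipeline concrete, the following is a minimal sketch (not the authors' implementation) of sense selection from a case base using pre-bigram and post-bigram features and a Bayes classifier with Laplace smoothing. The case base, word "bank", and its senses are hypothetical toy data introduced only for illustration.

```python
import math
from collections import Counter

# Hypothetical toy case base for the ambiguous word "bank": each case
# stores the pre-bigram word, the post-bigram word, and the labelled sense.
cases = [
    ("river", "flooded", "shore"),
    ("muddy", "eroded", "shore"),
    ("the", "account", "finance"),
    ("central", "loan", "finance"),
    ("savings", "deposit", "finance"),
]

def bayes_sense(pre, post, cases, alpha=1.0):
    """Pick the sense maximizing P(sense) * P(pre|sense) * P(post|sense),
    estimated from the case base with add-alpha (Laplace) smoothing."""
    senses = Counter(s for _, _, s in cases)
    # Vocabulary over both feature positions, for the smoothing denominator.
    vocab = {w for p, q, _ in cases for w in (p, q)} | {pre, post}
    best, best_score = None, float("-inf")
    for sense, n in senses.items():
        pre_counts = Counter(p for p, _, s in cases if s == sense)
        post_counts = Counter(q for _, q, s in cases if s == sense)
        score = math.log(n / len(cases))                                  # prior
        score += math.log((pre_counts[pre] + alpha) / (n + alpha * len(vocab)))
        score += math.log((post_counts[post] + alpha) / (n + alpha * len(vocab)))
        if score > best_score:
            best, best_score = sense, score
    return best

print(bayes_sense("river", "overflowed", cases))  # -> shore
```

Even though "overflowed" never occurs in the case base, the matching pre-bigram "river" pulls the decision toward the "shore" sense, illustrating why the paper reports its best accuracy with pre-bigram features.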