Abstract
Although the support vector machine (SVM) and its variants have been combined successfully with partitioning strategies for multiclass classification, a series of individual classifiers must be considered separately, which significantly limits the sparseness of the solution. In this work, from a novel perspective, we propose a regular simplex support vector machine (RSSVM) for K-class classification. RSSVM maps the K classes to the K vertices of a (K−1)-dimensional regular simplex, so that K-class classification becomes a (K−1)-output learning task. The training loss is measured by comparing the squared distances between each sample's output point and the vertices. As a result, RSSVM can be implemented by constructing a single primal problem with linear inequality constraints, endowing it with excellent sparseness. Experimental results show that RSSVM achieves not only excellent sparseness but also superior overall accuracy, efficiency, and scalability.