Fusion of Direct Probabilistic Multi-Class Support Vector Machines to Enhance Mental Tasks Recognition Performance in BCI Systems

Abstract
Support vector machines (SVMs) are powerful discriminative models that were originally designed to solve only dichotomous problems. Their extension to multi-category problems remains an active research issue. The proposed methods can roughly be classified into two strategies: the indirect strategy, which subdivides the multi-class problem into a set of bi-class sub-problems, and the direct strategy, which attempts to build a multi-class SVM (M-SVM) by solving a global optimization problem. In this study, we first implement and compare four M-SVM models: Weston and Watkins, Crammer and Singer, Lee, Lin and Wahba, and the Quadratic Loss M-SVM. The four models work separately; the outputs of each are calibrated confidence measures that provide probability estimates over five distinct mental tasks. We then propose to average the outputs of the four M-SVMs in order to exploit the advantages of each model and consequently improve the classification decision. The study demonstrates that all four M-SVMs produce approximately similar accuracy; however, the Crammer and Singer model appears more accurate for mental task recognition, with an average accuracy between 74.95% and 93.3%. Moreover, a significant improvement in classification accuracy is obtained by the fusion of the M-SVM outputs. Copyright © 2018 Praise Worthy Prize - All rights reserved.
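The fusion step described above averages the calibrated probability outputs of the four M-SVM models and takes the class with the highest fused probability. A minimal sketch of this averaging rule, using hypothetical probability vectors for a single trial over the five mental-task classes (the values and variable names are illustrative, not taken from the paper):

```python
import numpy as np

# Hypothetical calibrated probability outputs of the four M-SVM models
# for one trial over five mental-task classes (each vector sums to 1).
p_ww  = np.array([0.50, 0.20, 0.10, 0.10, 0.10])  # Weston and Watkins
p_cs  = np.array([0.60, 0.15, 0.10, 0.10, 0.05])  # Crammer and Singer
p_llw = np.array([0.40, 0.30, 0.10, 0.10, 0.10])  # Lee, Lin and Wahba
p_ql  = np.array([0.55, 0.20, 0.10, 0.05, 0.10])  # Quadratic Loss M-SVM

# Fusion by simple averaging of the four probability vectors;
# the result is again a valid probability distribution.
p_fused = np.mean([p_ww, p_cs, p_llw, p_ql], axis=0)

# Final decision: the mental task with the highest fused probability.
predicted_task = int(np.argmax(p_fused))
```

Averaging is the simplest probabilistic combiner: it keeps the fused output a valid distribution and tends to reduce the variance of any single model's estimate, which is consistent with the accuracy improvement reported in the abstract.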