imagination tasks, and feedback represents the feedback period)
average of signals acquired from F3, T7, P3 and Cz from the signal acquired from C3 and, similarly, the average of F4, Cz, P4 and T8 from C4. From the existing literature [20], it is clear that the relevant MI information lies in the 8–12 Hz (μ-rhythm) and 16–24 Hz (central-β rhythm) bands. Thus, the spatially filtered EEG signals are temporally filtered in the frequency range of 8–25 Hz. For this purpose, we have designed an IIR elliptic filter of order 6, with a pass-band ripple of 1 dB and a stop-band attenuation of 50 dB. The merit of selecting an elliptic filter lies in its good frequency-domain characteristics: a sharp roll-off and independent control over the pass-band and stop-band ripples.

The features selected in this paper are the adaptive auto-regressive parameters of the EEG from each electrode. An adaptive auto-regressive (AAR) model takes into account the non-stationary behavior of a signal by varying the auto-regressive (AR) parameters with time. An adaptive auto-regressive model of order p, AAR(p), is described as

x(k) = a_1(k) x(k−1) + a_2(k) x(k−2) + ··· + a_p(k) x(k−p) + η(k),    (1)

where x(k) is the kth sample of the series under observation, η(k) is the zero-mean Gaussian noise with variance σ_η(k)^2, drawn from the normal distribution N(0, σ_η(k)^2), and a_i(k) are the time-varying AR coefficients. As noted from (1), the current sample is predicted by the past p samples and the new information introduced by the noise. Thus, η(k) is also called the innovation process. Several algorithms are used for the estimation of the AAR coefficients, such as the least-mean-square (LMS) method, the recursive-least-square (RLS) method, the recursive AR (RAR) method and Kalman filtering [24].

In this study, we have employed the Kalman filter as the estimation algorithm, which can be summarized by the following equations:

e_k = x_k − â_{k−1}^T X_k,
k_k = A_{k−1} X_k / (X_k^T A_{k−1} X_k + V_k),
â_k = â_{k−1} + k_k e_k,
A_k = (I − k_k X_k^T) A_{k−1} + UC · I,

where e_k is the one-step prediction error, k_k is the Kalman gain vector, I is the identity matrix, X_k = [x_{k−1}, x_{k−2}, ..., x_{k−p}]^T is a vector of the values of the past samples, â_k = [â_{1,k}, â_{2,k}, ..., â_{p,k}]^T is a vector of the AAR parameters, A_k is the state error covariance matrix, V_k is the estimated variance of the innovation, UC is the update coefficient, and [·]^T denotes the vector transpose. Further details on the AAR with the Kalman filter as estimator are discussed in [24]. In the present context, the experiments undertaken reveal that an AAR model of order 6 with an update coefficient UC = 0.0085 discriminates the MI tasks effectively. Further, the feature vectors are normalized in the range [−1, 1]. The final dimensions of the feature vector (for every session) are 200 trials × 2 electrodes × 6 AAR coefficients.

The feature vector thus prepared is then employed to train an AdaBoost classifier to produce the four MI states as outputs. AdaBoost, developed by Freund and Schapire [13], is the most influential boosting algorithm. In this family of ensemble techniques, the weights of the training data are initially uniform. After each iteration, the easily classified patterns are assigned lower weights and the difficult patterns are assigned higher weights, thus increasing the focus of the learners on the difficult ones. At every iteration, the base learner prepares a new prediction rule, and after N iterations, N prediction rules are prepared to construct the final discriminant, by which the unknown patterns can be recognized. The final prediction rule is the weighted majority vote of all the predictors, and the final accuracy of the classifier is effectively boosted.
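The preprocessing and feature-extraction chain described above can be sketched as follows. This is a minimal illustration, not the authors' code: the sampling rate `FS`, the dict-of-channels layout, and the innovation-variance tracking rule inside the Kalman update are assumptions (the exact variance-update mode is one of several discussed in [24]).

```python
import numpy as np
from scipy.signal import ellip, sosfiltfilt

FS = 250  # sampling rate in Hz (assumed; not stated in this excerpt)

def spatial_filter(eeg):
    """Surface-Laplacian derivation described above: subtract the mean of
    the four surrounding electrodes from C3 and from C4.  `eeg` is assumed
    to be a dict mapping channel names to 1-D sample arrays."""
    c3 = eeg["C3"] - (eeg["F3"] + eeg["T7"] + eeg["P3"] + eeg["Cz"]) / 4.0
    c4 = eeg["C4"] - (eeg["F4"] + eeg["Cz"] + eeg["P4"] + eeg["T8"]) / 4.0
    return c3, c4

# Elliptic band-pass, 8-25 Hz, 1 dB pass-band ripple, 50 dB stop-band
# attenuation.  SciPy doubles the order for band-pass designs, so
# ellip(3, ...) yields the 6th-order filter described in the text.
SOS = ellip(3, 1, 50, [8, 25], btype="bandpass", fs=FS, output="sos")

def temporal_filter(x):
    return sosfiltfilt(SOS, x)

def aar_kalman(x, p=6, uc=0.0085):
    """Kalman-filter estimation of AAR(p) coefficients, following the
    update equations above; returns one coefficient vector per sample."""
    n = len(x)
    a = np.zeros(p)                      # AAR parameter estimates a_k
    A = np.eye(p)                        # state error covariance A_k
    v = 1.0                              # innovation variance V_k
    coeffs = np.zeros((n, p))
    for k in range(p, n):
        X = x[k - p:k][::-1]             # [x_{k-1}, ..., x_{k-p}]
        e = x[k] - a @ X                 # one-step prediction error e_k
        v = (1 - uc) * v + uc * e ** 2   # track innovation variance
        kk = (A @ X) / (X @ A @ X + v)   # Kalman gain vector k_k
        a = a + kk * e                   # parameter update
        A = (np.eye(p) - np.outer(kk, X)) @ A + uc * np.eye(p)
        coeffs[k] = a
    return coeffs
```

Per session, applying `aar_kalman` to the filtered C3 and C4 traces of each trial and keeping the final coefficient vectors reproduces the 200 × 2 × 6 feature layout stated above.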
For our study, we have employed the Support Vector Machine (SVM) [1, 9] as the base learner. An SVM classifier maps the input vectors into a high-dimensional feature space through some nonlinear mapping, in which the data points of the two classes can easily be separated by a hyperplane of maximum margin.
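The boosting scheme described above can be sketched as follows, assuming scikit-learn's `SVC` as the weighted base learner. The multi-class weight update shown here is the standard SAMME variant, which reduces to classical AdaBoost for two classes; it is an assumption, not necessarily the authors' exact scheme.

```python
import numpy as np
from sklearn.svm import SVC

def adaboost_svm_fit(X, y, n_rounds=10):
    """AdaBoost (SAMME variant, assumed) with SVM base learners."""
    n = len(y)
    classes = np.unique(y)
    K = len(classes)
    w = np.full(n, 1.0 / n)                 # initially uniform weights
    learners, alphas = [], []
    for _ in range(n_rounds):
        clf = SVC(kernel="rbf", gamma="scale")
        clf.fit(X, y, sample_weight=w)
        miss = clf.predict(X) != y
        err = max(np.dot(w, miss), 1e-10)   # weighted training error
        if err >= 1 - 1.0 / K:              # no better than chance: stop
            break
        alpha = np.log((1 - err) / err) + np.log(K - 1)
        w *= np.exp(alpha * miss)           # raise the weights of the
        w /= w.sum()                        # difficult patterns
        learners.append(clf)
        alphas.append(alpha)
    return learners, alphas, classes

def adaboost_svm_predict(learners, alphas, classes, X):
    """Weighted majority vote of all predictors."""
    votes = np.zeros((len(X), len(classes)))
    for clf, alpha in zip(learners, alphas):
        pred = clf.predict(X)
        for j, c in enumerate(classes):
            votes[:, j] += alpha * (pred == c)
    return classes[np.argmax(votes, axis=1)]
```

Here each round refits the SVM on reweighted data via `sample_weight`, and the final discriminant is the alpha-weighted vote over all rounds, matching the description above.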