yat  0.16pre
theplu::yat::classifier::SVM Class Reference

Support Vector Machine. More...

#include <yat/classifier/SVM.h>

Public Member Functions

 SVM (void)
 SVM (const SVM &)
 Copy constructor.
virtual ~SVM ()
SVM * make_classifier (void) const
 Create an untrained copy of SVM. More...
const utility::Vector & alpha (void) const
double C (void) const
unsigned long int max_epochs (void) const
void max_epochs (unsigned long int)
 set maximal number of epochs in training
const theplu::yat::utility::Vector & output (void) const
void predict (const KernelLookup &input, utility::Matrix &predict) const
void reset (void)
 make SVM untrained More...
void set_C (const double)
 sets the C-Parameter
void train (const KernelLookup &kernel, const Target &target)
bool trained (void) const

Detailed Description

Support Vector Machine.

Member Function Documentation

const utility::Vector& theplu::yat::classifier::SVM::alpha ( void  ) const
Returns: the alpha parameters
double theplu::yat::classifier::SVM::C ( void  ) const

The C-parameter is the balance term (see train()). A very large C means the training will focus on getting samples correctly classified, with a risk of overfitting and poor generalisation. A too small C results in a training in which misclassifications are not penalized. C is weighted with respect to class size such that $ n_+C_+ = n_-C_- $, meaning a misclassification of the smaller group is penalized harder. This balance is equivalent to the one occurring in regression with regularisation, or in ANN training with a weight-decay term. Default is C set to infinity.

Returns: the mean of the vector $ C_i $
SVM* theplu::yat::classifier::SVM::make_classifier ( void  ) const

Create an untrained copy of SVM.

Returns: a dynamically allocated SVM, which has to be deleted by the caller to avoid memory leaks.
unsigned long int theplu::yat::classifier::SVM::max_epochs ( void  ) const

Default is max_epochs set to 100,000.

Returns: the maximal number of epochs
const theplu::yat::utility::Vector& theplu::yat::classifier::SVM::output ( void  ) const

The output is calculated as $ o_i = \sum_j \alpha_j t_j K_{ij} + bias $, where $ t $ is the target.

Returns: the output of the training samples
void theplu::yat::classifier::SVM::predict ( const KernelLookup & input,
utility::Matrix & predict 
) const

Generate prediction predict from input. The prediction is calculated as the output times the margin, i.e., the geometric distance from the decision hyperplane: $ \frac{ \sum \alpha_j t_j K_{ij} + bias}{|w|} $. The output has 2 rows. The first row is for binary target true, and the second is for binary target false. The second row is superfluous, as it is the first row negated; it exists only to align with multi-class SupervisedClassifiers. Each column in input and output corresponds to a sample to predict. Each row in input corresponds to a training sample; more precisely, row i in input should correspond to row i in the KernelLookup that was used for training.

void theplu::yat::classifier::SVM::reset ( void  )

make SVM untrained

Sets the variable trained to false; the values of other member variables are undefined after the call.

New in yat 0.6
void theplu::yat::classifier::SVM::train ( const KernelLookup & kernel,
const Target & target 
) 
Trains the SVM following Platt's SMO, with Keerthi's modification. Minimizes $ \frac{1}{2}\sum y_iy_j\alpha_i\alpha_j(K_{ij}+\frac{1}{C_i}\delta_{ij}) - \sum \alpha_i $, which corresponds to minimizing $ \sum w_i^2+\sum C_i\xi_i^2 $.

If the training problem is not linearly separable and C is set to infinity, the minimum will be located at infinity, and thus will not be reached within the maximal number of epochs. More precisely, when the problem is not linearly separable, there exists an eigenvector of $ H_{ij}=y_iy_jK_{ij} $ within the space defined by the conditions $ \alpha_i>0 $ and $ \sum \alpha_i y_i = 0 $. As the eigenvalue is zero in this direction, the quadratic term does not contribute to the objective; the objective then consists only of the linear term and hence has no minimum. This problem only occurs when $ C $ is set to infinity, because for a finite $ C $ all eigenvalues are finite. However, for a large $ C $ (and a non-linearly separable training problem) there exists an eigenvector corresponding to a small eigenvalue, which means the minimum has moved from infinity to "very far away". In practice this will also result in the minimum not being reached within the maximal number of epochs, and the value of $ C $ should be decreased.

This SVM uses Keerthi's second modification of Platt's Sequential Minimal Optimization and uses all data given for training.

Exceptions: utility::runtime_error if the maximal number of epochs is reached.
bool theplu::yat::classifier::SVM::trained ( void  ) const
Returns: true if the SVM is trained
New in yat 0.6

The documentation for this class was generated from the following file:

Generated on Sat Mar 17 2018 02:33:10 for yat by  doxygen 1.8.11