
SVM

SVM involves three phases: a training phase, a cross-validation phase, and a testing phase. The training phase learns a model by which to classify new test examples. The cross-validation phase chooses the classifier parameters that result in the highest accuracy.[1] The test phase applies the model with the chosen parameters to classify the test examples. The 150 images of our dataset were divided into 60% training examples, 20% cross-validation examples, and 20% test examples.
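The 60/20/20 split described above can be sketched as follows. This is an illustrative example, not the authors' code: the synthetic feature matrix, labels, and use of scikit-learn's `train_test_split` are all assumptions.

```python
# Hypothetical sketch: splitting 150 examples into 60% training,
# 20% cross-validation, and 20% test sets (synthetic data).
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 12))     # 150 examples, 12 features (as in the text)
y = rng.integers(0, 5, size=150)   # 5 classes, labeled 0-4

# First carve off 60% for training, then split the remainder in half.
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, train_size=0.6, random_state=0)
X_cv, X_test, y_cv, y_test = train_test_split(
    X_rest, y_rest, test_size=0.5, random_state=0)

print(len(X_train), len(X_cv), len(X_test))  # 90 30 30
```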

Training phase

The training phase takes two inputs: a feature matrix containing the values of each of the 12 features for each training example, and a label vector containing an integer representing which of the 5 classes each training example belongs to. Each training example exists at a particular location in the 12-dimensional “feature space.” The algorithm learns a model of boundaries that best separates the 5 classes of training examples in this feature space. The boundaries that are selected are the ones with the largest margin between training examples of differing classes. An example of a boundary learned for a two-dimensional feature space is shown in the figure below.
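The two inputs described above — a feature matrix and an integer label vector — map directly onto a standard SVM training call. A minimal sketch using scikit-learn's `SVC` (an assumption; the original implementation is not specified) with synthetic data:

```python
# Sketch of the training phase: fit an SVM on a feature matrix and
# label vector (synthetic stand-ins for the real 12 features).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Feature matrix: one row per training example, one column per feature.
X_train = rng.normal(size=(90, 12))
# Label vector: an integer class (0-4) for each training example.
y_train = rng.integers(0, 5, size=90)

model = SVC(kernel="rbf")      # learns maximum-margin boundaries
model.fit(X_train, y_train)    # training examples live in 12-D feature space
print(model.n_features_in_)    # 12
```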


Figure 1. SVM classifier. [2]

Cross-validation phase

SVM has two parameters, C and gamma, that control how smooth the separation boundaries are. C and gamma must be chosen to best prevent overfitting and underfitting of the model. In the cross-validation phase, training of the model is repeated for different combinations of C and gamma. The model is then applied to a cross-validation data set and the classification accuracy of each model is calculated. The values of C and gamma that result in the highest accuracy are selected.
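The search over C and gamma described above can be sketched as a simple grid search: retrain for each combination, score each model on the held-out cross-validation set, and keep the best pair. The grid values and synthetic data here are illustrative assumptions.

```python
# Sketch of the cross-validation phase: pick the (C, gamma) pair
# with the highest accuracy on a held-out cross-validation set.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X_train = rng.normal(size=(90, 12))
y_train = rng.integers(0, 5, size=90)
X_cv = rng.normal(size=(30, 12))
y_cv = rng.integers(0, 5, size=30)

best_C, best_gamma, best_acc = None, None, -1.0
for C in [0.1, 1, 10, 100]:            # illustrative grid values
    for gamma in [0.001, 0.01, 0.1, 1]:
        model = SVC(C=C, gamma=gamma).fit(X_train, y_train)
        acc = model.score(X_cv, y_cv)  # accuracy on cross-validation set
        if acc > best_acc:
            best_C, best_gamma, best_acc = C, gamma, acc

print(best_C, best_gamma, best_acc)
```

In practice scikit-learn's `GridSearchCV` automates this loop, but the explicit version shows the phase as the text describes it.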

Test phase

In the test phase, the SVM algorithm uses the boundary learned in the training phase to classify new test examples. The input to the test phase is the model from the training phase and the feature matrix for the test examples. The output of the test phase is the predicted class for each test example. The predicted values of the test examples can be compared to the actual values to calculate accuracy of the classifier.
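The test phase as described — model in, predicted classes out, then a comparison against the true labels — can be sketched like this (synthetic data and scikit-learn again assumed):

```python
# Sketch of the test phase: classify unseen examples with the trained
# model and compare predictions to actual labels to get accuracy.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X_train = rng.normal(size=(90, 12))
y_train = rng.integers(0, 5, size=90)
X_test = rng.normal(size=(30, 12))
y_test = rng.integers(0, 5, size=30)

model = SVC(C=1, gamma=0.1).fit(X_train, y_train)
y_pred = model.predict(X_test)        # predicted class per test example
accuracy = np.mean(y_pred == y_test)  # fraction classified correctly
print(accuracy)
```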

Multiclass classification methods

Two methods of multiclass SVM classification were implemented: 1 vs 1 and 1 vs all.

In the training phase of 1 vs 1 classification, a binary classifier is trained for each pair of classes. For example, one of the classifiers would treat all Class 1 examples as positive examples and all Class 2 examples as negative examples. During the test phase, each of these classifiers is applied to the test data. For each test example, the class that receives the greatest number of positive classifications is chosen as the predicted class.
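With 5 classes, the pairwise scheme above trains 5 × 4 / 2 = 10 binary classifiers. A sketch using scikit-learn's `OneVsOneClassifier` wrapper (an assumption; the chapter does not name an implementation), which handles the pairing and the vote:

```python
# Sketch of 1-vs-1 multiclass SVM: one binary classifier per pair
# of classes; prediction goes to the class with the most votes.
import numpy as np
from sklearn.multiclass import OneVsOneClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(4)
X = rng.normal(size=(90, 12))
y = rng.integers(0, 5, size=90)   # 5 classes

ovo = OneVsOneClassifier(SVC()).fit(X, y)
print(len(ovo.estimators_))       # 10 pairwise classifiers for 5 classes
```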

In the training phase of 1 vs all classification, a single classifier is trained for each of the classes. For example, one of the classifiers would treat all Class 1 examples as positive examples and all other examples as negative examples. During the test phase, the classifiers are applied to the test data. Ideally, exactly one classifier returns a positive classification for each test example, and that class is chosen as the predicted class for that example.
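The 1 vs all scheme trains one classifier per class, so 5 classes give 5 classifiers. A sketch using scikit-learn's `OneVsRestClassifier` (again an assumption about tooling; in practice it breaks ties between classifiers by comparing decision-function scores rather than requiring exactly one positive vote):

```python
# Sketch of 1-vs-all multiclass SVM: one binary classifier per class,
# each separating its class from all the others.
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X = rng.normal(size=(90, 12))
y = rng.integers(0, 5, size=90)   # 5 classes

ovr = OneVsRestClassifier(SVC()).fit(X, y)
print(len(ovr.estimators_))       # 5 classifiers, one per class
y_pred = ovr.predict(X[:3])       # predicted class for 3 examples
```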





Source:  OpenStax, Automatic white blood cell classification using svm and neural networks. OpenStax CNX. Dec 16, 2015 Download for free at http://legacy.cnx.org/content/col11924/1.5
