SVM Learner

Support vector machine learner (classification)

Signals

Inputs:

  • Examples

    A table with training examples

Outputs:

  • Learner

    The support vector machine learning algorithm with settings as specified in the dialog.

  • Classifier

    Trained SVM classifier

  • Support Vectors

    A subset of data instances from the training set that were used as support vectors in the trained classifier

Description

Support vector machine (SVM) is a popular classification technique that constructs a separating hyperplane in the attribute space which maximizes the margin between instances of different classes. The technique often yields excellent predictive performance. Orange embeds a popular implementation of SVM from the LIBSVM package, and this widget provides a graphical user interface to its functionality. It also behaves like a typical Orange learner widget: on its output it presents an object that can learn, initialized with the settings specified in the widget, and, given an input data set, also a classifier that can be used to predict classes and class probabilities for new examples.
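
The widget's two outputs have direct scripting counterparts. Below is a minimal sketch, assuming the Orange 2.x scripting API (Orange.data.Table and Orange.classification.svm.SVMLearner) and the bundled iris data set:

    import Orange

    data = Orange.data.Table("iris")

    # The Learner output corresponds to an untrained learner object,
    # configured with the settings chosen in the widget.
    learner = Orange.classification.svm.SVMLearner()
    learner.name = "SVM"

    # Given training data, the learner produces the Classifier output,
    # which predicts classes for new examples.
    classifier = learner(data)
    print classifier(data[0])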

Support vector machines widget

The learner can be given a name under which it will appear in other widgets that use its output, for instance in Test Learners. The default name is simply “SVM”.

The next block of options deals with the kernel, that is, a function that transforms the attribute space to a new feature space in which the maximum-margin hyperplane is fitted, thus allowing the algorithm to create non-linear classifiers. The first kernel in the list, Linear, does not require this trick, but all the others (Polynomial, RBF and Sigmoid) do. The functions that define each kernel are shown beside their names, together with the constants involved:

  • g for the gamma constant in the kernel function (the recommended value is 1/k, where k is the number of attributes; since no training set may be given to the widget, the default is 0 and the user has to set this option manually),

  • d for the degree of the kernel (default 3), and

  • c for the constant c0 in the kernel function (default 0).
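
In a script these constants map onto learner parameters; a rough sketch, assuming the Orange 2.x names kernels.RBF, kernels.Polynomial and the parameters gamma, degree and coef0:

    from Orange.classification import svm

    # RBF kernel, exp(-g*|x-y|^2): only the gamma constant (g) applies.
    rbf = svm.SVMLearner(kernel_type=svm.kernels.RBF, gamma=0.25)

    # Polynomial kernel, (g*x*y + c)^d: gamma, degree and the constant c0 (coef0) apply.
    poly = svm.SVMLearner(kernel_type=svm.kernels.Polynomial,
                          gamma=0.25, degree=3, coef0=0)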

Other options control further aspects of the SVM learner. Model complexity (C) (the penalty parameter), Tolerance (p) and Numeric precision (eps) define the optimization function; see LIBSVM for further details. The remaining three options instruct the learner to prepare the classifier so that it estimates class probability values (Estimate class probabilities), constrain the number of support vectors which define the maximum-margin hyperplane (Limit the number of support vectors), and normalize the training and, later, the test data (Normalize data). The latter somewhat slows down the learner, but may be essential for better classification performance.
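
The same settings are exposed as learner parameters in scripts; a sketch, assuming the Orange 2.x parameter names C, p, eps, probability and normalization:

    from Orange.classification import svm

    learner = svm.SVMLearner(
        C=1.0,               # model complexity (penalty parameter)
        p=0.1,               # tolerance
        eps=1e-3,            # numeric precision of the optimizer
        probability=True,    # estimate class probabilities
        normalization=True)  # normalize training (and later test) data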

The last button in the SVM dialog is Automatic parameter search. It is enabled when the widget is given a data set, and uses LIBSVM's procedures to search for the optimal values of the learning parameters. Upon completion, the parameters in the SVM dialog box are set to the values found by the procedure.
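
A comparable search is available when scripting; a sketch, assuming Orange 2.x's SVMLearnerEasy, a wrapper around SVMLearner that tunes the parameters on the given data in the spirit of LIBSVM's easy.py/grid.py procedure:

    import Orange
    from Orange.classification import svm

    data = Orange.data.Table("iris")

    # Tune the SVM parameters on the data before training the final classifier.
    classifier = svm.SVMLearnerEasy()(data)
    print classifier(data[0])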

Examples

There are two typical uses of this widget: one uses it as a classifier, the other uses it to construct an object for learning. For the first, we split the data set into two parts (Sample and Remaining Examples). The sample was sent to SVM, which produced a Classifier that was then used in the Predictions widget to classify the data in Remaining Examples. A similar schema can be used if the data are already separated into two different files; in this case, two File widgets would be used instead of the File - Data Sampler combination.
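
The same flow can be reproduced in a script; a minimal sketch, assuming Orange 2.x's SubsetIndices2 sampler stands in for the Data Sampler widget:

    import Orange
    from Orange.classification import svm

    data = Orange.data.Table("iris")

    # Split the data roughly 70:30 into a sample and the remaining examples.
    indices = Orange.data.sample.SubsetIndices2(p0=0.7)(data)
    sample, remaining = data.select(indices, 0), data.select(indices, 1)

    # Train on the sample, then predict classes of the remaining examples.
    classifier = svm.SVMLearner()(sample)
    for instance in remaining[:5]:
        print instance.get_class(), classifier(instance)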

SVM - a schema with a classifier

The second schema shows how to use the SVM widget to construct a learner and compare it in cross-validation with the Majority and k-Nearest Neighbours learners.
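
The corresponding comparison can also be run from a script; a sketch, assuming the Orange 2.x evaluation module and the majority and kNN learners:

    import Orange

    data = Orange.data.Table("iris")

    svm_learner = Orange.classification.svm.SVMLearner()
    svm_learner.name = "SVM"
    majority = Orange.classification.majority.MajorityLearner()
    majority.name = "Majority"
    knn = Orange.classification.knn.kNNLearner()
    knn.name = "kNN"

    # Ten-fold cross-validation, scored by classification accuracy.
    learners = [svm_learner, majority, knn]
    results = Orange.evaluation.testing.cross_validation(learners, data, folds=10)
    for learner, ca in zip(learners, Orange.evaluation.scoring.CA(results)):
        print "%-10s %.3f" % (learner.name, ca)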

SVM and other learners compared by cross-validation

The following schema observes a set of support vectors in a Scatter Plot visualization.

Visualization of support vectors

For the above schema to work correctly, the channel between the SVM Learner and Scatter Plot widgets has to be set appropriately. Set it by double-clicking on the green edge between the widgets, and use the settings shown in the dialog below.

Channel setting for communication of support vectors
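
The support vectors can also be inspected directly in a script; a sketch, assuming the trained Orange 2.x classifier exposes them as a data table through its support_vectors attribute:

    import Orange
    from Orange.classification import svm

    data = Orange.data.Table("iris")
    classifier = svm.SVMLearner()(data)

    # support_vectors is assumed to hold the training instances
    # that were used as support vectors.
    print "%d of %d instances are support vectors" % (
        len(classifier.support_vectors), len(data))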