
Tuning (tuning)

Wrappers for Tuning Parameters and Thresholds

Classes for two very useful purposes: tuning a learning algorithm’s parameters using internal validation, and tuning the threshold for classification into the positive class.

Tuning parameters

Two classes support tuning parameters: Tune1Parameter for fitting a single parameter and TuneMParameters for fitting multiple parameters at once, trying all possible combinations. When called with data and, optionally, the id of a meta attribute with weights, they find the optimal setting of the arguments using cross validation. The classes can also be used as ordinary learning algorithms - they are in fact derived from Learner.

Both classes have a common parent, TuneParameters, and a few common attributes.

class Orange.tuning.TuneParameters
data

Data table with either discrete or continuous features

weight_id

The id of the weight meta attribute

learner

The learning algorithm whose parameters are to be tuned. This can be, for instance, Orange.classification.tree.TreeLearner.

evaluate

The statistics to evaluate. The default is Orange.evaluation.scoring.CA, so the learner will be fit for the optimal classification accuracy. You can replace it with, for instance, Orange.evaluation.scoring.AUC to optimize the AUC. Statistics can return either a single value (classification accuracy), a list with a single value (this is what Orange.evaluation.scoring.CA actually does), or arbitrary objects which the compare function below must be able to compare.

folds

The number of folds used in internal cross-validation. Default is 5.

compare

The function used to compare the results. The function should accept two arguments (e.g. two classification accuracies, AUCs or whatever the result of evaluate is) and return a positive value if the first argument is better, 0 if they are equal and a negative value if the first is worse than the second. The default compare function is cmp. You don’t need to change this if evaluate is such that higher values mean a better classifier.
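For scores where lower values mean a better classifier (the Brier score, for example), a custom compare function only needs to reverse the comparison. A minimal sketch; the function name is purely illustrative:

# Returns a positive value when the first score is better, as required;
# here "better" means lower. cmp is the Python 2 built-in used by default.
def lower_is_better(score1, score2):
    return cmp(score2, score1)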

return_what

Decides what the result of tuning should be. Possible values are:

  • TuneParameters.RETURN_NONE (or 0): tuning will return nothing,
  • TuneParameters.RETURN_PARAMETERS (or 1): return the optimal value(s) of parameter(s),
  • TuneParameters.RETURN_LEARNER (or 2): return the learner set to optimal parameters,
  • TuneParameters.RETURN_CLASSIFIER (or 3): return a classifier trained with the optimal parameters on the entire data set. This is the default setting.

Regardless of this setting, the learner (given as the parameter learner) is left set to the optimal parameters.
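For instance, with RETURN_PARAMETERS the call returns the optimal parameter value(s) instead of a classifier. A minimal sketch, reusing the voting data set and the tree learner from the examples below:

import Orange

learner = Orange.classification.tree.TreeLearner()
voting = Orange.data.Table("voting")
tuner = Orange.tuning.Tune1Parameter(learner=learner,
                           parameter="min_subset",
                           values=[2, 5, 10, 20],
                           evaluate=Orange.evaluation.scoring.AUC,
                           return_what=Orange.tuning.TuneParameters.RETURN_PARAMETERS)
best = tuner(voting)  # the optimal value(s) of min_subset, not a classifier

print "Best min_subset:", best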

verbose

If 0 (default), the class doesn’t print anything. If set to 1, it will print out the optimal value found; if set to 2, it will print out all tried values and the related scores.

If the tuner returns a classifier, it behaves as a learning algorithm. As the examples below demonstrate, it can be called with data, and the result is a “trained” classifier. It can, for instance, be used in cross-validation.

Of these attributes, the only mandatory one is learner. The real tuning classes (subclasses of this class) add two more: the attributes that specify which parameter(s) to optimize and which values to try.

class Orange.tuning.Tune1Parameter

Class Orange.tuning.Tune1Parameter tunes a single parameter.

parameter

The name of the parameter (or a list of names, if the same parameter is stored at multiple places - see the examples) to be tuned.

values

A list of parameter’s values to be tried.

To show how it works, we shall tune the minimal number of examples in a leaf of a classification tree.

part of optimization-tuning1.py

import Orange

learner = Orange.classification.tree.TreeLearner()
voting = Orange.data.Table("voting")
# try each value of min_subset and keep the one with the best AUC
tuner = Orange.tuning.Tune1Parameter(learner=learner,
                           parameter="min_subset",
                           values=[1, 2, 3, 4, 5, 10, 15, 20],
                           evaluate=Orange.evaluation.scoring.AUC, verbose=2)
classifier = tuner(voting)

print "Optimal setting: ", learner.min_subset

Set up like this, the tuner, when called, sets learner.min_subset to 1, 2, 3, 4, 5, 10, 15 and 20 in turn, and measures the AUC with 5-fold cross validation for each value. It then resets learner.min_subset to the optimal value found and, since we left return_what at the default (RETURN_CLASSIFIER), constructs and returns a classifier built from the entire data set. So what we get is a classifier, but if we would also like to know what the optimal value was, we can read it from learner.min_subset.

Tuning is of course not limited to numeric parameters. You can, for instance, try to find the optimal criterion for assessing the quality of attributes by tuning parameter="measure" and trying settings like values=[Orange.feature.scoring.GainRatio(), Orange.feature.scoring.Gini()].
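A minimal sketch of such a setup, reusing the learner and the data from the example above; only the tuned parameter and its candidate values change:

import Orange

learner = Orange.classification.tree.TreeLearner()
voting = Orange.data.Table("voting")
# tune the attribute quality measure instead of a numeric parameter
tuner = Orange.tuning.Tune1Parameter(learner=learner,
                           parameter="measure",
                           values=[Orange.feature.scoring.GainRatio(),
                                   Orange.feature.scoring.Gini()],
                           evaluate=Orange.evaluation.scoring.AUC)
classifier = tuner(voting)

print "Chosen measure:", learner.measure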

Since the tuner returns a classifier and thus behaves like a learner, it can be used in cross-validation. Let us see whether tuning the tree indeed improves the AUC. We shall reuse the tuner from above, add another, untuned tree learner, and test them both.

part of optimization-tuning1.py

untuned = Orange.classification.tree.TreeLearner()
res = Orange.evaluation.testing.cross_validation([untuned, tuner], voting)
AUCs = Orange.evaluation.scoring.AUC(res)

print "Untuned tree: %5.3f" % AUCs[0]
print "Tuned tree: %5.3f" % AUCs[1]

This can be time consuming: within each of the 10 folds of the outer cross validation, the tuner runs a 5-fold internal cross validation for each of the 8 values of min_subset, that is 8 x 5 x 10 = 400 trees. It then builds one tree with the optimal setting in each outer fold (10 more), and the untuned tree learner adds another 10, so 420 trees are built in total.

Nevertheless, results are good:

Untuned tree: 0.930
Tuned tree: 0.986

class Orange.tuning.TuneMParameters

The use of Orange.tuning.TuneMParameters differs from Orange.tuning.Tune1Parameter only in the specification of the tuned parameters.

parameters

A list of two-element tuples, each containing the name of a parameter and its possible values.

For example, we can try to tune both the minimal number of instances in leaves and the splitting criterion by setting up the tuner as follows:

optimization-tuningm.py

import Orange

learner = Orange.classification.tree.TreeLearner()
voting = Orange.data.Table("voting")
tuner = Orange.tuning.TuneMParameters(learner=learner,
             parameters=[("min_subset", [2, 5, 10, 20]),
                         ("measure", [Orange.feature.scoring.GainRatio(), 
                                      Orange.feature.scoring.Gini()])],
             evaluate = Orange.evaluation.scoring.AUC)

classifier = tuner(voting)
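As with Tune1Parameter, the learner is left set to the optimal combination of values, so they can be read off its attributes after tuning; a short follow-up to the listing above:

print "Optimal min_subset:", learner.min_subset
print "Optimal measure:   ", learner.measure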

Setting Optimal Thresholds

Some models may perform well in terms of AUC, which measures the ability to distinguish between instances of the two classes, but still have low classification accuracy. The reason may lie in the threshold: in binary problems, classifiers usually classify into the more probable class, but when class distributions are highly skewed, a modified threshold can give better accuracy. Here are two classes that can help.

class Orange.tuning.ThresholdLearner(learner=None, store_curve=False, **kwds)

Orange.tuning.ThresholdLearner is a class that wraps another learner. When given data, it calls the wrapped learner to build a classifier, then uses that classifier to predict the class probabilities on the training instances. From these probabilities it computes the threshold that would give the optimal classification accuracy. Finally, it wraps the classifier and the threshold into an instance of Orange.tuning.ThresholdClassifier.

Note that the learner doesn’t perform internal cross-validation. Also, the learner doesn’t work for multivalued classes.
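The idea behind the threshold search can be sketched in a few lines of plain Python; this only illustrates the principle and is not the actual Orange implementation:

# Given the predicted probabilities of the second class and the true classes
# (0 or 1) on the training data, try each distinct probability as a candidate
# threshold and keep the one that gives the highest classification accuracy.
def best_threshold(probabilities, classes):
    best_t, best_ca = 0.5, -1.0
    for t in sorted(set(probabilities)):
        predictions = [1 if p > t else 0 for p in probabilities]
        correct = sum(1 for pred, cls in zip(predictions, classes) if pred == cls)
        ca = correct / float(len(classes))
        if ca > best_ca:
            best_t, best_ca = t, ca
    return best_t, best_ca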

Orange.tuning.ThresholdLearner has the same interface as any learner: if the constructor is given data, it returns a classifier, otherwise it returns a learner. It has two attributes.

learner

The wrapped learner, for example an instance of Orange.classification.bayes.NaiveLearner.

store_curve

If True, the resulting classifier will contain an attribute curve, with a list of tuples containing thresholds and classification accuracies at that threshold (default False).
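A short sketch of inspecting the stored curve (the bupa data set from the examples below is assumed; each element of curve is a (threshold, accuracy) tuple):

import Orange

bupa = Orange.data.Table("bupa")
thresh = Orange.tuning.ThresholdLearner(
    learner=Orange.classification.bayes.NaiveLearner(), store_curve=True)
classifier = thresh(bupa)

# classification accuracy reached at each candidate threshold
for threshold, accuracy in classifier.curve:
    print "%5.3f: CA %5.3f" % (threshold, accuracy)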

class Orange.tuning.ThresholdClassifier(classifier, threshold, **kwds)

Orange.tuning.ThresholdClassifier, used by both Orange.tuning.ThresholdLearner and Orange.tuning.ThresholdLearner_fixed, is therefore another wrapper class, containing a classifier and a threshold. When it needs to classify an instance, it calls the wrapped classifier to predict the probabilities. The instance is classified into the second class only if the probability of that class is above the threshold.

classifier

The wrapped classifier, normally the one built by the learner wrapped by ThresholdLearner, e.g. an instance of Orange.classification.bayes.NaiveClassifier.

threshold

The threshold for classification into the second class.

The two attributes can be set directly or given to the constructor as ordinary arguments.
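A short sketch of both ways, assuming a naive Bayesian classifier trained on the bupa data set as in the examples below:

import Orange

bupa = Orange.data.Table("bupa")
bayes = Orange.classification.bayes.NaiveLearner(bupa)

# threshold given to the constructor ...
tc = Orange.tuning.ThresholdClassifier(bayes, 0.3)

# ... or changed later through the attribute
tc.threshold = 0.8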

Examples

This is how you use the learner.

part of optimization-thresholding1.py

import Orange

bupa = Orange.data.Table("bupa")

learner = Orange.classification.bayes.NaiveLearner()
thresh = Orange.tuning.ThresholdLearner(learner=learner)
thresh80 = Orange.tuning.ThresholdLearner_fixed(learner=learner,
                                                      threshold=0.8)
res = Orange.evaluation.testing.cross_validation([learner, thresh, thresh80], bupa)
CAs = Orange.evaluation.scoring.CA(res)

print "W/out threshold adjustement: %5.3f" % CAs[0]
print "With adjusted thredhold: %5.3f" % CAs[1]
print "With threshold at 0.80: %5.3f" % CAs[2]

The output:

W/out threshold adjustment: 0.633
With adjusted threshold: 0.659
With threshold at 0.80: 0.449

part of optimization-thresholding2.py

import Orange

bupa = Orange.data.Table("bupa")
# split the data into 70% training and 30% testing instances
ri2 = Orange.data.sample.SubsetIndices2(bupa, 0.7)
train = bupa.select(ri2, 0)
test = bupa.select(ri2, 1)

bayes = Orange.classification.bayes.NaiveLearner(train)

thresholds = [.2, .5, .8]
models = [Orange.tuning.ThresholdClassifier(bayes, thr) for thr in thresholds]

res = Orange.evaluation.testing.test_on_data(models, test)
cm = Orange.evaluation.scoring.confusion_matrices(res)

print
for i, thr in enumerate(thresholds):
    print "%1.2f: TP %5.3f, TN %5.3f" % (thr, cm[i].TP, cm[i].TN)

The script first divides the data into training and testing subsets. It trains a naive Bayesian classifier and then wraps it into ThresholdClassifiers with thresholds of .2, .5 and .8. The three models are tested on the left-out data, and we compute the confusion matrices from the results. The printout:

0.20: TP 60.000, TN 1.000
0.50: TP 42.000, TN 24.000
0.80: TP 2.000, TN 43.000

shows how the varying threshold changes the balance between the number of true positives and negatives.

class Orange.tuning.PreprocessedLearner(preprocessor=None, learner=None)