Random Forest Classification


Random forest learning algorithm.



Inputs

  • Data

    A data set.

  • Preprocessor

    Preprocessed data.

Outputs

  • Learner

    A random forest learning algorithm with settings as specified in the dialog.

  • Random Forest Classifier

    A trained classifier.


Random forest is a classification technique proposed by Breiman (2001). When given a set of class-labeled data, random forest builds a set of classification trees. Each tree is developed from a bootstrap sample of the training data. When developing individual trees, a random subset of attributes is drawn (hence the term “random”), from which the best attribute for the split is selected. Classification is based on the majority vote of the individually developed tree classifiers in the forest.
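The scheme described above (bootstrap samples, random attribute subsets, majority voting) can be sketched with scikit-learn, which follows the same construction; the data set and parameter values here are chosen for illustration only:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)

# Each tree is fit on a bootstrap sample of the training data; at each
# split only a random subset of attributes (square root of their number,
# by default) is considered when choosing the best attribute.
forest = RandomForestClassifier(
    n_estimators=10, max_features="sqrt", bootstrap=True, random_state=0
)
forest.fit(X, y)

# Prediction is the majority vote of the individual trees in the forest.
print(forest.predict(X[:1]))
```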

  1. Specify the name of the learner or classifier. The default name is “Random Forest Classification”.
  2. Specify how many classification trees will be included in the forest (Number of trees in the forest), and how many attributes will be randomly drawn for consideration at each node. If the latter is not specified (option Consider a number... left unchecked), this number is equal to the square root of the number of attributes in the data.
  3. Breiman’s original proposal is to grow the trees without any pre-pruning, but since pre-pruning often works well and is faster, the user can set the depth to which the trees will be grown (Limit depth of individual trees). Another pre-pruning option is to set the smallest subset that can be split (Do not split subsets smaller than).
  4. Produce a report.
  5. Click Apply to communicate the changes to other widgets. Alternatively, tick the box on the left side of the Apply button and changes will be communicated automatically.
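The dialog options above map directly onto learner parameters. A minimal sketch with hypothetical values (10 trees, depth limit 3, minimum subset size 5), using scikit-learn naming for illustration:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)

forest = RandomForestClassifier(
    n_estimators=10,      # Number of trees in the forest
    max_features="sqrt",  # attributes considered at each node (the default)
    max_depth=3,          # Limit depth of individual trees
    min_samples_split=5,  # Do not split subsets smaller than 5
    random_state=0,
).fit(X, y)

# Every tree in the forest respects the pre-pruning depth limit.
print(max(tree.get_depth() for tree in forest.estimators_))
```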


The example below shows a comparison schema of a random forest and a tree learner on a specific data set.
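The same comparison can be sketched in code: a 5-fold cross-validation of a single tree learner against a forest, here on the iris data set (chosen for illustration) and with scikit-learn standing in for the widgets:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Mean cross-validated accuracy of a single classification tree...
tree_acc = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=5).mean()

# ...versus a forest of 50 trees on the same folds.
forest_acc = cross_val_score(
    RandomForestClassifier(n_estimators=50, random_state=0), X, y, cv=5
).mean()

print(f"tree: {tree_acc:.3f}  forest: {forest_acc:.3f}")
```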



Breiman, L. (2001). Random Forests. Machine Learning, 45(1), 5-32.