The Data

This section describes how to load data in Orange. We also show how to explore the data, compute some basic statistics, and sample the data.

Data Input

Orange can read files in its native tab-delimited format, and can load data from any of the major standard spreadsheet file types, like CSV and Excel. The native format starts with a header row with feature (column) names. The second header row gives the attribute type, which can be continuous, discrete, time, or string. The third header line contains meta information to identify dependent features (class), irrelevant features (ignore) or meta features (meta). A more detailed specification is available in Loading and saving data (io). Here are the first few lines from a data set

age       prescription  astigmatic    tear_rate     lenses
discrete  discrete      discrete      discrete      discrete
young     myope         no            reduced       none
young     myope         no            normal        soft
young     myope         yes           reduced       none
young     myope         yes           normal        hard
young     hypermetrope  no            reduced       none

Values are tab-delimited. This data set has four attributes (age of the patient, spectacle prescription, astigmatism, and tear production rate) and an associated three-valued dependent variable encoding the lens prescription for the patient (hard contact lenses, soft contact lenses, no lenses). Feature descriptions can use a single letter only, so the header of this data set could also read:

age       prescription  astigmatic    tear_rate     lenses
d         d             d             d             d

The rest of the table gives the data. Note that there are 5 instances in our table above. For the full data set, check it out or download it to a target directory. You can also skip this step, as Orange comes preloaded with several demo data sets, lenses being one of them. Now, open a Python shell, import Orange and load the data:

>>> import Orange
>>> data = Orange.data.Table("lenses")

Note that no suffix is needed in the file name, as Orange checks whether any files in the current directory are of a readable type. The call to Orange.data.Table creates an object called data that holds your data set and information about the lenses domain:

>>> data.domain.attributes
(DiscreteVariable('age', values=['pre-presbyopic', 'presbyopic', 'young']),
 DiscreteVariable('prescription', values=['hypermetrope', 'myope']),
 DiscreteVariable('astigmatic', values=['no', 'yes']),
 DiscreteVariable('tear_rate', values=['normal', 'reduced']))
>>> data.domain.class_var
DiscreteVariable('lenses', values=['hard', 'none', 'soft'])
>>> for d in data[:3]:
...     print(d)
[young, myope, no, reduced | none]
[young, myope, no, normal | soft]
[young, myope, yes, reduced | none]

The following script wraps up everything we have done so far and lists the data instances with soft prescription:

import Orange
data = Orange.data.Table("lenses")
print("Attributes:", ", ".join( for x in data.domain.attributes))
print("Data instances:", len(data))

target = "soft"
print("Data instances with %s prescriptions:" % target)
atts = data.domain.attributes
for d in data:
    if d.get_class() == target:
        print(" ".join(["%14s" % str(d[a]) for a in atts]))

Note that data is an object that holds both the data and information on the domain. We showed above how to access attribute and class names, but there is much more information there, including feature types, sets of values for categorical features, and more.
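The three-row header described earlier can be read with the standard library alone. The following is a toy sketch of that idea (an illustration only, not Orange's actual loader), parsing an in-memory tab-delimited sample:

```python
import csv
import io

# A small tab-delimited sample in the native three-header-row layout:
# feature names, then types (d = discrete), then flags (class marker).
raw = ("age\tprescription\tlenses\n"
       "d\td\td\n"
       "\t\tclass\n"
       "young\tmyope\tnone\n"
       "young\thypermetrope\tsoft\n")

reader = csv.reader(io.StringIO(raw), delimiter="\t")
names = next(reader)   # first header row: feature names
types = next(reader)   # second header row: attribute types
flags = next(reader)   # third header row: the last column is marked as class
rows = list(reader)    # the remaining rows hold the data instances

print(names, types, flags, len(rows))
```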

Saving the Data

Data objects can be saved to a file:

>>>"lenses.tab")

This time, we have to provide the file extension to specify the output format. The extension for Orange's native data format is ".tab". The following code saves only the data items with myope prescription:

import Orange
data = Orange.data.Table("lenses")
myope_subset = [d for d in data if d["prescription"] == "myope"]
new_data = Orange.data.Table(data.domain, myope_subset)"lenses-myope.tab")

We have created a new data table by passing the information on the structure of the data (data.domain) and a subset of data instances.

Exploration of the Data Domain

The data table stores information on data instances as well as on the data domain. The domain holds the names of attributes and optional classes, their types, and, for categorical features, the value names. The following code:

import Orange

data = Orange.data.Table("imports-85.tab")
n = len(data.domain.attributes)
n_cont = sum(1 for a in data.domain.attributes if a.is_continuous)
n_disc = sum(1 for a in data.domain.attributes if a.is_discrete)
print("%d attributes: %d continuous, %d discrete" % (n, n_cont, n_disc))

print("First three attributes:",
      ", ".join(data.domain.attributes[i].name for i in range(3)))
print("Class:",

outputs:

25 attributes: 14 continuous, 11 discrete
First three attributes: symboling, normalized-losses, make
Class: price

Orange’s objects often behave like Python lists and dictionaries, and can be indexed or accessed through feature names:

print("First attribute:", data.domain[0].name)
name = "fuel-type"
print("Values of attribute '%s': %s" %
      (name, ", ".join(data.domain[name].values)))

The output of the above code is:

First attribute: symboling
Values of attribute 'fuel-type': diesel, gas

Data Instances

The data table stores data instances (or examples). These can be indexed or traversed like any Python list. Data instances can be considered as vectors, accessed through element index or through feature name.

import Orange

data = Orange.data.Table("iris")
print("First three data instances:")
for d in data[:3]:
    print(d)

print("25-th data instance:")
print(data[24])

name = "sepal width"
print("Value of '%s' for the first instance:" % name, data[0][name])
print("The 3rd value of the 25th data instance:", data[24][2])

The script above displays the following output:

First three data instances:
[5.100, 3.500, 1.400, 0.200 | Iris-setosa]
[4.900, 3.000, 1.400, 0.200 | Iris-setosa]
[4.700, 3.200, 1.300, 0.200 | Iris-setosa]
25-th data instance:
[4.800, 3.400, 1.900, 0.200 | Iris-setosa]
Value of 'sepal width' for the first instance: 3.500
The 3rd value of the 25th data instance: 1.900

The iris data set we have used above has four continuous attributes. Here's a script that computes their mean:

import Orange

average = lambda x: sum(x)/len(x)

data = Orange.data.Table("iris")
print("%-15s %s" % ("Feature", "Mean"))
for x in data.domain.attributes:
    print("%-15s %.2f" % (, average([d[x] for d in data])))

The script above also illustrates indexing of data instances with objects that store features; in d[x], the variable x is an Orange object. Here's the output:

Feature         Mean
sepal length    5.84
sepal width     3.05
petal length    3.76
petal width     1.20

A slightly more complicated, but more interesting, script computes the per-class averages:

import Orange

average = lambda xs: sum(xs)/float(len(xs))

data = Orange.data.Table("iris")
targets = data.domain.class_var.values
print("%-15s %s" % ("Feature", " ".join("%15s" % c for c in targets)))
for a in data.domain.attributes:
    dist = ["%15.2f" % average([d[a] for d in data if d.get_class() == c])
            for c in targets]
    print("%-15s" %, " ".join(dist))

Of the four features, petal width and length look quite discriminative for the type of iris:

Feature             Iris-setosa Iris-versicolor  Iris-virginica
sepal length               5.01            5.94            6.59
sepal width                3.42            2.77            2.97
petal length               1.46            4.26            5.55
petal width                0.24            1.33            2.03

Finally, here is a quick code that computes the class distribution for another data set:

import Orange
from collections import Counter

data = Orange.data.Table("lenses")
print(Counter(str(d.get_class()) for d in data))
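The same Counter can report relative frequencies as well; a minimal sketch on a hand-made list of class labels standing in for the generator above:

```python
from collections import Counter

labels = ["none", "soft", "none", "hard", "none"]  # stand-in class labels
counts = Counter(labels)
total = sum(counts.values())
# Normalize counts into proportions per class.
proportions = {c: n / total for c, n in counts.items()}
print(proportions)  # {'none': 0.6, 'soft': 0.2, 'hard': 0.2}
```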

Orange Data Sets and NumPy

Orange data sets are actually wrapped NumPy arrays. Wrapping is performed to retain the information about the feature names and values, and NumPy arrays are used for speed and compatibility with different machine learning toolboxes, like scikit-learn, on which Orange relies. Let us display the values of these arrays for the first three data instances of the iris data set:

>>> data = Orange.data.Table("iris")
>>> data.X[:3]
array([[ 5.1,  3.5,  1.4,  0.2],
       [ 4.9,  3. ,  1.4,  0.2],
       [ 4.7,  3.2,  1.3,  0.2]])
>>> data.Y[:3]
array([ 0.,  0.,  0.])

Notice that we access the arrays for attributes and class separately, using data.X and data.Y. Average values of attributes can then be computed efficiently by:

>>> import numpy as np
>>> np.mean(data.X, axis=0)
array([ 5.84333333,  3.054     ,  3.75866667,  1.19866667])
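The per-class averages computed earlier with list comprehensions can also be obtained directly from the arrays. A sketch with plain NumPy, using small hand-made X and Y in place of the iris arrays (class labels in Y are numeric codes, as above):

```python
import numpy as np

# Toy stand-ins for data.X and data.Y: two attributes, two classes (0 and 1).
X = np.array([[5.1, 3.5],
              [4.9, 3.0],
              [6.3, 3.3],
              [5.8, 2.7]])
Y = np.array([0., 0., 1., 1.])

for c in np.unique(Y):
    mask = Y == c                        # boolean row selector for class c
    print(int(c), X[mask].mean(axis=0))  # per-attribute mean within the class
```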

We can also construct a (classless) data set from a numpy array:

>>> X = np.array([[1,2], [4,5]])
>>> data = Orange.data.Table(X)
>>> data.domain
[Feature 1, Feature 2]

If we want to provide meaningful names for the attributes, we need to construct an appropriate data domain:

>>> domain = Orange.data.Domain([Orange.data.ContinuousVariable("length"),
...                              Orange.data.ContinuousVariable("width")])
>>> data = Orange.data.Table(domain, X)
>>> data.domain
[length, width]

Here is another example, this time constructing a data set that includes a numerical class and attributes of different types:

import numpy as np
import Orange

size = Orange.data.DiscreteVariable("size", ["small", "big"])
height = Orange.data.ContinuousVariable("height")
shape = Orange.data.DiscreteVariable("shape", ["circle", "square", "oval"])
speed = Orange.data.ContinuousVariable("speed")

domain = Orange.data.Domain([size, height, shape], speed)

X = np.array([[1, 3.4, 0], [0, 2.7, 2], [1, 1.4, 1]])
Y = np.array([42.0, 52.2, 13.4])

data = Orange.data.Table(domain, X, Y)
print(data)

Running this script yields:

[[big, 3.400, circle | 42.000],
 [small, 2.700, oval | 52.200],
 [big, 1.400, square | 13.400]]
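Note that in X the discrete attributes are stored as indices into each variable's list of values (0 for "small", 1 for "big", and so on). A plain-Python sketch of decoding a stored row back into names, assuming the value lists declared in the script above:

```python
# Value lists as declared above; continuous columns are marked with None.
value_lists = [["small", "big"], None, ["circle", "square", "oval"]]

def decode(row):
    """Map stored codes back to value names, leaving numbers as-is."""
    return [values[int(v)] if values is not None else v
            for v, values in zip(row, value_lists)]

print(decode([1, 3.4, 0]))  # ['big', 3.4, 'circle']
```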

Meta Attributes

Often, we wish to include descriptive fields in the data that will not be used in any computation (distance estimation, modeling), but serve for identification or carry additional information. These are called meta attributes, and are marked with meta in the third header row:

name	gender	math	history	biology
s	d	c	c	c
meta	meta
jane	f	10	7	8
bill	m	7	6	10
ian	m	8	10	10

Values of meta attributes and of all other (non-meta) attributes are treated similarly in Orange, but stored in separate NumPy arrays:

>>> data = Orange.data.Table("student-grades")
>>> data[0]["name"]
Value('name', jane)
>>> data[0]["biology"]
Value('biology', 8.000)
>>> for d in data:
...     print("{}/{}: {}".format(d["name"], d["gender"], d["math"]))
jane/f: 10.000
bill/m: 7.000
ian/m: 8.000
>>> data.X
array([[ 10.,   7.,   8.],
       [  7.,   6.,  10.],
       [  8.,  10.,  10.]])
>>> data.metas
array([['jane', 0.0],
       ['bill', 1.0],
       ['ian', 1.0]], dtype=object)

Meta attributes may be passed to Orange.data.Table after providing the arrays for attribute and class values:

.. literalinclude:: code/

The script outputs:

[[2.200, 1625.000 | no] {houston, 10},
 [0.300, 163.000 | yes] {ljubljana, -1}]

To construct a classless domain we could pass None for the class values.
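The same row-per-instance alignment holds across all three arrays. A NumPy-only sketch of keeping attribute values, class codes, and meta fields in step (illustrative; the names and numbers mirror the output above, not Orange's internals):

```python
import numpy as np

X = np.array([[2.2, 1625.0], [0.3, 163.0]])          # attribute values
Y = np.array([0., 1.])                               # class codes (e.g. no/yes)
metas = np.array([["houston", 10],
                  ["ljubljana", -1]], dtype=object)  # descriptive fields

# Row i of each array describes the same instance, so they can be
# traversed together:
for x_row, y_val, m_row in zip(X, Y, metas):
    print(m_row[0], x_row, y_val)
```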

Missing Values

Consider the following exploration of the data set on votes of the US senate:

>>> import numpy as np
>>> data = Orange.data.Table("voting.tab")
>>> data[2]
[?, y, y, ?, y, ... | democrat]
>>> np.isnan(data[2][0])
True
>>> np.isnan(data[2][1])
False

The particular data instance includes missing data (represented with '?') for the first and the fourth attribute. In the original data file, missing values are, by default, represented with a blank space. We can now examine each attribute and report the proportion of data instances for which this feature was undefined:

import numpy as np
import Orange

data = Orange.data.Table("voting.tab")
for x in data.domain.attributes:
    n_miss = sum(1 for d in data if np.isnan(d[x]))
    print("%4.1f%% %s" % (100.*n_miss/len(data),

First three lines of the output of this script are:

 2.8% handicapped-infants
11.0% water-project-cost-sharing
 2.5% adoption-of-the-budget-resolution

A single-liner that reports the number of data instances with at least one missing value is:

>>> sum(any(np.isnan(d[x]) for x in data.domain.attributes) for d in data)
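The same mask-and-sum idea works per row; a NumPy sketch on a small hand-made array, with NaN standing in for '?':

```python
import numpy as np

# Toy stand-in for data.X with missing entries encoded as NaN.
X = np.array([[np.nan, 1.0, 0.0],
              [1.0, 1.0, 1.0],
              [np.nan, np.nan, 0.0]])

missing_per_row = np.isnan(X).sum(axis=1)      # missing values per instance
rows_with_missing = int((missing_per_row > 0).sum())
print(missing_per_row.tolist())   # [1, 0, 2]
print(rows_with_missing)          # 2
```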

Data Selection and Sampling

Besides a file name, Orange.data.Table can accept a data domain and a list of data items, and return a new data set. This is useful for any data subsetting:

data = Orange.data.Table("iris.tab")
print("Data set instances:", len(data))
subset = Orange.data.Table(data.domain,
                           [d for d in data if d["petal length"] > 3.0])
print("Subset size:", len(subset))

The code outputs:

Data set instances: 150
Subset size: 99

and inherits the data description (domain) from the original data set. Changing the domain requires setting up a new domain descriptor. This feature is useful for any kind of feature selection:

data = Orange.data.Table("iris.tab")
new_domain = Orange.data.Domain(list(data.domain.attributes[:2]),
                                data.domain.class_var)
new_data = Orange.data.Table(new_domain, data)


We could also construct a random sample of the data set:

>>> import random
>>> sample = Orange.data.Table(data.domain, random.sample(data, 3))
>>> sample
[[6.000, 2.200, 4.000, 1.000 | Iris-versicolor],
 [4.800, 3.100, 1.600, 0.200 | Iris-setosa],
 [6.300, 3.400, 5.600, 2.400 | Iris-virginica]]

or randomly sample the attributes:

>>> atts = random.sample(data.domain.attributes, 2)
>>> domain = Orange.data.Domain(atts, data.domain.class_var)
>>> new_data = Orange.data.Table(domain, data)
>>> new_data[0]
[5.100, 1.400 | Iris-setosa]