In machine learning, ranking is the problem of imposing an ordering on a set of instances. A classic way of addressing this problem has been through the use of decision trees as probability estimation trees (PETs). A decision tree assigns a class to each instance; a probability estimation tree goes further, estimating each instance's probability of belonging to each class. The instances can then be ranked by these probabilities.
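To make the idea concrete, here is a minimal sketch of ranking with a PET. It assumes a tree has already partitioned the training data into leaves; the leaf identifiers, counts, and instance names are all illustrative, not taken from the text above.

```python
# Each leaf's positive-class probability is the fraction of positive
# training instances that fell into it (raw frequency estimate).
# leaf id -> (positive count, total count), hypothetical values
leaf_counts = {0: (8, 10), 1: (1, 4), 2: (3, 3)}

def leaf_probability(leaf):
    pos, total = leaf_counts[leaf]
    return pos / total

# Test instances, each already routed to a leaf by the (omitted) tree.
instances = [("a", 1), ("b", 0), ("c", 2), ("d", 1)]

# Rank instances by their estimated probability of the positive class.
ranked = sorted(instances, key=lambda x: leaf_probability(x[1]), reverse=True)
print([name for name, _ in ranked])  # → ['c', 'b', 'a', 'd']
```

Note that instances "a" and "d" land in the same leaf and therefore receive identical probabilities: their relative order is arbitrary, which is exactly the kind of tie discussed below.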

However, PETs do not always produce good rankings, because the algorithms used to construct decision trees are designed to produce good classifications, not good rankings. In particular, ties between instances are frequent, and an unequal distribution of instances across leaves can yield misleading probability estimates.
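The small-leaf problem can be illustrated numerically. A leaf holding 3 of 3 positives gets a raw estimate of 1.0, outranking a leaf with 80 of 100 positives, even though the larger leaf's estimate is far better supported. The Laplace correction shown here is one standard remedy from the PET literature, offered as context rather than as the method the text goes on to propose; it pulls small-sample estimates toward 0.5.

```python
def raw_estimate(pos, total):
    # Raw leaf frequency: unreliable when total is small.
    return pos / total

def laplace_estimate(pos, total, classes=2):
    # Laplace correction: add one pseudo-count per class.
    return (pos + 1) / (total + classes)

small = (3, 3)      # tiny leaf, all positive
large = (80, 100)   # large leaf, mostly positive

print(raw_estimate(*small), raw_estimate(*large))          # → 1.0 0.8
print(laplace_estimate(*small), laplace_estimate(*large))  # → 0.8 ~0.794
```

After correction, the tiny leaf's estimate drops sharply (1.0 to 0.8) while the well-supported leaf's estimate barely moves, reflecting the difference in evidence behind the two.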

We are working on building better PETs that address these problems by taking each node's history into account and by using a Gaussian kernel to smooth the predictions.