Adding and comparing learners

Once we have specified our benchmark learner, we can specify additional learners for comparison. If simplicity and transparency are goals, we will want to introduce complexity incrementally. For our high school graduation example, we might take the following approach:

  1. Incrementally add more measures to our benchmark predictor set. For example:

    (a) Benchmark

    This set is linked to current practice and allows us to see whether predictive analytics with more complex methods can offer improvement. Measures:

    • whether student had more than 15 absences in 9th grade
    • whether student had any expulsions or suspensions in 9th grade
    • 9th grade GPA

    (b) Expanded

    This set adds measures that stakeholders think are important, but the list is still limited for transparency and explainability. Measures:

    • benchmark measures, plus…
    • number of absences in 9th grade
    • ELA score on 9th grade standardized test
    • math score on 9th grade standardized test
    • whether student received free or reduced-price lunch

    (c) Kitchen sink

    This set moves beyond stakeholder input and allows us to see whether adding lots of measures adds up to worthwhile improvements. Measures:

    • all measures available in administrative records after completion of 9th grade, except demographics¹

    (d) Kitchen sink plus

    This set allows us to see whether a tricky data integration would be worth it. Measures:

    • all measures in the kitchen sink predictor set, plus measures available in an external data set that could be integrated with administrative records, if their inclusion leads to a substantial improvement
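
To make the incremental structure concrete, here is a minimal sketch in Python (a language chosen here for illustration, not specified by this workflow). Every column name below is a hypothetical placeholder for a field in the administrative records, and the kitchen-sink sets are left as labeled placeholders because their full contents depend on the data dictionary.

```python
# Hypothetical column names standing in for fields in the administrative records.
benchmark = [
    "absences_gt_15_9th",   # more than 15 absences in 9th grade (yes/no)
    "any_suspension_9th",   # any expulsion or suspension in 9th grade (yes/no)
    "gpa_9th",              # 9th grade GPA
]

expanded = benchmark + [
    "n_absences_9th",       # number of absences in 9th grade
    "ela_score_9th",        # ELA score on 9th grade standardized test
    "math_score_9th",       # math score on 9th grade standardized test
    "frpl",                 # free or reduced-price lunch (yes/no)
]

# In practice these would be pulled from the data dictionary rather than listed by hand.
kitchen_sink = expanded + ["..."]           # all other non-demographic admin fields
kitchen_sink_plus = kitchen_sink + ["..."]  # plus fields from the external data set

predictor_sets = {
    "benchmark": benchmark,
    "expanded": expanded,
    "kitchen_sink": kitchen_sink,
    "kitchen_sink_plus": kitchen_sink_plus,
}
```
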
  2. Combine the predictor sets with different modeling approaches. Here we consider whether different machine learning algorithms improve performance. We want to use knowledge about how the algorithms work when matching them with predictor sets. For example, some machine learning algorithms are better suited for large numbers of predictors than others. In our example, we might specify as follows:
| Predictor Set     | Modeling Approach                             |
|-------------------|-----------------------------------------------|
| Benchmark         | logistic regression                           |
| Expanded          | logistic regression, random forest            |
| Kitchen sink      | random forest, support vector machines, Lasso |
| Kitchen sink plus | random forest, support vector machines, Lasso |

This results in a total of nine learners because we have nine combinations of predictor sets and modeling approaches. Ultimately, we will have nine predictive models, one produced by each of the nine learners.
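
A minimal sketch of this step, assuming Python with scikit-learn as the toolkit (an assumption, not a choice made by this workflow) and the hypothetical predictor_sets dictionary from the earlier sketch: each learner is simply a pairing of a predictor set with an algorithm. Because the outcome is binary, the Lasso is represented here by L1-penalized logistic regression.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Which modeling approaches to pair with each predictor set (from the table above).
pairings = {
    "benchmark":         ["logit"],
    "expanded":          ["logit", "random_forest"],
    "kitchen_sink":      ["random_forest", "svm", "lasso"],
    "kitchen_sink_plus": ["random_forest", "svm", "lasso"],
}

def make_model(name):
    """Return an (untuned) scikit-learn estimator for a given modeling approach."""
    if name == "logit":
        return LogisticRegression(max_iter=1000)
    if name == "random_forest":
        return RandomForestClassifier()
    if name == "svm":
        return make_pipeline(StandardScaler(), SVC(probability=True))
    if name == "lasso":
        # L1-penalized logistic regression plays the role of the Lasso for a binary outcome.
        return make_pipeline(StandardScaler(),
                             LogisticRegression(penalty="l1", solver="saga", max_iter=5000))
    raise ValueError(name)

# A "learner" here is a (predictor set, modeling approach) pair: nine in total.
learners = {
    (set_name, model_name): (predictor_sets[set_name], make_model(model_name))
    for set_name, model_names in pairings.items()
    for model_name in model_names
}
assert len(learners) == 9
```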

Note: Machine learning algorithms typically require specifications for how they are implemented. These are called “tuning parameters.” For example, the random forest algorithm has tuning parameters that include the number of decision trees in the forest, the maximum depth of each tree, and several other options. Different values of the tuning parameters have different implications for the performance of the algorithm. While the analyst can specify these tuning parameters directly, it is typically best to use a data-driven approach. We will dive into this more later. The point to take away here is that we will “tune” each machine learning algorithm for each predictor set, so when we ultimately compare learners, we are comparing “tuned” versions of them.
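
As one hedged illustration of a data-driven approach, the sketch below uses scikit-learn’s cross-validated grid search to tune the random forest. The grid values are placeholders for illustration, not recommendations, and the commented fit call assumes hypothetical training objects.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Illustrative grid of tuning parameters; the values are placeholders.
param_grid = {
    "n_estimators": [200, 500],       # number of trees in the forest
    "max_depth": [5, 10, None],       # maximum depth of each tree
    "min_samples_leaf": [1, 10, 50],  # minimum observations in a terminal node
}

# 5-fold cross-validation picks the combination with the best held-out AUC,
# so the "tuned" random forest we carry forward is chosen by the data.
tuned_rf = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid,
    scoring="roc_auc",
    cv=5,
)
# tuned_rf.fit(X_train[expanded], y_train)  # fit on a given predictor set (hypothetical objects)
# tuned_rf.best_params_                     # the selected tuning parameters
```
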

  3. Compare the learners in terms of predictive performance and bias: This is a big topic, which we will divide up across many of the following sections. Importantly, as we will turn to next, we want to compare learners’ performance on new, unseen data. This evaluation matters because the goal of predictive analytics is to develop models that can make accurate predictions for new people that the model hasn’t encountered before.
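
As a minimal, hedged preview of that idea: hold out data that no learner sees during fitting or tuning, then compare held-out performance. The DataFrame df, the outcome column graduated, and the learners dictionary are hypothetical objects carried over from the earlier sketches; in practice each learner would first be tuned as described in the note above, and later sections develop more careful evaluation than a single split.

```python
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hold out a test set the learners never see during fitting or tuning.
# `df` and the outcome column `graduated` are hypothetical placeholders.
train, test = train_test_split(df, test_size=0.25, random_state=0,
                               stratify=df["graduated"])

results = {}
for (set_name, model_name), (columns, model) in learners.items():
    model.fit(train[columns], train["graduated"])
    scores = model.predict_proba(test[columns])[:, 1]
    # AUC on held-out students approximates performance on new, unseen students.
    results[(set_name, model_name)] = roc_auc_score(test["graduated"], scores)

for learner, auc in sorted(results.items(), key=lambda kv: -kv[1]):
    print(learner, round(auc, 3))
```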

Footnotes

  1. Demographic information like race is excluded for ethical reasons. This decision will be discussed in more depth when we discuss algorithmic bias.