Distilling a Random Forest to a single DecisionTree

Updated: Jul 29, 2021


A while back, a HackerNews thread discussed ways to distill knowledge from a complex, nearly black-box tree ensemble (a RandomForest with many sub-trees was the running example).


There are several reasons you might want to do this, but a key one is model explainability: a way to understand how the complex model behaves so you can draw conclusions, improve it, or guard against its failures.


One comment really caught my eye:

An alternative is to re-label data with the ensemble’s outputs and then learn a decision tree over that. (source)

This post is my attempt at testing this strategy out.
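The quoted strategy can be sketched in a few lines with scikit-learn. Everything below (the synthetic dataset, the hyperparameters, the `max_depth=5` choice for the student tree) is an illustrative assumption on my part, not taken from the original thread:

```python
# Sketch of the relabel-and-refit strategy, assuming scikit-learn.
# Dataset and hyperparameters are illustrative, not from the thread.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1. Fit the complex ensemble on the true labels.
forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest.fit(X_train, y_train)

# 2. Re-label the training data with the ensemble's predictions.
forest_labels = forest.predict(X_train)

# 3. Fit a single shallow tree on the ensemble's labels, not the true ones.
student = DecisionTreeClassifier(max_depth=5, random_state=0)
student.fit(X_train, forest_labels)

# Fidelity: how often the student agrees with the forest on held-out data.
fidelity = accuracy_score(forest.predict(X_test), student.predict(X_test))
print(f"student/forest agreement: {fidelity:.3f}")
```

The interesting metric here is fidelity (agreement between student and forest) rather than raw accuracy: a high-fidelity shallow tree is a readable approximation of what the ensemble actually does.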

