Permutation importance with already trained model? #175

@Prometheus77

Description

In FilterPermutation.R, the permutation importance algorithm performs a complete resampling (train and predict) for each permuted column. In Breiman's original paper introducing the technique for random forests, he used a pre-trained model and observed the effect of permuting each feature on the performance of that specific model. This is consistent with how the method is usually described in the literature, as well as with the scikit-learn implementation. It is also considerably less computationally expensive, since the model is fit only once.

While there are potential upsides to retraining the model for each permutation, it seems like that shouldn’t be the default behavior. I’d like to propose that the default behavior should be:

  • Build the original unpermuted resample result and calculate the performance measure
  • Shuffle each column one by one and recalculate the performance measure without retraining
  • Return the result

There could be a `retrain` argument, defaulting to `FALSE`, that the user can set to `TRUE` to refit the model for each permuted column.
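The proposed default could be sketched roughly as follows (a minimal illustration in scikit-learn style, since the issue cites that implementation; the toy data, model choice, and variable names are all illustrative, not the mlr code):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Toy data: y depends strongly on column 0, weakly on column 1,
# and not at all on column 2.
X = rng.normal(size=(200, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

# Step 1: fit once and compute the baseline performance measure.
model = RandomForestRegressor(random_state=0).fit(X, y)
baseline = mean_squared_error(y, model.predict(X))

# Step 2: shuffle each column one by one and recompute the measure
# WITHOUT retraining the model.
importance = {}
for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])
    permuted = mean_squared_error(y, model.predict(X_perm))
    # Step 3: the increase in error when column j is shuffled
    # is its permutation importance.
    importance[j] = permuted - baseline
```

Permuting the strongly predictive column should degrade performance far more than permuting the noise column, and the model is trained exactly once regardless of the number of features.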
