
The Amazing Efficacy of Cluster-based Feature Selection

One major impediment to widespread adoption of machine learning (ML) in investment management is its black-box nature: how would you explain to an investor why the machine makes a certain prediction? What’s the intuition behind a certain ML trading strategy? How would you explain a major drawdown? This lack of “interpretability” is not just a problem for financial ML; it is a prevalent issue in applying ML to any domain. If you don’t understand the underlying mechanisms of a predictive model, you may not trust its predictions.

Feature importance ranking goes a long way towards providing better interpretability to ML models. The feature importance score indicates how much information a feature contributes when building a supervised learning model. The importance score is calculated for each feature in the dataset, allowing the features to be ranked. The investor can therefore see the most important predictors (features) used in the predictions, and can in fact apply “feature selection” to include only those important features in the predictive model. However, as my colleague Nancy Xin Man and I demonstrated in Man and Chan 2021a, common feature importance algorithms (e.g. MDA, LIME, SHAP) can exhibit high variability in the importance rankings of features: different random seeds often produce vastly different rankings. For example, if we run MDA on some cross-validation set multiple times with different seeds, a feature ranked at the top of the list in one run may drop to the bottom in the next. This variability, of course, eliminates any interpretability benefit of feature selection. Interestingly, despite this variability in importance rankings, feature selection still generally improved out-of-sample predictive performance on the multiple datasets we tested in that paper. This may be due to the “substitution effect”: many alternative (substitute) features can be used to build predictive models with similar predictive power. (In linear regression, the substitution effect is called “collinearity”.)
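To make this seed sensitivity concrete, here is a minimal sketch using scikit-learn’s permutation_importance (one common implementation of MDA) on the benchmark breast cancer dataset discussed below. The random-forest model and single-permutation setting are illustrative assumptions, not the exact setup of our paper:

```python
# Sketch: single-repeat MDA run with two different seeds can
# produce noticeably different feature rankings.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Run MDA twice, changing only the permutation seed.
for seed in (1, 2):
    result = permutation_importance(model, X_val, y_val,
                                    n_repeats=1, random_state=seed)
    ranking = X.columns[np.argsort(result.importances_mean)[::-1]]
    print(f"seed {seed}, top 5 features: {list(ranking[:5])}")
```

With only one permutation per feature, the two seeds will often disagree about which features belong at the top of the list.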

To reduce variability (or what we called instability) in feature importance rankings and to improve interpretability, we found that LIME is generally preferable to SHAP, and definitely preferable to MDA. Another way to reduce instability is to increase the number of iterations during runs of the feature importance algorithms. In a typical implementation of MDA, every feature is permuted multiple times. But standard implementations of LIME and SHAP set the number of iterations to 1 by default, which isn’t conducive to stability. In LIME, each instance and its perturbed samples fit only one linear model by default, but we can perturb them multiple times to fit multiple linear models. In SHAP, we can permute the samples multiple times. Our experiments have shown that the instability of the top-ranked features does approximately converge to some minimum as the number of iterations increases; however, this minimum is not zero. So there remains some residual variability of the top-ranked features, which may be attributable to the substitution effect discussed above.
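The effect of the iteration count can be sketched the same way. Below we vary scikit-learn’s n_repeats parameter (how many times each feature is permuted) and count how many distinct top-ranked features emerge across ten seeds; again, the dataset, model, and parameter values are illustrative assumptions:

```python
# Sketch: more permutation repeats per feature shrink, but do not
# eliminate, the run-to-run variability of the top-ranked feature.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

for n_repeats in (1, 10, 50):
    top_features = set()
    for seed in range(10):
        result = permutation_importance(model, X_val, y_val,
                                        n_repeats=n_repeats,
                                        random_state=seed)
        top_features.add(X.columns[np.argmax(result.importances_mean)])
    print(f"n_repeats={n_repeats}: {len(top_features)} distinct "
          f"top-ranked features across 10 seeds")
```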

To further improve interpretability, we want to remove this residual variability. López de Prado (2020) described a clustering method that groups together similar features, which should then receive the same importance ranking. This promises to be a great way to remove the substitution effect. In our new paper, Man and Chan 2021b, we applied a hierarchical clustering methodology prior to MDA feature selection to the same datasets we studied previously. This method is generally called cMDA. As social media clickbait would put it, the results will (pleasantly) surprise you.
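Before showing the results, here is a minimal sketch of the cluster-then-permute idea, under stated assumptions: features are clustered hierarchically on a 1 - |correlation| distance (so substitute features land in the same cluster), and each cluster is scored by the drop in accuracy when all of its features are permuted together. The distance metric, linkage method, and threshold below are illustrative choices, not necessarily those of López de Prado (2020) or of our paper:

```python
# Sketch of cluster-based MDA (cMDA); all settings here are
# illustrative assumptions, not the paper's exact configuration.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Distance between features: 1 - |correlation|, so substitutes are close.
corr = np.corrcoef(X_train.values, rowvar=False)
dist = squareform(1.0 - np.abs(corr), checks=False)
labels = fcluster(linkage(dist, method="average"), t=0.5,
                  criterion="distance")

# Cluster importance: drop in accuracy when the whole cluster is permuted.
base_score = model.score(X_val, y_val)
rng = np.random.default_rng(0)
for c in np.unique(labels):
    cols = X.columns[labels == c]
    X_perm = X_val.copy()
    # Shuffle all features of the cluster with the same row permutation,
    # breaking their link to the label but preserving their co-movement.
    idx = rng.permutation(len(X_perm))
    X_perm[cols] = X_val[cols].values[idx]
    drop = base_score - model.score(X_perm, y_val)
    print(f"cluster {c} ({len(cols)} features): importance {drop:.3f}")
```

Because substitute features are permuted as a block, no single feature can “stand in” for another during scoring, which is what removes the substitution effect.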

For the benchmark breast cancer dataset, the clusters found, ranked by their importance scores, were:

| Topic | Cluster Importance Score | Features |
|---|---|---|
| Geometry summary | 0.360 | ‘mean radius’, ‘mean perimeter’, ‘mean area’, ‘mean compactness’, ‘mean concavity’, ‘mean concave points’, ‘radius error’, ‘perimeter error’, ‘area error’, ‘worst radius’, ‘worst perimeter’, ‘worst area’, ‘worst compactness’, ‘worst concavity’, ‘worst concave points’ |
| Texture summary | 0.174 | ‘mean texture’, ‘worst texture’ |
| Geometry error | 0.112 | ‘compactness error’, ‘concavity error’, ‘concave points error’, ‘fractal dimension error’ |
| Smoothness error | 0.092 | ‘smoothness error’ |
| Symmetry error | 0.062 | ‘symmetry error’ |
| Texture error | 0.056 | ‘texture error’ |
| Symmetry summary | 0.055 | ‘mean symmetry’, ‘worst symmetry’ |
| Fractal dimension | 0.049 | ‘mean fractal dimension’, ‘worst fractal dimension’ |
| Smoothness summary | 0.042 | ‘mean smoothness’, ‘worst smoothness’ |

Closer to our financial focus, we also applied cMDA to a public dataset with features that may be useful for predicting the S&P 500 index’s excess monthly returns. Not only do the two clusters found have clear interpretations (provided by us as a “Topic”), but they also almost never change in their top importance rankings across 100 random seeds!
