Ranked Features by Importance (SHAP Bar Plot)
What it is: A global importance summary where each bar equals mean(|SHAP|) for a feature across all samples; larger bars = larger average contribution (in magnitude) to the model’s predictions.
How to use it:
- Read top→bottom: features at the top drive predictions the most on average.
- Compare bar lengths, not signs: this plot uses absolute values, so it ranks *influence*, not direction.
- Use together with beeswarm/dependence to learn *how* a top feature pushes predictions up or down.
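The bar lengths above can be computed directly from a SHAP value matrix. A minimal NumPy sketch, using a hypothetical 3-sample, 3-feature matrix and made-up feature names:

```python
import numpy as np

# Hypothetical SHAP value matrix: rows = samples, columns = features.
shap_values = np.array([
    [ 0.4, -0.10,  0.05],
    [-0.3,  0.20, -0.02],
    [ 0.5, -0.15,  0.01],
])
feature_names = ["temp", "pressure", "flow"]

# Global importance = mean absolute SHAP value per feature (the bar lengths).
importance = np.abs(shap_values).mean(axis=0)

# Rank features top-to-bottom, as in the bar plot.
order = np.argsort(importance)[::-1]
for i in order:
    print(f"{feature_names[i]}: {importance[i]:.3f}")
```

Note the absolute value is taken per sample *before* averaging; averaging the raw signed values instead would let positive and negative contributions cancel.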
Why it matters:
- Prioritizes variables for **design focus** and **experiment budget** in Bayesian optimization (BO).
- Flags low-ranked features as candidates for **dimension reduction** or **regularization** when the ranking has a long tail of small bars.
- Guides **feature engineering**: top drivers merit finer grids or interaction checks in PDPs.
Limitations:
- Averaging |SHAP| **hides directionality** (increase vs. decrease).
- Correlated features can **split importance** across substitutes.
- Global ranking may overlook **localized effects** important in a small operating window.