I covered the following topics in my presentation:
- What is Model Interpretability?
- Why do we need to build interpretable models?
- The Accuracy Fallacy: an accurate model does not mean a correct model
- How to create interpretable glassbox models using the Explainable Boosting Machine?
- How do Explainable Boosting Machine models work? (see the formula after this list)
- How to use the InterpretML library to obtain globally and locally important features? (see the code sketch after this list)
- Case study on probing, debugging, and comparing two different models using InterpretML
- Using Microsoft's 'Design Probe' thinking
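
As background for the "how EBMs work" item above: an Explainable Boosting Machine fits a generalized additive model (a GA2M), learning one shape function per feature, plus optional pairwise interaction terms, via cyclic gradient boosting:

```latex
g\big(\mathbb{E}[y]\big) = \beta_0 + \sum_{j} f_j(x_j) + \sum_{i \neq j} f_{ij}(x_i, x_j)
```

Because the prediction decomposes into per-feature terms, each shape function f_j can be plotted and inspected directly, which is what makes the model a glassbox.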
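
The case study code itself is not reproduced in this post, but as a minimal sketch of the workflow the list refers to (the dataset and parameter choices here are illustrative, not taken from the talk), training an EBM and viewing global and local explanations with InterpretML looks like this:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

# Illustrative dataset -- the talk's case study data is not shown here
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a glassbox EBM classifier
ebm = ExplainableBoostingClassifier(random_state=0)
ebm.fit(X_train, y_train)

# Global explanation: overall feature importances and learned shape functions
show(ebm.explain_global())

# Local explanation: per-prediction feature contributions for a few test rows
show(ebm.explain_local(X_test.head(5), y_test.head(5)))
```

`show()` renders interactive dashboards in a notebook, so the same calls cover both the global and local views mentioned above.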