MLI Overview
Driverless AI provides robust interpretability of machine learning models to explain modeling results in a human-readable format. In the Machine Learning Interpretability (MLI) view, Driverless AI employs a host of techniques and methodologies for interpreting and explaining the results of its models. A number of charts are generated automatically, including K-LIME, Shapley, Variable Importance, Decision Tree Surrogate, Partial Dependence, Individual Conditional Expectation, and more. Additionally, you can download a CSV of LIME and Shapley reason codes from this view.
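To give a sense of the surrogate-model idea behind several of these charts, the sketch below (plain scikit-learn, not the Driverless AI API; the dataset and model choices are illustrative assumptions) fits a shallow decision tree to a black-box model's predictions rather than to the original labels, which is the general principle a decision tree surrogate relies on:

```python
# Illustrative sketch of the decision tree surrogate technique.
# Not Driverless AI code: the data, models, and depth are stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)

# A "black-box" model standing in for an experiment's fitted model.
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Fit a shallow tree to the model's predictions, not the true labels,
# so the tree approximates the model's decision-making.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, model.predict(X))

# The tree's rules form a human-readable approximation of the model.
print(export_text(surrogate))
```

Driverless AI builds and displays this kind of surrogate (along with K-LIME, Shapley, and the other charts) automatically; no user code is required.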
This chapter describes Machine Learning Interpretability (MLI) in Driverless AI for both regular and time-series experiments. Refer to the sections that follow for more information.
Additional Resources
- Click here to download our MLI cheat sheet.
- "An Introduction to Machine Learning Interpretability" book.
- Click here to access the H2O.ai MLI Resources repository. This repo includes materials that illustrate applications or adaptations of various MLI techniques for practicing data scientists.
- Click here to view our H2O Driverless AI Machine Learning Interpretability walkthrough video.
Limitations
- This release deprecates experiments run in Driverless AI 1.7.0 and earlier; MLI is not available for experiments from versions <= 1.7.0.
- MLI is not supported for NLP experiments or for multiclass time-series experiments. Contact H2O support for assistance with interpreting NLP models.
- MLI does not require an Internet connection to run on current models.