.. _autodoc-configuration-ref:

Creating & Configuring H2O AutoDoc
=====================================

The H2O AutoDoc requires a running H2O cluster, a trained model, and access to the datasets used to train the model. This section includes code examples for setting up a model, along with basic and advanced H2O AutoDoc configurations. If you want to experiment with a complete end-to-end example, run the :ref:`build-h2o-model-ref` code example before running one of the H2O AutoDoc-specific examples.

- Setup:

  - :ref:`build-h2o-model-ref`

- Basic configurations:

  - :ref:`generate-default-autodoc-ref`
  - :ref:`specify-file-type-ref`

- Advanced configurations:

  - :ref:`specify-mli-frame-ref`
  - :ref:`specify-pdp-features-ref`
  - :ref:`specify-ice-frame-ref`
  - :ref:`enable-shapley-values-ref`
  - :ref:`specify-additional-testsets-ref`
  - :ref:`specify-alternative-models-ref`

.. _build-h2o-model-ref:

Building an H2O Model
~~~~~~~~~~~~~~~~~~~~~

.. code-block:: python

    # import h2o and initialize the h2o cluster
    import h2o
    from h2o.estimators.gbm import H2OGradientBoostingEstimator
    h2o.init()

    # paths to the training and validation datasets
    train_path = "https://s3.amazonaws.com/h2o-training/events/ibm_index/CreditCard_Cat-train.csv"
    valid_path = "https://s3.amazonaws.com/h2o-training/events/ibm_index/CreditCard_Cat-test.csv"

    # import the train and valid datasets
    train = h2o.import_file(train_path, destination_frame='CreditCard_Cat-train.csv')
    valid = h2o.import_file(valid_path, destination_frame='CreditCard_Cat-test.csv')

    # set predictors and response
    predictors = train.columns
    predictors.remove('ID')
    response = "DEFAULT_PAYMENT_NEXT_MONTH"

    # convert the target to a factor
    train[response] = train[response].asfactor()
    valid[response] = valid[response].asfactor()

    # assign frame IDs for later use
    h2o.assign(train, "CreditCard_TRAIN")
    h2o.assign(valid, "CreditCard_VALID")

    # build an H2O-3 GBM model
    gbm = H2OGradientBoostingEstimator(model_id="gbm_model", seed=1234)
    gbm.train(x=predictors, y=response, training_frame=train, validation_frame=valid)

.. _generate-default-autodoc-ref:

Generate a Default H2O AutoDoc
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: python

    # import h2o_autodoc
    from h2o_autodoc import Config
    from h2o_autodoc import render_autodoc

    # get the H2O-3 objects required to create an automatic report
    model = h2o.get_model("gbm_model")

    # specify the full path to the output file
    # replace the path below with your own path
    output_file_path = "full/path/to/your/autodoc/autodoc_report.docx"
    config = Config(output_path=output_file_path)

    # render the autodoc
    render_autodoc(h2o, config, model)

.. _specify-file-type-ref:

Set the H2O AutoDoc File Type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The H2O AutoDoc can generate a Word document or a markdown file. The default report is a Word document (i.e., docx).

**Word Document**

.. code-block:: python

    # import h2o_autodoc
    from h2o_autodoc import Config
    from h2o_autodoc import render_autodoc

    # get the H2O-3 objects required to create an automatic report
    model = h2o.get_model("gbm_model")

    # specify the path to the output file
    output_file_path = "path/to/your/autodoc/my_word_report.docx"

    # no configuration required, because the default is a Word document
    config = Config(output_path=output_file_path)

    # render the autodoc
    render_autodoc(h2o, config, model)

**Markdown File**

Note that when **main_template_type** is set to **"md"**, a zip file is returned. This zip file contains the markdown file and any images that are linked in the markdown file.
.. code-block:: python

    # import h2o_autodoc
    from h2o_autodoc import Config
    from h2o_autodoc import render_autodoc

    # get the H2O-3 objects required to create an automatic report
    model = h2o.get_model("gbm_model")

    # specify the path to the output file
    output_file_path = "path/to/your/autodoc/my_markdown_report.md"

    # set the exported report to markdown ('md')
    main_template_type = "md"
    config = Config(output_path=output_file_path, main_template_type=main_template_type)

    # render the autodoc
    render_autodoc(h2o, config, model)

.. _specify-mli-frame-ref:

Model Interpretation Dataset
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The H2O AutoDoc report can include partial dependence plots (PDPs) and Shapley value feature importance. By default, these calculations are done on the training frame. You can use the **mli_frame** (short for machine learning interpretability dataframe) Config parameter to specify a different dataset on which to perform these calculations. In the example below, the machine learning interpretability (MLI) calculations are performed on the model's validation dataset instead of the training dataset.

.. code-block:: python

    # import h2o_autodoc
    from h2o_autodoc import Config
    from h2o_autodoc import render_autodoc

    # get the H2O-3 objects required to create an automatic report
    model = h2o.get_model("gbm_model")

    # specify the path to the output file
    output_file_path = "path/to/your/autodoc/my_mli_report.docx"

    # specify the H2OFrame on which the partial dependence and Shapley values are calculated
    # here 'valid' was created in the Build H2O Model code example
    mli_frame = valid
    config = Config(output_path=output_file_path, mli_frame=mli_frame)

    # render the autodoc
    render_autodoc(h2o, config, model)

.. _specify-pdp-features-ref:

Partial Dependence Features
~~~~~~~~~~~~~~~~~~~~~~~~~~~

The H2O AutoDoc report includes partial dependence plots (PDPs). By default, PDPs are shown for the top 20 features. This selection is based on the model's built-in variable importance (referred to as Native Importance in the report). You can override the default behavior with the **pdp_feature_list** parameter and specify your own list of features to show in the report.

.. code-block:: python

    # import h2o_autodoc
    from h2o_autodoc import Config
    from h2o_autodoc import render_autodoc

    # get the H2O-3 objects required to create an automatic report
    model = h2o.get_model("gbm_model")

    # specify the path to the output file
    output_file_path = "path/to/your/autodoc/my_pdp_report.docx"

    # specify the features you want PDP plots for
    # these features come from the predictors used in the Build H2O Model code example
    pdp_feature_list = ["EDUCATION", "LIMIT_BAL", "AGE"]
    config = Config(output_path=output_file_path, pdp_feature_list=pdp_feature_list)

    # render the autodoc
    render_autodoc(h2o, config, model)
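If you want to see the native importance ranking that drives the default top-20 selection before overriding it, you can pull it directly from the model. The snippet below is a minimal sketch rather than part of the AutoDoc API: it assumes the **model**, **output_file_path**, and **Config** objects from the example above and uses H2O-3's **varimp(use_pandas=True)**, which returns the variable importance table as a pandas DataFrame.

.. code-block:: python

    # view the model's native variable importance ranking (the basis of the
    # report's default top-20 PDP feature selection)
    varimp = model.varimp(use_pandas=True)
    print(varimp.head(20))

    # build a PDP feature list from the top of that ranking instead of hand-picking
    pdp_feature_list = list(varimp["variable"].head(5))
    config = Config(output_path=output_file_path, pdp_feature_list=pdp_feature_list)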
.. _specify-ice-frame-ref:

Specify ICE Records
~~~~~~~~~~~~~~~~~~~

The H2O AutoDoc can overlay partial dependence plots with individual conditional expectation (ICE) plots. You can specify which observations (i.e., rows) you'd like to plot (manual selection), or you can let H2O AutoDoc automatically select observations.

**Manual Selection**

.. code-block:: python

    # import h2o_autodoc
    from h2o_autodoc import Config
    from h2o_autodoc import render_autodoc

    # get the H2O-3 objects required to create an automatic report
    model = h2o.get_model("gbm_model")

    # specify the path to the output file
    output_file_path = "path/to/your/autodoc/my_manual_ice_report.docx"

    # specify an H2OFrame composed of the records you want shown in the ICE plots
    # here 'valid' was created in the Build H2O Model code example - we use the first 2 rows
    ice_frame = valid[:2, :]
    config = Config(output_path=output_file_path, ice_frame=ice_frame)

    # render the autodoc
    render_autodoc(h2o, config, model)

**Automatic Selection**

The **num_ice_rows** Config parameter controls the number of observations selected for an ICE plot. This feature is disabled by default (i.e., set to 0). Observations are selected by binning the predictions into N quantiles and selecting the first observation in each quantile.

.. code-block:: python

    # import h2o_autodoc
    from h2o_autodoc import Config
    from h2o_autodoc import render_autodoc

    # get the H2O-3 objects required to create an automatic report
    model = h2o.get_model("gbm_model")

    # specify the path to the output file
    output_file_path = "path/to/your/autodoc/my_auto_ice_report.docx"

    # specify the number of rows you want automatically selected for the ICE plots
    num_ice_rows = 3
    config = Config(output_path=output_file_path, num_ice_rows=num_ice_rows)

    # render the autodoc
    render_autodoc(h2o, config, model)

.. _enable-shapley-values-ref:

Enable Shapley Values
~~~~~~~~~~~~~~~~~~~~~

Shapley values are provided for supported H2O-3 algorithms. (For the list of supported algorithms, see the H2O-3 User Guide.)

**Note**: Shapley values are disabled by default because they can take a long time to compute for wide datasets.

.. code-block:: python

    # import h2o_autodoc
    from h2o_autodoc import Config
    from h2o_autodoc import render_autodoc

    # get the H2O-3 objects required to create an automatic report
    model = h2o.get_model("gbm_model")

    # specify the path to the output file
    output_file_path = "path/to/your/autodoc/my_shapley_report.docx"

    # enable Shapley values
    use_shapley = True
    config = Config(output_path=output_file_path, use_shapley=use_shapley)

    # render the autodoc
    render_autodoc(h2o, config, model)

.. _specify-additional-testsets-ref:

Provide Additional Testsets
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You can provide a list of additional test sets (each of which is an H2OFrame) to the **render_autodoc()** function. Performance metrics, plots, and tables will be created for each of these additional datasets.

.. code-block:: python

    # import h2o_autodoc
    from h2o_autodoc import Config
    from h2o_autodoc import render_autodoc

    # get the H2O-3 objects required to create an automatic report
    model = h2o.get_model("gbm_model")

    # specify the path to the output file
    output_file_path = "path/to/your/autodoc/my_additional_testsets_report.docx"
    config = Config(output_path=output_file_path)

    # specify additional test sets
    full_test_data = h2o.import_file("https://s3.amazonaws.com/h2o-training/events/ibm_index/CreditCard_Cat-test.csv")
    test1, test2 = full_test_data.split_frame(ratios=[.5], seed=1234, destination_frames=['mytest1', 'mytest2'])

    # render the autodoc
    render_autodoc(h2o, config, model, additional_testsets=[test1, test2])
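If you want a quick preview of how the model performs on each additional test set before generating the report, you can score the frames directly with H2O-3. The snippet below is a minimal sketch that assumes the **model**, **test1**, and **test2** objects from the example above; because this is a binomial GBM, AUC is used as the preview metric.

.. code-block:: python

    # score each additional test set and print a quick preview metric
    for name, frame in [("mytest1", test1), ("mytest2", test2)]:
        perf = model.model_performance(test_data=frame)
        print(name, "AUC:", perf.auc())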
.. _specify-alternative-models-ref:

Provide Alternative Models
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You can provide a list of alternative models to the **render_autodoc()** function. This creates alternative model tables that compare the parameters a user can grid over (i.e., traditional hyperparameters plus other grid-searchable parameters).

**Code Example**

.. code-block:: python

    # run AutoML to create several models
    import h2o
    from h2o.automl import H2OAutoML
    h2o.init()

    # import the titanic dataset from Amazon S3
    titanic = h2o.import_file(
        "https://s3.amazonaws.com/h2o-public-test-data/"
        "smalldata/gbm_test/titanic.csv",
        destination_frame="titanic_all",
    )

    # specify the predictors and response
    predictors = ["home.dest", "cabin", "embarked", "age"]
    response = "survived"
    titanic["survived"] = titanic["survived"].asfactor()

    # split the titanic dataset into train, valid, and test
    train, valid, test = titanic.split_frame(
        ratios=[0.8, 0.1],
        destination_frames=["titanic_train", "titanic_valid", "titanic_test"],
    )

    # run AutoML
    automl = H2OAutoML(max_models=3, seed=1)
    automl.train(
        x=predictors,
        y=response,
        training_frame=train,
        validation_frame=valid,
    )
    board = automl.leaderboard.as_data_frame()

    # build a report on the best performing model
    best_model = automl.leader

    # compare the best model to the other models in the leaderboard
    models = [h2o.get_model(x) for x in board["model_id"][1:]]

    # import h2o_autodoc
    from h2o_autodoc import Config
    from h2o_autodoc import render_autodoc

    # specify the path to the output file
    output_file_path = "path/to/your/autodoc/my_alternative_models_report.docx"
    config = Config(output_path=output_file_path)

    # render a report with your best model and alternative models
    render_autodoc(
        h2o=h2o,
        config=config,
        model=best_model,
        train=train,
        valid=valid,
        test=test,
        alternative_models=models,
    )
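If you want a quick look at the kinds of parameters those alternative model tables compare, you can inspect each model's trained parameter values directly. The snippet below is a minimal sketch that assumes the **best_model** and **models** objects from the example above and uses H2O-3's **actual_params** dictionary; the keys shown (**ntrees**, **max_depth**) apply only to tree-based models, so a placeholder is printed for models that do not have them.

.. code-block:: python

    # print the parameter values each model was actually trained with
    for m in [best_model] + models:
        params = m.actual_params
        print(m.model_id,
              "ntrees:", params.get("ntrees", "n/a"),
              "max_depth:", params.get("max_depth", "n/a"))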