In [1]:
import pandas as pd
import numpy as np
import pickle
import xgboost as xgb
import os
import shap
import matplotlib.pyplot as pl
In [2]:
shap.initjs()
In [3]:
ModelsDir = '/home/kate/Research/Property/Models/'
DataDir = '/home/kate/Research/Property/Data/'
In [4]:
training_normal_dataset = pd.read_csv('%sproperty_wcs_training_for_normal.csv'%DataDir, error_bad_lines=False, index_col=False)
training_gamma_dataset = pd.read_csv('%sproperty_wcs_training_for_gamma.csv'%DataDir, error_bad_lines=False, index_col=False)
In [5]:
Model_Linear_Reg = 'wc_Linear_Reg_XGB_mae_0'
Model_LogRegObj_Reg='wc_LogRegObj_Reg_XGB_mae_0'
Model_Gamma_Reg='wc_Gamma_Reg_XGB_mae_0'
In [6]:
target_column = 'cova_il_nc_water'
In [7]:
featureset=[
 'cova_deductible',
 'roofcd_encd',
 'sqft',
 'usagetype_encd',
 'yearbuilt',
 'cova_limit',
 'water_risk_sev_3_blk',
 'ecy'
]
In [8]:
#
X_normal=training_normal_dataset[featureset]
Dtrain_normal = xgb.DMatrix(X_normal.values)
#
X_gamma=training_gamma_dataset[featureset]
Dtrain_gamma = xgb.DMatrix(X_gamma.values)
In [9]:
# Load the three pickled XGBoost models
xgb_model_file = '%s%s.model' % (ModelsDir, Model_Linear_Reg)
with open(xgb_model_file, 'rb') as f:
    xgb_model_Linear_Reg = pickle.load(f)
#
xgb_model_file = '%s%s.model' % (ModelsDir, Model_LogRegObj_Reg)
with open(xgb_model_file, 'rb') as f:
    xgb_model_LogRegObj_Reg = pickle.load(f)
#
xgb_model_file = '%s%s.model' % (ModelsDir, Model_Gamma_Reg)
with open(xgb_model_file, 'rb') as f:
    xgb_model_Gamma_Reg = pickle.load(f)

Explain the models

In [10]:
explainer_Linear_Reg = shap.TreeExplainer(xgb_model_Linear_Reg, model_output='raw')
Setting feature_perturbation = "tree_path_dependent" because no background data was given.
In [11]:
explainer_LogRegObj_Reg = shap.TreeExplainer(xgb_model_LogRegObj_Reg, model_output='raw')
Setting feature_perturbation = "tree_path_dependent" because no background data was given.
In [12]:
explainer_Gamma_Reg = shap.TreeExplainer(xgb_model_Gamma_Reg, model_output='raw')
Setting feature_perturbation = "tree_path_dependent" because no background data was given.

SHAP values sum to the difference between the expected output of the model and the output for the current record. Note that for the Tree SHAP implementation the margin output of the model is explained, not the transformed output (for example, the gamma model uses a log link, so its margin is on the log scale rather than the final dollar prediction). This means the SHAP values for each model are in that model's margin units: large positive values push the predicted water loss up, while large negative values push it down. A quick additivity check is sketched after the SHAP values are computed below.

In [13]:
shap_values_Linear_Reg = explainer_Linear_Reg.shap_values(Dtrain_normal)
In [14]:
shap_values_LogRegObj_Reg = explainer_LogRegObj_Reg.shap_values(Dtrain_normal)
In [15]:
shap_values_Gamma_Reg = explainer_Gamma_Reg.shap_values(Dtrain_gamma)
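
The additivity check mentioned above, as a minimal sketch: the explainer's expected value plus the per-row SHAP values should reproduce the model's raw margin output. This assumes the pickled objects are xgb.Booster models, so predict accepts a DMatrix.

In [ ]:
# Additivity sanity check: base value + sum of SHAP values == raw margin prediction
margin_pred = xgb_model_Linear_Reg.predict(Dtrain_normal, output_margin=True)
reconstructed = explainer_Linear_Reg.expected_value + shap_values_Linear_Reg.sum(axis=1)
print(np.abs(margin_pred - reconstructed).max())  # should be close to zero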
In [16]:
training_normal_dataset[training_normal_dataset[target_column]==training_normal_dataset[target_column].min()].head(1)
Out[16]:
cova_deductible roofcd_encd sqft log_sqft usagetype_encd yearbuilt log_yearbuilt cova_limit log_cova_limit water_risk_sev_3_blk ... log_cova_il_nc_water_5 log_cova_il_nc_water_10 fold_0 fold_1 fold_2 fold_3 fold_4 log_cova_il_nc_water lin_reg_xgb_mae LogRegObj_reg_xgb_mae
1947 2500 7 2300 7.740664 6 2001 7.601402 500000 13.122363 164 ... 4.616703 4.616703 1 1 1 1 0 4.616703 4288.818435 5695.108614

1 rows × 30 columns

In [17]:
shap.force_plot(explainer_Linear_Reg.expected_value, shap_values_Linear_Reg[1947,:], X_normal.iloc[1947,:])
Out[17]:
[SHAP force plot]
In [18]:
shap.force_plot(explainer_LogRegObj_Reg.expected_value, shap_values_LogRegObj_Reg[1947,:], X_normal.iloc[1947,:])
Out[18]:
[SHAP force plot]
In [19]:
training_gamma_dataset[training_gamma_dataset[target_column]==training_gamma_dataset[target_column].min()].head(1)
Out[19]:
cova_deductible roofcd roofcd_encd sqft log_sqft usagetype_encd usagetype yearbuilt log_yearbuilt cova_limit ... cova_il_nc_water_1 cova_il_nc_water_5 cova_il_nc_water_10 fold_0 fold_1 fold_2 fold_3 fold_4 cova_il_nc_water gamma_reg_xgb_mae
1871 2500 TILE 7 2300 7.740664 6 RENTAL 2001 7.601402 500000 ... 101.16 101.16 101.16 1 0 1 1 1 101.16 7138.306519

1 rows × 27 columns

In [20]:
shap.force_plot(explainer_Gamma_Reg.expected_value, shap_values_Gamma_Reg[1871,:], X_gamma.iloc[1871,:])
Out[20]:
[SHAP force plot]
In [21]:
training_normal_dataset[training_normal_dataset[target_column]==training_normal_dataset[target_column].max()].head(1)
Out[21]:
cova_deductible roofcd_encd sqft log_sqft usagetype_encd yearbuilt log_yearbuilt cova_limit log_cova_limit water_risk_sev_3_blk ... log_cova_il_nc_water_5 log_cova_il_nc_water_10 fold_0 fold_1 fold_2 fold_3 fold_4 log_cova_il_nc_water lin_reg_xgb_mae LogRegObj_reg_xgb_mae
4637 2500 7 5000 8.517193 7 2005 7.603399 1000000 13.815511 198 ... 8.517193 9.21034 1 1 1 1 0 13.16683 14411.048815 12087.848707

1 rows × 30 columns

In [22]:
shap.force_plot(explainer_Linear_Reg.expected_value, shap_values_Linear_Reg[4637,:], X_normal.iloc[4637,:])
Out[22]:
[SHAP force plot]
In [23]:
shap.force_plot(explainer_LogRegObj_Reg.expected_value, shap_values_LogRegObj_Reg[4637,:], X_normal.iloc[4637,:])
Out[23]:
[SHAP force plot]
In [24]:
training_gamma_dataset[training_gamma_dataset[target_column]==training_gamma_dataset[target_column].max()].head(1)
Out[24]:
cova_deductible roofcd roofcd_encd sqft log_sqft usagetype_encd usagetype yearbuilt log_yearbuilt cova_limit ... cova_il_nc_water_1 cova_il_nc_water_5 cova_il_nc_water_10 fold_0 fold_1 fold_2 fold_3 fold_4 cova_il_nc_water gamma_reg_xgb_mae
2682 1000 COMPO 8 1400 7.244228 6 RENTAL 1990 7.59589 300000 ... 1000.0 5000.0 10000.0 1 1 1 1 0 46547.85 9520.887085

1 rows × 27 columns

In [25]:
shap.force_plot(explainer_Gamma_Reg.expected_value, shap_values_Gamma_Reg[2682,:], X_gamma.iloc[2682,:])
Out[25]:
[SHAP force plot]
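
The interactive force plots above rely on the SHAP Javascript library and are not preserved in a static export. As a sketch, a single-prediction force plot can be rendered with matplotlib instead (the matplotlib=True option only supports single rows):

In [ ]:
# Static fallback for one prediction; avoids the Javascript renderer
shap.force_plot(explainer_Gamma_Reg.expected_value, shap_values_Gamma_Reg[2682,:],
                X_gamma.iloc[2682,:], matplotlib=True)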

Summarize the impact of all features over the entire dataset

A SHAP value for a feature of a specific prediction represents how much the model prediction changes when we observe that feature. In the summary plots below we plot all the SHAP values for a single feature (such as water_risk_sev_3_blk) on a row, where the x-axis is the SHAP value (in the margin units of each model). Doing this for all features shows which features drive the model's prediction strongly and which affect it only a little. Note that when points don't fit together on the line they pile up vertically to show density. Each dot is also colored by the value of that feature, from high to low.

In [26]:
shap.summary_plot(shap_values_Linear_Reg, X_normal)
In [27]:
shap.summary_plot(shap_values_LogRegObj_Reg, X_normal)
In [28]:
shap.summary_plot(shap_values_Gamma_Reg, X_gamma)
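
For a compact global ranking, the same SHAP values can be summarized as the mean absolute SHAP value per feature. A minimal sketch for the linear-objective model, using the bar plot type of summary_plot plus the equivalent calculation by hand:

In [ ]:
# Global importance as mean |SHAP| per feature
shap.summary_plot(shap_values_Linear_Reg, X_normal, plot_type='bar')
pd.Series(np.abs(shap_values_Linear_Reg).mean(axis=0), index=featureset).sort_values(ascending=False)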

Examine how changes in a feature change the model's prediction

The XGBoost models trained above are quite complex, but by plotting the SHAP value for a feature against the actual value of that feature across all records we can see how changes in the feature's value affect the model's output. Note that these plots are very similar to standard partial dependence plots, but they have the added advantage of showing how much context matters for a feature (in other words, how much interaction effects matter). How much interaction effects change the importance of a feature is captured by the vertical dispersion of the data points: the same feature value can move the prediction a lot for some records and only a little for others, because other features of those records change how much that feature matters. We color the data points by another feature that helps explain this interaction variance; in the plots below the water risk severity score is colored by yearbuilt, sqft, ecy and cova_deductible in turn.

The y-axis in the plots below represents the SHAP value for that feature, so a value of -4 means observing that feature lowers the model's margin output by 4, while a value of +2 means observing that feature raises it by 2.

Note that these plots only explain how the XGBoost models work, not necessarily how reality works. Since the models are trained on observational data, they are not necessarily causal models, so just because changing a factor raises the model's predicted loss does not always mean it will raise the actual loss.

In [29]:
shap.dependence_plot('water_risk_sev_3_blk', shap_values_Linear_Reg, X_normal, interaction_index='yearbuilt')
shap.dependence_plot('water_risk_sev_3_blk', shap_values_LogRegObj_Reg, X_normal, interaction_index='yearbuilt')
shap.dependence_plot('water_risk_sev_3_blk', shap_values_Gamma_Reg, X_gamma, interaction_index='yearbuilt')
In [30]:
shap.dependence_plot('water_risk_sev_3_blk', shap_values_Linear_Reg, X_normal, interaction_index='sqft')
shap.dependence_plot('water_risk_sev_3_blk', shap_values_LogRegObj_Reg, X_normal, interaction_index='sqft')
shap.dependence_plot('water_risk_sev_3_blk', shap_values_Gamma_Reg, X_gamma, interaction_index='sqft')
In [31]:
shap.dependence_plot('water_risk_sev_3_blk', shap_values_Linear_Reg, X_normal, interaction_index='ecy')
shap.dependence_plot('water_risk_sev_3_blk', shap_values_LogRegObj_Reg, X_normal, interaction_index='ecy')
shap.dependence_plot('water_risk_sev_3_blk', shap_values_Gamma_Reg, X_gamma, interaction_index='ecy')
In [32]:
shap.dependence_plot('water_risk_sev_3_blk', shap_values_Linear_Reg, X_normal, interaction_index='cova_deductible')
shap.dependence_plot('water_risk_sev_3_blk', shap_values_LogRegObj_Reg, X_normal, interaction_index='cova_deductible')
shap.dependence_plot('water_risk_sev_3_blk', shap_values_Gamma_Reg, X_gamma, interaction_index='cova_deductible')
In [33]:
shap.dependence_plot('yearbuilt', shap_values_Linear_Reg, X_normal, interaction_index='sqft')
shap.dependence_plot('yearbuilt', shap_values_LogRegObj_Reg, X_normal, interaction_index='sqft')
shap.dependence_plot('yearbuilt', shap_values_Gamma_Reg, X_gamma, interaction_index='sqft')
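
When no interaction_index is passed, dependence_plot defaults to 'auto' and colors the points by the feature with the strongest apparent interaction. A minimal sketch for the water risk feature:

In [ ]:
# Let SHAP pick the coloring feature automatically (interaction_index='auto' is the default)
shap.dependence_plot('water_risk_sev_3_blk', shap_values_Linear_Reg, X_normal)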