Metalearner Explainer
Explainer
Explainer (method, control_name, X, tau, classes, model_tau=None, features=None, normalize=True, test_size=0.3, random_state=None, override_checks=False, r_learners=None)
The Explainer class handles all feature explanation/interpretation functions, including plotting feature importances, Shapley value distributions, and Shapley value dependence plots.
Currently supported methods are:

- auto (calculates importance based on the estimator's default implementation of feature importance; the estimator must be tree-based). Note: if no estimator is provided, lightgbm's LGBMRegressor is used with "gain" as the importance type
- permutation (calculates importance based on the mean decrease in accuracy when a feature column is permuted; the estimator can be of any form)
- shapley (calculates Shapley values; the estimator must be tree-based)

Hint: for permutation, downsample the data for better performance, especially if X.shape[1] is large (see the sketch below).
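As a rough illustration of that hint, the sketch below subsamples rows before estimating permutation importance. The arrays, the sample size of 10,000, and the seed are arbitrary stand-ins, not part of the Explainer API:

    import numpy as np

    # Stand-in data: replace with your real feature matrix and treatment-effect array.
    rng = np.random.RandomState(42)
    X = rng.normal(size=(100_000, 50))
    tau = rng.normal(size=(100_000, 1))

    # Keep a random subset of rows so each permutation pass is cheaper.
    n_sub = min(10_000, X.shape[0])
    idx = rng.choice(X.shape[0], size=n_sub, replace=False)
    X_sub, tau_sub = X[idx], tau[idx]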
Args:
    method (str): auto, permutation, shapley
    control_name (str/int/float): name of control group
    X (np.matrix): a feature matrix
    tau (np.array): a treatment effect vector (estimated/actual)
    classes (dict): a mapping of treatment names to indices (used for indexing the tau array)
    model_tau (sklearn/lightgbm/xgboost model object): a model object
    features (np.array): list/array of feature names. If None, an enumerated list will be used
    normalize (bool): normalize by sum of importances if method=auto (defaults to True)
    test_size (float/int): if float, represents the proportion of the dataset to include in the test split; if int, represents the absolute number of test samples (used for estimating permutation importance)
    random_state (int/RandomState instance/None): random state used in permutation importance estimation
    override_checks (bool): overrides self.check_conditions (e.g. if importance/Shapley values are pre-computed)
    r_learners (dict): a mapping of treatment group to fitted R Learners
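A minimal usage sketch follows. The import path, the get_importance() return value, and the plot_importance() call are assumptions based on the causalml source and should be checked against your installed version; the synthetic data, treatment name, feature names, and LGBMRegressor settings are illustrative only:

    import numpy as np
    from lightgbm import LGBMRegressor
    from causalml.inference.meta.explainer import Explainer  # assumed import path

    # Synthetic example: 1,000 samples, 5 features, a single treatment group.
    rng = np.random.RandomState(0)
    X = rng.normal(size=(1000, 5))
    tau = 0.5 * X[:, [0]] + rng.normal(scale=0.1, size=(1000, 1))  # estimated effects, one column per treatment

    explainer = Explainer(
        method="auto",                       # or "permutation" / "shapley"
        control_name="control",
        X=X,
        tau=tau,
        classes={"treatment_a": 0},          # treatment name -> column index into tau
        model_tau=LGBMRegressor(importance_type="gain"),  # tree-based estimator, as required by method="auto"
        features=[f"feature_{i}" for i in range(X.shape[1])],
    )

    importance = explainer.get_importance()  # assumed: per-treatment feature importances
    explainer.plot_importance()              # assumed: bar plot of feature importances per treatment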