from causalnlp import CausalInferenceModel
from causalnlp import Autocoder
What is the causal impact of a positive review on product views?
We use a semi-simulated dataset generated from this repo, which is available in the sample_data
folder. The reviews and product types are real, while the outcomes (e.g., 1=product clicked, 0=not clicked) are simulated.
import pandas as pd
df = pd.read_csv('sample_data/music_seed50.tsv', sep='\t', on_bad_lines='skip')  # use error_bad_lines=False on pandas < 1.3
df.head()
`Y_sim` is the simulated outcome indicating whether or not the product was clicked. `C_true` is a categorical variable, where 1 is an audio CD and 0 is something else (e.g., MP3). In this dataset, outcomes were simulated such that `C_true` is a confounding variable for this problem.
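To build intuition for why a confounder like `C_true` matters, here is a toy simulation (hypothetical data, independent of the dataset above) comparing a naive difference in means with a stratified, confounder-adjusted estimate:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
c = rng.integers(0, 2, size=n)                              # binary confounder
t = (rng.random(n) < 0.2 + 0.6 * c).astype(int)             # c makes treatment more likely
y = (rng.random(n) < 0.1 + 0.1 * t + 0.3 * c).astype(int)   # true treatment effect = 0.1

# Naive difference in means is biased upward, because c drives both t and y
naive = y[t == 1].mean() - y[t == 0].mean()

# Stratifying on c and averaging the per-stratum effects recovers roughly 0.1
adjusted = np.mean([y[(t == 1) & (c == k)].mean() - y[(t == 0) & (c == k)].mean()
                    for k in (0, 1)])
print(f'naive={naive:.2f}, adjusted={adjusted:.2f}')
```

The adjusted estimate lands near the true effect of 0.1, while the naive one is inflated by the confounder.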
The treatment is whether or not the review is positive, which affects `Y_sim`. Let's pretend we don't have a rating and need to infer this from the text using the `Autocoder`. This can be done with:
ac = Autocoder()
df = ac.code_sentiment(df['text'].values, df, batch_size=16, binarize=True)
df['T_ac'] = df['positive']
We've already created this as the `T_ac` column (along with the `positive` and `negative` columns), so invoking the above is not needed. Note that `T_ac` is an imperfect approximation of `T_true`. In CausalNLP, we can include the raw text as covariates to improve our estimates.
Let's fit the causal inference model. We will adjust for both `C_true` and the raw text of the review to minimize bias from confounding. CausalNLP supports the following metalearners: S-Learner, T-Learner, X-Learner, and R-Learner. See this paper for more information on these. We will use the T-Learner as the metalearner here. By default, T-Learners use LightGBM classifiers with 31 leaves. Let's increase the number of leaves to 500. In practice, you can supply a learner with hyperparameters that you've tuned beforehand to accurately predict the outcome.
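For intuition before fitting, the T-Learner strategy (fit one outcome model per treatment arm, then average the difference in their predictions) can be sketched on synthetic data. This is illustrative only, not CausalNLP's internal code:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5_000
x = rng.normal(size=n)
t = rng.integers(0, 2, size=n)
y = 1.0 + 2.0 * x + 0.5 * t + rng.normal(scale=0.1, size=n)  # true effect = 0.5

def fit_linear(xs, ys):
    # ordinary least squares on [1, x]
    X = np.column_stack([np.ones(len(xs)), xs])
    beta, *_ = np.linalg.lstsq(X, ys, rcond=None)
    return beta

b1 = fit_linear(x[t == 1], y[t == 1])   # outcome model for the treated
b0 = fit_linear(x[t == 0], y[t == 0])   # outcome model for the controls

# ITE = predicted outcome under treatment minus under control; ATE = mean ITE
X_all = np.column_stack([np.ones(n), x])
ite = X_all @ b1 - X_all @ b0
print(f'ATE estimate: {ite.mean():.2f}')  # close to 0.5
```

CausalNLP swaps the linear models here for the base learner you supply (LightGBM by default).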
from lightgbm import LGBMClassifier
from sklearn.linear_model import LogisticRegression, LinearRegression
cm = CausalInferenceModel(df, method='t-learner',
learner=LGBMClassifier(num_leaves=500),
treatment_col='T_ac',
outcome_col='Y_sim',
text_col='text',
include_cols=['C_true'])
cm.fit()
cm.estimate_ate()
The overall ATE is an increase of 13 percentage points in probability.
Unlike machine learning, causal inference on real-world datasets offers no ground truth against which our estimate can be compared. However, since this is a simulated dataset, we can compare our estimate with the ground-truth ATE of 0.1479 (a 14.79 percentage point change in outcome), and our estimate is close.
from collections import defaultdict
import numpy as np
def ATE_adjusted(C, T, Y):
    x = defaultdict(list)
    for c, t, y in zip(C, T, Y):
        x[c, t].append(y)
    C0_ATE = np.mean(x[0, 1]) - np.mean(x[0, 0])
    C1_ATE = np.mean(x[1, 1]) - np.mean(x[1, 0])
    return np.mean([C0_ATE, C1_ATE])
print(ATE_adjusted(df.C_true, df.T_true, df.Y_sim))
Such oracle estimates are not available for real-world datasets, as mentioned. For real-world scenarios, we can, at least, evaluate the robustness of the ATE estimate to various data manipulations (i.e., sensitivity analysis or refutation).
cm.evaluate_robustness()
Here, we see that the distance from the desired value is near zero for each sensitivity analysis method, which is good.
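One common refutation check, the placebo treatment, can be sketched by hand: randomly shuffle the treatment column and confirm that the estimated effect collapses toward zero. A toy linear example (illustrative only, not CausalNLP's internals):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5_000
x = rng.normal(size=n)
t = rng.integers(0, 2, size=n)
y = 2.0 * x + 0.5 * t + rng.normal(scale=0.1, size=n)  # true effect = 0.5

def ate_linear(x, t, y):
    # regress y on [1, x, t]; for a linear model the t coefficient is the ATE
    X = np.column_stack([np.ones(len(x)), x, t])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[2]

real = ate_linear(x, t, y)                       # near the true effect of 0.5
placebo = ate_linear(x, rng.permutation(t), y)   # near 0, as it should be
print(f'real={real:.2f}, placebo={placebo:.2f}')
```

If the placebo estimate did not shrink toward zero, that would suggest the model is picking up something other than the treatment.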
We can also estimate the conditional average treatment effect (CATE) for a subpopulation of interest, e.g., reviews that mention the word "toddler":
cm.estimate_ate(df['text'].str.contains('toddler'))
Individualized Treatment Effect (ITE)
We can easily predict the treatment effect for new or existing observations on a per-unit basis. We just need to make sure the DataFrame supplied as input to `CausalInferenceModel.predict` contains the right columns. This can easily be checked with `CausalInferenceModel.get_required_columns`:
cm.get_required_columns()
test_df = pd.DataFrame({
'T_ac' : [1],
'C_true' : [1],
'text' : ['I love the music of Zamfir and his pan flute.']
})
cm.predict(test_df)
Model Interpretability
We can use the `interpret` method to identify the attributes most predictive of individualized treatment effects across observations. Features beginning with `v_` are word (or vocabulary) features. We see that words like "music", "cd", and "love", in addition to the categorical attribute `C_true` (the known confounder, which is 1 for audio CDs), are most predictive of individualized causal effects.
cm.interpret(plot=False, method='feature_importance')[1][:10]
cm.explain(test_df, row_num=0)
What is the causal impact of having a PhD on making over $50K?
Text is Optional in CausalNLP
Despite the "NLP" in the name, CausalNLP can be used for causal analyses on traditional tabular datasets with no text fields.
Note: This dataset is from the early to mid 1990s, and we are using it as a toy dataset for demonstration purposes only.
import pandas as pd
df = pd.read_csv('sample_data/adult-census.csv')
df = df.rename(columns=lambda x: x.strip())
df = df.applymap(lambda x: x.strip() if isinstance(x, str) else x)
filter_set = {'Doctorate'}  # use a set so membership is an exact match, not a substring test
df['treatment'] = df['education'].apply(lambda x: 1 if x in filter_set else 0)
df.head()
from causalnlp import CausalInferenceModel
cm = CausalInferenceModel(df, method='t-learner',
treatment_col='treatment',
outcome_col='class',
ignore_cols=['fnlwgt', 'education','education-num']).fit()
Overall, the average treatment effect of having a PhD is an increase of 20 percentage points in the probability of making over $50K (with respect to this model and dataset):
cm.estimate_ate()
For those who have a Master's degree:
cm.estimate_ate(cm.df['education'] == 'Masters')
For those who are high school dropouts:
cm.estimate_ate(cm.df['education'].isin(['Preschool', '1st-4th', '5th-6th', '7th-8th', '9th', '10th', '12th']))
What is the causal impact of a job training program on earnings?
This is another example of causal inference on purely tabular data (no text). Here, we will use the famous LaLonde dataset from a job training study.
import pandas as pd
df = pd.read_csv('sample_data/lalonde.csv')
df.head()
Unlike the other metalearners, which use LightGBM as a default, the S-Learner uses linear regression as the default base learner for regression problems, a model that is often used for this dataset. The ATE estimate is $1548, which indicates that the job training program had an overall positive effect on earnings.
from causalnlp import CausalInferenceModel
cm = CausalInferenceModel(df, method='s-learner',
treatment_col='treat',
outcome_col='re78',
include_cols=['age', 'educ', 'black', 'hispan', 'married', 'nodegree', 're74', 're75'])
cm.fit()
print(cm.estimate_ate()) # ATE estimate = $1548
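The S-Learner idea used above (fit a single outcome model with the treatment as just another feature, then average the difference in predictions with the treatment toggled on and off) can be sketched on synthetic data. This is illustrative only, not CausalNLP's internal code:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
x = rng.normal(size=n)
t = rng.integers(0, 2, size=n).astype(float)
y = 3.0 * x + 2.0 * t + rng.normal(scale=0.1, size=n)  # true effect = 2.0

# S-Learner: ONE model fit over [1, x, t]
X = np.column_stack([np.ones(n), x, t])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# ATE = mean difference between predictions with t forced to 1 vs. 0
X1 = np.column_stack([np.ones(n), x, np.ones(n)])
X0 = np.column_stack([np.ones(n), x, np.zeros(n)])
ate = np.mean(X1 @ beta - X0 @ beta)
print(f'ATE estimate: {ate:.2f}')  # close to 2.0
```

For a linear base learner, this difference is exactly the coefficient on the treatment feature; with a nonlinear learner, the two predictions can differ per unit, which is what yields individualized effects.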