This project analyzes the data generated by an A/B test run on an e-commerce website. The goal of the analysis is to understand whether having user reviews on the products improves the conversion rate.
This notebook contains the analysis broken down into 5 steps:
Experiment configuration;
Hypothesis testing and success rates;
Distribution plots of the samples;
Statistical power calculation;
Influence of sample size in the A/B Test.
# Imports
import datetime
import matplotlib
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import scipy.stats as scs
# Plot formatting
%matplotlib inline
plt.style.use('fivethirtyeight')
plt.rcParams['figure.figsize'] = (15, 6)
Variant A is the control group.
Variant B is the test group.
# Loading data from CSV
df_sales = pd.read_csv('data/dataset.csv')
df_sales
# Data types
df_sales.dtypes
# Need to convert date from object to datetime
df_sales['date'] = pd.to_datetime(df_sales['date'], errors='coerce')
# Max date
df_sales['date'].max()
# Min date
df_sales['date'].min()
# Check for null values
df_sales.isnull().sum()
# Check for duplicate IDs (the number of unique IDs should equal the number of rows)
df_sales.id.nunique()
# Distribution of purchase outcomes (0 = no purchase, 1 = purchase)
df_sales.purchase.value_counts()
# Check the number of samples of each variant
df_sales.variant.value_counts()
Calculating basic probabilities
# Probability of a user being shown variant A
df_sales[df_sales.variant == 'A'].shape[0] / df_sales.shape[0] * 100
# Probability of a user being shown variant B
df_sales[df_sales.variant == 'B'].shape[0] / df_sales.shape[0] * 100
# Total number of purchases
df_sales.purchase.sum()
# Number of purchases in variant A
df_sales[df_sales.variant == 'A'].purchase.sum()
# Number of purchases in variant B
df_sales[df_sales.variant == 'B'].purchase.sum()
# Overall conversion rate
df_sales.purchase.mean()
# Conversion rate for variant A
df_sales[df_sales.variant == 'A'].purchase.mean()
# Conversion rate for variant B
df_sales[df_sales.variant == 'B'].purchase.mean()
Do pages with user reviews increase the conversion rate?
Variant A: Shows comments and reviews from other users.
Variant B: Does not show comments and reviews on the product page.
Given that there is a date associated with each record, it's technically possible to run a continuous hypothesis test as new events are observed. The challenge with that approach is deciding whether to consider the test complete as soon as one variant is deemed superior or to continue running it for a set period of time. Since this test has already been completed, the dataset will be considered as a whole, without regard for the dates of the records.
Here we assume that Variant A is superior unless the new variant proves to generate a higher conversion rate at a Type I error rate of 5%. We therefore define the hypotheses as:
H0 states that the difference in conversion probability between the two groups is zero.
H1 states that the difference in conversion probability between the two groups is greater than zero.
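In notation, with $p_A$ and $p_B$ the conversion probabilities of the control and test groups:
$$ H_0: p_B - p_A = 0 $$
$$ H_1: p_B - p_A > 0 $$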
# Find the start date for the A/B test
df_sales[df_sales.variant == 'B'].date.min()
# Find the end date for the A/B test
df_sales[df_sales.variant == 'B'].date.max()
Looks like the test ran throughout January 2020, so we'll keep only the data from that period.
# The A/B Test ran throughout January 2020
df_sales_2020 = df_sales[(df_sales['date'].dt.year == 2020) & (df_sales['date'].dt.month == 1)]
df_sales_2020.shape
Before running the hypothesis test, we establish the baseline conversion rate and the desired increase in conversion rate (the minimum detectable effect).
df_ab_data = df_sales_2020[['variant', 'purchase']].copy()  # copy so the renaming below doesn't modify a view
df_ab_data.shape
# Renaming the columns
df_ab_data.columns = ['group', 'conversion']
df_ab_data.head()
# Pivot table to summarize the data
df_ab_summary = df_ab_data.pivot_table(values='conversion', index='group', aggfunc='sum').astype('int')
df_ab_summary
# Summary with total
df_ab_summary['total'] = df_ab_data.pivot_table(values='conversion', index='group', aggfunc='count')
# Summary with rate
df_ab_summary['rate'] = df_ab_data.pivot_table(values='conversion', index='group', aggfunc='mean')
# Visualize
df_ab_summary
# Values for Variant A
conversion_A = df_ab_summary.loc['A', 'conversion']
total_A = df_ab_summary.loc['A', 'total']
rate_A = df_ab_summary.loc['A', 'rate']
print('Variant A')
print('Conversion:', conversion_A)
print('Total:', total_A)
print('Rate:', rate_A)
# Values for Variant B
conversion_B = df_ab_summary.loc['B', 'conversion']
total_B = df_ab_summary.loc['B', 'total']
rate_B = df_ab_summary.loc['B', 'rate']
print('Variant B')
print('Conversion:', conversion_B)
print('Total:', total_B)
print('Rate:', rate_B)
In the context of a binomial distribution the baseline conversion rate is equal to $p$, where $p$ is the probability of success.
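Stated in symbols (standard binomial facts, for reference): with $n$ users and success probability $p$, the number of conversions $X$ satisfies
$$ X \sim B(n, p), \qquad E(X) = np, \qquad Var(X) = np(1-p) $$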
# Baseline conversion rate
baseline_conversion = rate_A
baseline_conversion
# Minimum detectable effect
minimum_effect = rate_B - rate_A
minimum_effect
Running the hypothesis test and registering the success rate for each group.
Statistical power (or sensitivity) equals $1 - \beta$.
For most analyses a statistical power of 80% is used. This is the probability of rejecting the null hypothesis when it is in fact false.
Parameters being used to run the test:
1- Alpha (Significance Level) $\alpha$: usually 5%; probability of rejecting the null hypothesis when it's true.
2- Beta $\beta$: Probability of accepting the null hypothesis when in reality it's false.
# Test parameters
alpha = 0.05
beta = 0.2
# Sample size
n = df_sales_2020.shape[0]
We can assume the distribution of the control group is binomial, since the data are a series of Bernoulli trials in which each trial has only two possible outcomes.
For the test I'll be using the binom() function in SciPy: https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.binom.html
# Binomial Test
binom_test = scs.binom(n, p=baseline_conversion)
# Binomial Test with minimum detectable effect
binom_test_mde = scs.binom(n, p=baseline_conversion+minimum_effect)
help(binom_test)
# Visualizing the probability mass function (pmf)
fig, ax = plt.subplots(figsize=(14,6))
# Defining values for the x-axis
x = np.linspace(0,int(n), int(n)+1)
# Plotting the results with a probability mass function
ax.bar(x, binom_test.pmf(x), label='Variant A (Control) PMF')
ax.bar(x, binom_test_mde.pmf(x), label='Variant B (Test) PMF')
ax.legend()
plt.show()
# Distribution plot for group A (control)
fig, ax = plt.subplots(figsize=(14,6))
# Test A
x = np.linspace(conversion_A-49, conversion_A+50, 100)
y = scs.binom(total_A, rate_A).pmf(x)
# Plots
ax.bar(x, y, alpha=0.5, label='Variant A (Control) PMF')
ax.axvline(x=rate_B*total_A, c='magenta', alpha=0.75, linestyle='--', label='Expected Conversions @ Variant B Conversion Rate')
ax.legend()
# Labels
plt.xlabel('Conversion')
plt.ylabel('Probability')
plt.show()
# Distribution plots for both groups
fig, ax = plt.subplots(figsize=(14,6))
# Variant A
xA = np.linspace(conversion_A-49, conversion_A+50, 100)
yA = scs.binom(total_A, rate_A).pmf(xA)
ax.bar(xA, yA, alpha=0.5, label='Variant A (Control) PMF')
# Variant B
xB = np.linspace(conversion_B-49, conversion_B+50, 100)
yB = scs.binom(total_B, rate_B).pmf(xB)
ax.bar(xB, yB, alpha=0.5, label='Variant B (Test) PMF')
# Labels
plt.xlabel('Conversion')
plt.ylabel('Probability')
ax.legend(loc='best')
plt.show()
It's evident the test group converted more users than the control group. Yet it's equally evident that the probability associated with the results from the test group is lower than that of the control group.
Therefore, to properly compare the variants we need to focus on the conversion rate, so that we are comparing equivalent terms. For that, the data need to be standardized so we can compare the probability of success, p, for each group.
Note on the Bernoulli Distribution and the Central Limit Theorem
The Bernoulli distribution for the control group is given as
$$ X \sim \text{Bernoulli}(p) $$
where $p$ is the probability of conversion for the control group. By the properties of the Bernoulli distribution, the mean and variance are as follows:
$$ E(X) = p $$
$$ Var(X) = p(1-p) $$
According to the Central Limit Theorem, by calculating many sample means we can approximate the true mean of the population, $\mu$, from which the data for the control group were obtained. The distribution of the sample means, $\hat{p}$, will be normal around the true mean, with a standard deviation equal to the standard error of the mean.
The equation for this standard error is given as:
$$ SE = \sqrt{\frac{p(1-p)}{n}} $$
Therefore we can represent both groups with a Gaussian distribution with the following properties:
$$ \hat{p} \sim \mathcal{N}\left(p,\ \sqrt{\frac{p(1-p)}{n}}\right) $$
The same can be done for the test group (Variant B). Thus we'll have two normal distributions, for $p_A$ and $p_B$. The conversion from Bernoulli to Gaussian simplifies the rest of the analysis.
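As a quick sanity check of this argument, here is a minimal simulation sketch (the values p = 0.15 and n = 1000 are illustrative assumptions, not taken from this dataset): the standard deviation of many Bernoulli sample means should land close to $\sqrt{p(1-p)/n}$.
# CLT sanity check with illustrative values (not from the dataset)
rng = np.random.default_rng(42)
p, n_obs, n_samples = 0.15, 1000, 5000
# Draw n_samples experiments of n_obs Bernoulli(p) trials each, then take the sample means
sample_means = rng.binomial(n=1, p=p, size=(n_samples, n_obs)).mean(axis=1)
print('Empirical std of sample means:', sample_means.std())
print('Theoretical SE sqrt(p(1-p)/n):', np.sqrt(p * (1 - p) / n_obs))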
# Probabilities (= conversion rates)
p_A = rate_A
p_B = rate_B
# Sample sizes (total number of users in each group)
N_A = total_A
N_B = total_B
# Standard error for the mean in both groups
SE_A = np.sqrt(p_A * (1 - p_A)) / np.sqrt(N_A)
SE_B = np.sqrt(p_B * (1 - p_B)) / np.sqrt(N_B)
# Print
print('Variant A Standard Error:', SE_A)
print('Variant B Standard Error:', SE_B)
# Plotting the normal distribution for the null and alternative hypotheses
fig, ax = plt.subplots(figsize=(14,6))
# Data for the random variable
x = np.linspace(p_A-4*SE_A, p_B+4*SE_B, 100)
# Distribution of A
yA = scs.norm(p_A, SE_A).pdf(x)
ax.plot(x, yA, alpha=0.5, linestyle='-', label='Variant A (Control)')
# Distribution of B
yB = scs.norm(p_B, SE_B).pdf(x)
ax.plot(x, yB, alpha=0.5, linestyle='-', label='Variant B (Test)')
# Labels
ax.legend()
plt.xlabel('Conversion Rate')
plt.ylabel('PDF - Probability Density Function')
plt.show()
The continuous lines represent the normally distributed conversion rates for each group. The distance between the peaks of the two curves equals the mean difference between the control and test groups.
Variance of the Sum
The null hypothesis states that the probability difference between both groups is zero, so the mean of this normal distribution will be zero. The second property needed to specify the normal distribution is its standard deviation (or, equivalently, its variance).
A basic property of variance is that the variance of the sum of two independent random variables is the sum of their variances:
$$ Var(X + Y) = Var(X) + Var(Y) $$
$$ Var(X - Y) = Var(X) + Var(Y) $$
This means that the null hypothesis and the alternative hypothesis will have the same variance, which is the sum of the variances of the control and test groups:
$$ Var(\hat{d}) = Var(\hat{p}_B) + Var(\hat{p}_A) $$
The standard deviation can then be calculated as:
$$ SE = \sqrt{\sigma_A^2 + \sigma_B^2} $$
Writing this in terms of the standard deviation of the Bernoulli distribution, we have:
$$ SE = \sqrt{\frac{p_A(1-p_A)}{n_A} + \frac{p_B(1-p_B)}{n_B}} $$
and we obtain the Satterthwaite approximation for the pooled standard error. If instead we calculate the combined probability and use it to compute the standard deviation for both groups, we obtain:
$$ SE_{pooled} = \sqrt{\hat{p}(1-\hat{p})\left(\frac{1}{n_A} + \frac{1}{n_B}\right)} $$
where:
$$ \hat{p} = \frac{x_A + x_B}{n_A + n_B} $$
Both equations for the combined standard error give very similar results.
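As a concrete check, both versions of the combined standard error can be computed from the quantities defined earlier in this notebook (a sketch assuming p_A, p_B, N_A and N_B are still in scope):
# Satterthwaite (unpooled) standard error of the difference
se_unpooled = np.sqrt(p_A * (1 - p_A) / N_A + p_B * (1 - p_B) / N_B)
# Pooled standard error using the combined probability
p_hat = (p_A * N_A + p_B * N_B) / (N_A + N_B)
se_pooled = np.sqrt(p_hat * (1 - p_hat) * (1 / N_A + 1 / N_B))
print('Unpooled SE:', se_unpooled)
print('Pooled SE:', se_pooled)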
Verifying the Null Hypothesis and the Alternative Hypothesis
The null hypothesis states that the page design change made for the test group would not change the conversion rate.
The alternative hypothesis is the opposing stance, affirming that the page design change made for the test group would change the conversion rate.
The null hypothesis will be a normal distribution with mean zero and standard deviation equal to the pooled standard error.
The alternative hypothesis has the same standard deviation as the null hypothesis, but its mean will be the difference in conversion rates, d_hat. This makes sense because we can calculate the difference in conversion rates directly from the data, while the normal distribution represents the possible values the experiment could have obtained.
Formula for the z-score:
$$ z = \frac{(\bar{x}_1-\bar{x}_2)-D_0}{\sqrt{\sigma_1^{2}/n_1+\sigma_2^{2}/n_2}} $$
$$ z = \frac{(\hat{p}_1-\hat{p}_2)-0}{\sqrt{\hat{p}\hat{q}\left(\frac{1}{n_1}+\frac{1}{n_2}\right)}} $$
# Calculating the pooled probability
pooled_prob = (p_A*N_A + p_B*N_B) / (N_A+N_B)
pooled_prob
# Calculating z
z = (p_B-p_A) / (pooled_prob*(1-pooled_prob) * (1/N_A + 1/N_B))**0.5
z
# Verifying if z exceeds the critical value
# The critical z value for a one-tailed test at alpha = 0.05 is approximately 1.64
z > 1.64
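Equivalently, the one-sided p-value implied by this z-score can be read from the standard normal survival function (a quick check using the z and alpha defined above):
# One-sided p-value for the observed z-score
p_value_z = scs.norm.sf(z)
print('One-sided p-value:', p_value_z)
print('Reject H0 at the 5% level:', p_value_z < alpha)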
Creating auxiliary functions to plot the probability distributions.
# Function which returns the pooled probability for two samples
def pooled_prob_func(N_A, N_B, X_A, X_B):
    return (X_A + X_B) / (N_A + N_B)
# Function which returns the pooled standard error for two samples
def pooled_standard_error_func(N_A, N_B, X_A, X_B):
    p_hat = pooled_prob_func(N_A, N_B, X_A, X_B)
    SE = np.sqrt(p_hat * (1 - p_hat) * (1 / N_A + 1 / N_B))
    return SE
# Return the z value for a given significance level
def z_val(sig_level = 0.05, two_tailed = True):
    # Standard normal distribution
    z_dist = scs.norm()
    # For a two-tailed test, split the significance level between the two tails
    if two_tailed:
        sig_level = sig_level / 2
    # Z value at the requested quantile
    z = z_dist.ppf(1 - sig_level)
    return z
# Calculate the confidence interval
def confidence_interval(sample_mean = 0, sample_std = 1, sample_size = 1, sig_level = 0.05):
    # Calculate the z value
    z = z_val(sig_level)
    # Left and right limits
    left = sample_mean - z * sample_std / np.sqrt(sample_size)
    right = sample_mean + z * sample_std / np.sqrt(sample_size)
    return (left, right)
# Function to calculate the two-tailed confidence interval
def plot_CI(ax,
            mu,
            s,
            sig_level = 0.05,
            color = 'grey'):
    # Calculate the confidence interval
    left, right = confidence_interval(sample_mean = mu, sample_std = s, sig_level = sig_level)
    # Include the interval in the plot
    ax.axvline(left, c = color, linestyle = '--', alpha = 0.5)
    ax.axvline(right, c = color, linestyle = '--', alpha = 0.5)
# Function to plot a normal distribution
def plot_norm_dist(ax,
                   mu,
                   std,
                   with_CI = False,
                   sig_level = 0.05,
                   label = None):
    # Generate values for a random variable x
    x = np.linspace(mu - 12 * std, mu + 12 * std, 1000)
    # Create a normal distribution
    y = scs.norm(mu, std).pdf(x)
    # Plot
    ax.plot(x, y, label = label)
    # If a confidence interval is requested, include it on the plot
    if with_CI:
        plot_CI(ax, mu, std, sig_level = sig_level)
# Function to plot the distribution of the null hypothesis
# If there's no real change, the distribution of differences between the control and test groups will be normally distributed
def plot_H0(ax, stderr):
    plot_norm_dist(ax, 0, stderr, label = "H0 - Null Hypothesis")
    plot_CI(ax, mu = 0, s = stderr, sig_level = 0.05)
# Function to plot the distribution of the alternative hypothesis
# If there's a real change, the distribution of differences between the control and test groups will be normally distributed
# and centered around d_hat
def plot_H1(ax, stderr, d_hat):
    plot_norm_dist(ax, d_hat, stderr, label = "H1 - Alternative Hypothesis")
# Function which returns a distribution object depending on the type of group
def ab_dist(stderr, d_hat = 0, group_type = 'control'):
    # Verify the group type
    if group_type == 'control':
        sample_mean = 0
    elif group_type == 'test':
        sample_mean = d_hat
    # Create a normal distribution parameterized by the mean and standard deviation
    dist = scs.norm(sample_mean, stderr)
    return dist
# Function to fill between the upper and lower significance limit for the alternative hypothesis
def show_area(ax, d_hat, stderr, sig_level):
    # Confidence interval
    left, right = confidence_interval(sample_mean = 0, sample_std = stderr, sig_level = sig_level)
    # x values
    x = np.linspace(-12 * stderr, 12 * stderr, 1000)
    # H0 (pass group_type by keyword; the second positional argument is d_hat)
    null = ab_dist(stderr, group_type = 'control')
    # H1
    alternative = ab_dist(stderr, d_hat, 'test')
    # Fill the area between the upper significance limit and the distribution for the alternative hypothesis
    # This shaded area corresponds to the statistical power
    ax.fill_between(x, 0, alternative.pdf(x), color = 'green', alpha = 0.25, where = (x > right))
    ax.text(-3 * stderr, null.pdf(0), 'power = {0:.3f}'.format(1 - alternative.cdf(right)),
            fontsize = 12, ha = 'right', color = 'k')
# Function to return the p value
# Note: this approximates the p-value with the binomial pmf evaluated at the observed number of conversions
def p_val(N_A, N_B, p_A, p_B):
    return scs.binom(N_A, p_A).pmf(p_B * N_B)
# Function to plot the analysis of the A/B Test
def abplot_func(N_A,
                N_B,
                bcr,
                d_hat,
                sig_level = 0.05,
                show_p_value = False,
                show_legend = True,
                two_tailed = True):
    # Define the plot area
    fig, ax = plt.subplots(figsize = (14, 8))
    # Define the parameters to find the pooled standard error
    X_A = bcr * N_A
    X_B = (bcr + d_hat) * N_B
    stderr = pooled_standard_error_func(N_A, N_B, X_A, X_B)
    # Plot the distributions for the null and alternative hypotheses
    plot_H0(ax, stderr)
    plot_H1(ax, stderr, d_hat)
    # Define the extension of the plot area
    ax.set_xlim(-8 * stderr, 8 * stderr)
    # Fill the shaded power area
    if two_tailed == True:
        show_area(ax, d_hat, stderr, sig_level)
    else:
        # One-tailed: the two-tailed CI helper puts sig_level/2 in each tail, so pass twice the level
        show_area(ax, d_hat, stderr, sig_level * 2)
    # Show the p-value based on the distributions for the groups
    if show_p_value:
        null = ab_dist(stderr, group_type = 'control')
        p_value = p_val(N_A, N_B, bcr, bcr + d_hat)
        ax.text(3 * stderr, null.pdf(0), 'p-value = {0:.4f}'.format(p_value), fontsize = 14, ha = 'left')
    # Show legend
    if show_legend:
        plt.legend()
    plt.xlabel('d')
    plt.ylabel('PDF - Probability Density Functions')
    plt.show()
# Define the parameters and execute the function
n = N_A + N_B
baseline_conversion = p_A
d_hat = p_B - p_A
abplot_func(N_A, N_B, baseline_conversion, d_hat, show_p_value = True)
Visually the plot for the normalized null and alternative hypotheses is very similar to the other plots above. Since both curves have an identical shape we can simply compare the distance between the means of the distributions. The curve for the alternative hypothesis suggests that the test group has a greater conversion rate when compared to the control group. This plot can also be used to directly determine the statistical power.
The statistical power calculation has been added to the plot above; the value obtained is 0.998, well above the desired 0.8.
The shaded green area represents the statistical power, and the calculated value can also be shown on the plot. The gray dashed lines represent the confidence interval (95% for the plot above) for the null hypothesis. The statistical power is calculated by finding the area under the alternative hypothesis curve that lies outside the confidence interval of the null hypothesis.
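The same power value can also be computed directly, without the plot, by reusing the helper functions defined above (a sketch assuming p_A, p_B, N_A and N_B are still in scope):
# Direct power computation mirroring show_area()
X_A = p_A * N_A
X_B = p_B * N_B
stderr = pooled_standard_error_func(N_A, N_B, X_A, X_B)
# Upper limit of the 95% confidence interval under H0
left, right = confidence_interval(sample_mean = 0, sample_std = stderr, sig_level = 0.05)
# Power = area of the H1 distribution beyond that limit
power = 1 - scs.norm(p_B - p_A, stderr).cdf(right)
print('Statistical power:', power)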
After executing the experiment we obtain a conversion rate for both groups. If we calculate the difference between the conversion rates, we obtain the effect of the change in webpage design, i.e., not showing the user reviews and comments. The task is then to determine whether the finding results from the null or the alternative hypothesis.
The area under the curve of the alternative hypothesis equals 1. If the alternative design (removal of reviews) is truly superior, then the power is the probability of accepting the alternative hypothesis and rejecting the null hypothesis, and it equals the shaded green area (true positive). The remaining area under the alternative curve is the probability of not rejecting the null hypothesis and instead rejecting the alternative hypothesis (false negative). This is the beta of an A/B test.
If the null hypothesis is true and there is no difference between the control and test groups, then the significance level is the probability of rejecting the null hypothesis and accepting the alternative hypothesis (false positive). A false positive is when we erroneously conclude that the new design is better when in reality it isn't. This value is low because we want to limit this probability.
The 95% confidence interval used translates into a significance level of 0.05, which is a common choice.
Experiments are usually configured with a minimum desired power above 80%. If our new design is truly superior, we want the experiment to show that there is at least an 80% probability that this is the case. We know that increasing the sample size of each group reduces the combined variance of the hypotheses; this makes the distributions narrower and can increase the statistical power. Next I'll analyze how the sample size affects the findings.
The statistical significance of the findings has already been established. Yet I'm also interested in determining the minimum sample size needed for the experiment. This is useful to know since it's directly related to how quickly experiments can be completed and, consequently, how quickly one can generate insights. A quick power-versus-sample-size sketch follows below.
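Here's a small sketch of how power grows with the per-group sample size, holding the baseline rate and the minimum detectable effect fixed. It reuses the helper functions defined above; power_at_n and the grid of sizes are illustrative additions assuming equal group sizes.
# Power as a function of the per-group sample size (illustrative grid)
def power_at_n(n_per_group, bcr, d_hat, sig_level = 0.05):
    X_A = bcr * n_per_group
    X_B = (bcr + d_hat) * n_per_group
    stderr = pooled_standard_error_func(n_per_group, n_per_group, X_A, X_B)
    left, right = confidence_interval(sample_mean = 0, sample_std = stderr, sig_level = sig_level)
    return 1 - scs.norm(d_hat, stderr).cdf(right)

sizes = np.arange(100, 5001, 100)
powers = [power_at_n(size, baseline_conversion, minimum_effect) for size in sizes]
fig, ax = plt.subplots(figsize = (14, 6))
ax.plot(sizes, powers)
ax.axhline(0.8, linestyle = '--', c = 'grey', label = 'Target power = 0.8')
ax.legend()
plt.xlabel('Sample size per group')
plt.ylabel('Statistical power')
plt.show()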
# Function to include the z value on the plot
def zplot(area = 0.95, two_tailed = True, align_right = False):
    # Create the plot area
    fig = plt.figure(figsize = (12, 6))
    ax = fig.subplots()
    # Create a standard normal distribution
    norm = scs.norm()
    # Create the datapoints for the plot
    x = np.linspace(-5, 5, 1000)
    y = norm.pdf(x)
    ax.plot(x, y)
    # Fill the areas for two-tailed tests
    if two_tailed:
        left = norm.ppf(0.5 - area / 2)
        right = norm.ppf(0.5 + area / 2)
        ax.vlines(right, 0, norm.pdf(right), color = 'grey', linestyle = '--')
        ax.vlines(left, 0, norm.pdf(left), color = 'grey', linestyle = '--')
        ax.fill_between(x, 0, y, color = 'grey', alpha = 0.25, where = (x > left) & (x < right))
        plt.text(left, norm.pdf(left), "z = {0:.3f}".format(left),
                 fontsize = 12, rotation = 90, va = "bottom", ha = "right")
        plt.text(right, norm.pdf(right), "z = {0:.3f}".format(right),
                 fontsize = 12, rotation = 90, va = "bottom", ha = "left")
    # For one-tailed tests
    else:
        # Right alignment
        if align_right:
            left = norm.ppf(1 - area)
            ax.vlines(left, 0, norm.pdf(left), color = 'grey', linestyle = '--')
            ax.fill_between(x, 0, y, color = 'grey', alpha = 0.25, where = x > left)
            plt.text(left, norm.pdf(left), "z = {0:.3f}".format(left),
                     fontsize = 12, rotation = 90, va = "bottom", ha = "right")
        # Left alignment
        else:
            right = norm.ppf(area)
            ax.vlines(right, 0, norm.pdf(right), color = 'grey', linestyle = '--')
            ax.fill_between(x, 0, y, color = 'grey', alpha = 0.25, where = x < right)
            plt.text(right, norm.pdf(right), "z = {0:.3f}".format(right),
                     fontsize = 12, rotation = 90, va = "bottom", ha = "left")
    # Add the shaded-area label to the plot
    plt.text(0, 0.1, "Shaded Area = {0:.3f}".format(area), fontsize = 12, ha = 'center')
    # Labels
    plt.xlabel('z')
    plt.ylabel('PDF')
    plt.show()
# Print z value
print(z)
print(z_val(sig_level = 0.05, two_tailed = False))
print(z > z_val(sig_level = 0.05, two_tailed = False))
# Plot z
zplot(area = 0.95, two_tailed = False, align_right = False)
Equation to find the minimum sample size:
$$ n_A = k \cdot n_B $$
$$ n_B = \left(\frac{p_A(1-p_A)}{k}+p_B(1-p_B)\right) \left(\frac{Z_{1-\alpha} + Z_{1-\beta}}{p_A-p_B}\right)^{2} $$
$$ n = \frac{2\bar{p}(1-\bar{p})(Z_{1-\beta}+Z_{1-\alpha})^2}{(p_B-p_A)^2} $$
# Calculate values for z, alpha and beta
sig_level = 0.05
beta = 0.2
k = N_A/N_B
standard_norm = scs.norm(0, 1)
Z_beta = standard_norm.ppf(1-beta)
Z_alpha = standard_norm.ppf(1-sig_level)
print(Z_beta)
print(Z_alpha)
Now finding the minimum sample size.
# Function to find the minimum sample size per group
def calculate_min_sample_size(N_A,
                              N_B,
                              p_A,
                              p_B,
                              power = 0.8,
                              sig_level = 0.05,
                              two_sided = False):
    # Ratio of the group sizes (not used by the simplified equal-groups formula below)
    k = N_A / N_B
    # Normal distribution to determine z values
    standard_norm = scs.norm(0, 1)
    # Find the z value for the statistical power
    Z_beta = standard_norm.ppf(power)
    # Find z alpha
    if two_sided == True:
        Z_alpha = standard_norm.ppf(1 - sig_level / 2)
    else:
        Z_alpha = standard_norm.ppf(1 - sig_level)
    # Pooled probability (average of the two conversion rates)
    pooled_prob = (p_A + p_B) / 2
    # Minimum sample size per group (uses the function's own p_A and p_B rather than the global minimum_effect)
    min_N = (2 * pooled_prob * (1 - pooled_prob) * (Z_beta + Z_alpha)**2 / (p_B - p_A)**2)
    return min_N
# Calculate the minimum sample size with two_sided = True
min_N = calculate_min_sample_size(N_A, N_B, p_A, p_B, power = 0.8, sig_level = 0.05, two_sided = True)
min_N
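As an optional cross-check, and assuming the statsmodels package is available (it is not used elsewhere in this notebook), its power utilities give a comparable per-group sample size. Note that proportion_effectsize is based on the arcsine transform, so expect a value close to, but not identical to, min_N.
# Optional cross-check with statsmodels (assumed installed; not part of the original analysis)
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

effect_size = proportion_effectsize(p_A + minimum_effect, p_A)
n_check = NormalIndPower().solve_power(effect_size = effect_size,
                                       alpha = 0.05,
                                       power = 0.8,
                                       alternative = 'two-sided')
print('statsmodels per-group sample size:', n_check)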
# Plot the hypothesis test at the calculated minimum sample size
abplot_func(N_A = min_N,
N_B = min_N,
bcr = p_A,
d_hat = p_B - p_A,
sig_level = 0.05,
show_p_value = False,
show_legend = True)
Calculating the minimum sample size considering the baseline
This is done with the minimum sample size equation directly instead of the statistical power method. The idea is to check whether this simpler approach also achieves an acceptable statistical power.
baseline_conversion + minimum_effect
# Calculate the pooled probability
pooled_probability = (baseline_conversion + baseline_conversion + minimum_effect) / 2
pooled_probability
# Sum of z alpha and beta
Z_alpha + Z_beta
# Minimum sample size for the baseline
min_N = (2 * pooled_probability * (1 - pooled_probability) * (Z_beta + Z_alpha)**2 / minimum_effect**2)
min_N
Statistical power for the baseline
# Execute the function with min_N
abplot_func(N_A = min_N,
N_B = min_N,
bcr = p_A,
d_hat = p_B - p_A,
sig_level = 0.05,
show_p_value = False,
show_legend = True)
The calculated power for this sample size was around 0.80. Therefore, to affirm that the page layout change removing user reviews truly increased the conversion rate, a minimum of 1249 samples per group is needed.
After going through all planned analysis steps, we can conclude from the gathered A/B test samples that removing comments and reviews from the webpage (Variant B, the test) leads to an increased conversion rate, with a statistical power of 0.999.
We also observe that a safe conclusion could have been reached after only 1249 samples per group had been collected, which would have reduced the test time.