Two-way ANOVA is the natural extension of one-way ANOVA to experiments with two categorical factors (also called treatments, explanatory variables, or fixed effects). It answers three questions: does factor A affect the mean response, does factor B affect it, and does the effect of one factor depend on the level of the other (interaction)?
Let factor A have $$a$$ levels and factor B have $$b$$ levels, and let $$Y_{ijk}$$ denote the $$k$$-th replicate observation in cell $$(i,j)$$, with $$i=1,\dots,a$$, $$j=1,\dots,b$$, $$k=1,\dots,n_{ij}$$.
Define the following means:
Cell mean: $$\bar{Y}_{ij\cdot}=\frac{1}{n_{ij}}\sum_{k=1}^{n_{ij}}Y_{ijk}.$$
Marginal mean for factor A: $$\bar{Y}_{i\cdot\cdot}=\frac{1}{n_{i\cdot}}\sum_{j=1}^{b}\sum_{k=1}^{n_{ij}}Y_{ijk}, \quad n_{i\cdot}=\sum_{j=1}^{b}n_{ij}.$$
Marginal mean for factor B: $$\bar{Y}_{\cdot j\cdot}=\frac{1}{n_{\cdot j}}\sum_{i=1}^{a}\sum_{k=1}^{n_{ij}}Y_{ijk}, \quad n_{\cdot j}=\sum_{i=1}^{a}n_{ij}.$$
Grand mean: $$\bar{Y}_{\cdot\cdot\cdot}=\frac{1}{N}\sum_{i=1}^{a}\sum_{j=1}^{b}\sum_{k=1}^{n_{ij}}Y_{ijk}, \quad N=\sum_{i=1}^{a}\sum_{j=1}^{b}n_{ij}.$$
The standard fixed-effects model is:
$$ Y_{ijk}=\mu+\alpha_i+\beta_j+(\alpha\beta)_{ij}+\varepsilon_{ijk}, $$where $$\mu$$ is the overall mean, $$\alpha_i$$ is the main effect of level $$i$$ of factor A, $$\beta_j$$ is the main effect of level $$j$$ of factor B, $$(\alpha\beta)_{ij}$$ is the interaction effect, and $$\varepsilon_{ijk}\overset{\text{i.i.d.}}{\sim}N(0,\sigma^2)$$.
The parameters are not unique without constraints because, for example, adding a constant to every $$\alpha_i$$ and subtracting it from $$\mu$$ yields the same cell means.
A common choice is the sum-to-zero constraints:
$$ \sum_{i=1}^a \alpha_i = 0,\qquad \sum_{j=1}^b \beta_j = 0,\qquad \sum_{i=1}^a (\alpha\beta)_{ij}=0\ \forall j,\qquad \sum_{j=1}^b (\alpha\beta)_{ij}=0\ \forall i. $$These enforce that effects are deviations from overall/marginal means.
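These constraints can be realized constructively by double-centering a table of cell means. A minimal sketch (the $$2\times 3$$ table `mu` below is hypothetical illustrative data):

```python
import numpy as np

# Hypothetical 2x3 table of cell means mu_ij (rows = levels of A, cols = levels of B).
mu = np.array([[11., 14., 12.],
               [15., 20., 18.]])

grand = mu.mean()                                     # estimates mu
alpha = mu.mean(axis=1) - grand                       # row effects, sum to zero
beta = mu.mean(axis=0) - grand                        # column effects, sum to zero
inter = mu - grand - alpha[:, None] - beta[None, :]   # interaction terms

# The sum-to-zero constraints hold by construction:
assert np.isclose(alpha.sum(), 0)
assert np.isclose(beta.sum(), 0)
assert np.allclose(inter.sum(axis=0), 0)  # sum over i, for each j
assert np.allclose(inter.sum(axis=1), 0)  # sum over j, for each i

# Reconstruction: mu + alpha_i + beta_j + (alpha beta)_ij recovers every cell mean.
assert np.allclose(grand + alpha[:, None] + beta[None, :] + inter, mu)
```

Double-centering is exactly the "deviations from overall/marginal means" reading of the constraints.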
Taking expectations gives the cell mean implied by the model:
$$ \mu_{ij}:=\mathbb{E}[Y_{ijk}] = \mu+\alpha_i+\beta_j+(\alpha\beta)_{ij}. $$The interaction tests whether the effect of A depends on B.
$$ H_0^{(AB)}:\ (\alpha\beta)_{ij}=0\ \ \text{for all }i,j $$vs
$$ H_1^{(AB)}:\ \text{at least one }(\alpha\beta)_{ij}\neq 0. $$If interaction is significant, interpretation focuses on simple effects (A at each B, or B at each A), not just main effects.
Factor A main effect:
$$ H_0^{(A)}:\ \alpha_1=\cdots=\alpha_a=0 $$Factor B main effect:
$$ H_0^{(B)}:\ \beta_1=\cdots=\beta_b=0 $$A design is called balanced when
$$ n_{ij}=n\quad\text{for all }i,j. $$Then each cell has the same number of replicates, and the formulas simplify dramatically.
Let $$N=abn$$ denote the total number of observations in the balanced design.
ANOVA is based on decomposing total variability into orthogonal components.
Define the total sum of squares:
$$ SS_T=\sum_{i=1}^a\sum_{j=1}^b\sum_{k=1}^n\left(Y_{ijk}-\bar{Y}_{\cdot\cdot\cdot}\right)^2. $$This measures overall variation around the grand mean.
Error variation is variation inside each cell:
$$ SS_E=\sum_{i=1}^a\sum_{j=1}^b\sum_{k=1}^n\left(Y_{ijk}-\bar{Y}_{ij\cdot}\right)^2. $$Why this corresponds to error: Under the model, $$Y_{ijk}-\mathbb{E}[Y_{ijk}]=\varepsilon_{ijk}$$. The cell mean estimates the cell expectation, so deviations from the cell mean estimate pure noise.
For any observation, add and subtract the cell mean and grand mean:
$$ Y_{ijk}-\bar{Y}_{\cdot\cdot\cdot} = \left(Y_{ijk}-\bar{Y}_{ij\cdot}\right) + \left(\bar{Y}_{ij\cdot}-\bar{Y}_{\cdot\cdot\cdot}\right). $$Square and sum:
$$ \sum (Y_{ijk}-\bar{Y}_{\cdot\cdot\cdot})^2 = \sum (Y_{ijk}-\bar{Y}_{ij\cdot})^2 + \sum n(\bar{Y}_{ij\cdot}-\bar{Y}_{\cdot\cdot\cdot})^2 + 2\sum (Y_{ijk}-\bar{Y}_{ij\cdot})(\bar{Y}_{ij\cdot}-\bar{Y}_{\cdot\cdot\cdot}). $$The cross-term vanishes because for each fixed $$(i,j)$$,
$$ \sum_{k=1}^n (Y_{ijk}-\bar{Y}_{ij\cdot})=0. $$Therefore:
$$ SS_T=SS_E+SS_{\text{Cells}}, $$where
$$ SS_{\text{Cells}} = n\sum_{i=1}^a\sum_{j=1}^b(\bar{Y}_{ij\cdot}-\bar{Y}_{\cdot\cdot\cdot})^2. $$Now we further decompose $$SS_{\text{Cells}}$$ into A, B, and interaction.
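The partition $$SS_T=SS_E+SS_{\text{Cells}}$$ can be sanity-checked numerically; the sketch below uses arbitrary simulated data with illustrative sizes $$a=3$$, $$b=4$$, $$n=5$$:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, n = 3, 4, 5                      # illustrative balanced design
y = rng.normal(size=(a, b, n))         # y[i, j, k]

grand = y.mean()
cell = y.mean(axis=2)                  # cell means, shape (a, b)

SS_T = ((y - grand) ** 2).sum()
SS_E = ((y - cell[:, :, None]) ** 2).sum()
SS_cells = n * ((cell - grand) ** 2).sum()

# The cross-term vanishes, so the two pieces add up exactly:
assert np.isclose(SS_T, SS_E + SS_cells)
```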
Define the A marginal means $$\bar{Y}_{i\cdot\cdot}$$. Then:
$$ SS_A=bn\sum_{i=1}^a(\bar{Y}_{i\cdot\cdot}-\bar{Y}_{\cdot\cdot\cdot})^2. $$Why the coefficient $$bn$$? Each $$\bar{Y}_{i\cdot\cdot}$$ averages over $$b$$ cells and $$n$$ replicates per cell, so it represents $$bn$$ observations.
Similarly:
$$ SS_B=an\sum_{j=1}^b(\bar{Y}_{\cdot j\cdot}-\bar{Y}_{\cdot\cdot\cdot})^2. $$Interaction measures deviations of cell means from what would be predicted by adding main effects:
Define the “additive prediction” for cell mean:
$$ \widehat{\mu}^{\text{add}}_{ij} = \bar{Y}_{i\cdot\cdot}+\bar{Y}_{\cdot j\cdot}-\bar{Y}_{\cdot\cdot\cdot}. $$Then interaction SS is:
$$ SS_{AB} = n\sum_{i=1}^a\sum_{j=1}^b \left(\bar{Y}_{ij\cdot}-\bar{Y}_{i\cdot\cdot}-\bar{Y}_{\cdot j\cdot}+\bar{Y}_{\cdot\cdot\cdot}\right)^2. $$Balanced two-way ANOVA gives:
$$ SS_T = SS_A + SS_B + SS_{AB} + SS_E. $$This is the core ANOVA identity.
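A quick numerical check of the full identity, again on simulated balanced data (sizes chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, n = 3, 4, 5
y = rng.normal(size=(a, b, n))         # y[i, j, k]

grand = y.mean()
cell = y.mean(axis=2)                  # cell means (a, b)
row = y.mean(axis=(1, 2))              # A marginal means (a,)
col = y.mean(axis=(0, 2))              # B marginal means (b,)

SS_T = ((y - grand) ** 2).sum()
SS_E = ((y - cell[:, :, None]) ** 2).sum()
SS_A = b * n * ((row - grand) ** 2).sum()
SS_B = a * n * ((col - grand) ** 2).sum()
SS_AB = n * ((cell - row[:, None] - col[None, :] + grand) ** 2).sum()

# Core balanced two-way ANOVA identity:
assert np.isclose(SS_T, SS_A + SS_B + SS_AB + SS_E)
```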
Let $$N=abn$$.
Total df: $$df_T = N-1 = abn-1.$$
Error df: within each of $$ab$$ cells you lose 1 df estimating the cell mean: $$df_E = ab(n-1).$$
A df: $$df_A = a-1.$$
B df: $$df_B = b-1.$$
Interaction df: $$df_{AB}=(a-1)(b-1).$$
Check:
$$ df_A+df_B+df_{AB}+df_E = (a-1)+(b-1)+(a-1)(b-1)+ab(n-1) = abn-1 = df_T. $$Define mean squares:
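The df identity can also be confirmed mechanically over a grid of design sizes:

```python
# Brute-force check of (a-1)+(b-1)+(a-1)(b-1)+ab(n-1) = abn-1
# over a small grid of illustrative design sizes.
for a in range(2, 6):
    for b in range(2, 6):
        for n in range(2, 6):
            lhs = (a - 1) + (b - 1) + (a - 1) * (b - 1) + a * b * (n - 1)
            assert lhs == a * b * n - 1
```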
$$ MS_A=\frac{SS_A}{df_A},\quad MS_B=\frac{SS_B}{df_B},\quad MS_{AB}=\frac{SS_{AB}}{df_{AB}},\quad MS_E=\frac{SS_E}{df_E}. $$Because the $$\varepsilon_{ijk}$$ are i.i.d. normal, scaled sums of squares such as $$SS_E/\sigma^2$$ are chi-square random variables. In the balanced fixed-effects model the expected mean squares are:
$$ \mathbb{E}[MS_E]=\sigma^2,\quad \mathbb{E}[MS_A]=\sigma^2+\frac{bn}{a-1}\sum_{i}\alpha_i^2,\quad \mathbb{E}[MS_B]=\sigma^2+\frac{an}{b-1}\sum_{j}\beta_j^2,\quad \mathbb{E}[MS_{AB}]=\sigma^2+\frac{n}{(a-1)(b-1)}\sum_{i,j}(\alpha\beta)_{ij}^2. $$
For example, under $$H_0^{(A)}$$:
$$ \mathbb{E}[MS_A]=\sigma^2,\qquad \mathbb{E}[MS_E]=\sigma^2. $$Thus the ratio
$$ F_A=\frac{MS_A}{MS_E} $$has an F distribution under the null:
$$ F_A \sim F_{df_A,df_E}\quad\text{under }H_0^{(A)}. $$Similarly:
$$ F_B=\frac{MS_B}{MS_E}\sim F_{df_B,df_E},\qquad F_{AB}=\frac{MS_{AB}}{MS_E}\sim F_{df_{AB},df_E}. $$

| Source | SS | df | MS | F |
|---|---|---|---|---|
| Factor A | $$SS_A$$ | $$a-1$$ | $$MS_A$$ | $$F_A=MS_A/MS_E$$ |
| Factor B | $$SS_B$$ | $$b-1$$ | $$MS_B$$ | $$F_B=MS_B/MS_E$$ |
| Interaction AB | $$SS_{AB}$$ | $$(a-1)(b-1)$$ | $$MS_{AB}$$ | $$F_{AB}=MS_{AB}/MS_E$$ |
| Error | $$SS_E$$ | $$ab(n-1)$$ | $$MS_E$$ | — |
| Total | $$SS_T$$ | $$abn-1$$ | — | — |
Sometimes $$n=1$$ (one measurement in each cell). Then $$df_E=ab(n-1)=0$$: there are no within-cell replicates left to estimate pure error.
In that case, the model often used is the additive model (no interaction):
$$ Y_{ij} = \mu + \alpha_i + \beta_j + \varepsilon_{ij}, $$and error is the leftover variability after removing row/column effects:
$$ SS_E = SS_T - SS_A - SS_B. $$Degrees of freedom become $$df_A=a-1$$, $$df_B=b-1$$, and $$df_E=(a-1)(b-1)$$.
But note: here $$df_E$$ is mathematically identical to the interaction df, meaning interaction is confounded with error if it exists.
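A short sketch of the $$n=1$$ case (simulated data, illustrative sizes $$a=4$$, $$b=5$$) showing that the subtraction-defined $$SS_E$$ coincides with the interaction-style contrast:

```python
import numpy as np

rng = np.random.default_rng(2)
a, b = 4, 5
y = rng.normal(size=(a, b))            # one observation per cell (n = 1)

grand = y.mean()
row = y.mean(axis=1)
col = y.mean(axis=0)

SS_T = ((y - grand) ** 2).sum()
SS_A = b * ((row - grand) ** 2).sum()
SS_B = a * ((col - grand) ** 2).sum()
SS_E = SS_T - SS_A - SS_B              # residual after removing row/column effects

# The same quantity, written as the interaction contrast with n = 1:
SS_int = ((y - row[:, None] - col[None, :] + grand) ** 2).sum()
assert np.isclose(SS_E, SS_int)

df_E = (a - 1) * (b - 1)               # same as the interaction df
MS_E = SS_E / df_E
```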
When the $$n_{ij}$$ are not equal, the design is unbalanced and the sums of squares no longer decompose orthogonally.
Core concept: with unbalanced designs, main effects are not necessarily orthogonal to the interaction or to each other, so each SS depends on how you “adjust” for the other terms (the Type I/II/III distinction in software).
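A sketch of this order dependence using plain least squares (the cell counts and effect sizes below are made up for illustration): the sequential SS for B differs depending on whether A was entered first.

```python
import numpy as np

rng = np.random.default_rng(3)

# Unbalanced 2x2 layout: unequal cell counts make the A and B dummies correlated.
counts = {(0, 0): 5, (0, 1): 2, (1, 0): 3, (1, 1): 7}
rows = []
for (i, j), m in counts.items():
    for _ in range(m):
        rows.append((i, j, rng.normal(loc=1.0 * i + 2.0 * j)))
A = np.array([r[0] for r in rows], dtype=float)
B = np.array([r[1] for r in rows], dtype=float)
y = np.array([r[2] for r in rows])

def rss(X, y):
    """Residual sum of squares of the least-squares fit y ~ X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

one = np.ones_like(y)
X1 = np.column_stack([one])            # intercept only
XA = np.column_stack([one, A])         # intercept + A
XB = np.column_stack([one, B])         # intercept + B
XAB = np.column_stack([one, A, B])     # intercept + A + B

# Sequential (Type I) SS for B depends on the order of entry:
ss_B_after_A = rss(XA, y) - rss(XAB, y)
ss_B_alone = rss(X1, y) - rss(XB, y)
print(ss_B_after_A, ss_B_alone)        # generally different when unbalanced
```

In a balanced design the two numbers would coincide, because the dummy columns for A and B are orthogonal after centering.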
Two-way ANOVA is a special case of the linear model:
$$ \mathbf{Y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\varepsilon}, \qquad \boldsymbol{\varepsilon}\sim N(0,\sigma^2 I). $$This is the deep mathematical reason ANOVA works: it’s orthogonal projection + Gaussian quadratic forms.
Two-way ANOVA validity relies on: independent observations, normally distributed errors, and equal error variance across cells (homoscedasticity).
Diagnostics commonly used: residual-vs-fitted plots, normal Q–Q plots of residuals, and variance-homogeneity tests such as Levene's or Bartlett's.
Model with interaction: $$Y_{ijk}=\mu+\alpha_i+\beta_j+(\alpha\beta)_{ij}+\varepsilon_{ijk}.$$
Balanced SS decomposition: $$SS_T = SS_A + SS_B + SS_{AB} + SS_E.$$
Mean squares: $$MS = SS/df.$$
F tests: $$F_A=\frac{MS_A}{MS_E},\quad F_B=\frac{MS_B}{MS_E},\quad F_{AB}=\frac{MS_{AB}}{MS_E}.$$
Always test interaction first; if significant, interpret simple effects.
This section explains why ANOVA sums of squares decompose as they do, and why F-tests follow F-distributions.
Everything is a consequence of orthogonal projections in Euclidean space and quadratic forms of Gaussian vectors.
Two-way ANOVA is a special case of the linear model
$$ \mathbf{Y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\varepsilon}, \qquad \boldsymbol{\varepsilon} \sim N(\mathbf{0}, \sigma^2 \mathbf{I}_N), $$where $$\mathbf{Y}\in\mathbb{R}^N$$ stacks all observations, $$\mathbf{X}$$ is the design matrix of indicator columns, $$\boldsymbol{\beta}$$ collects $$\mu$$, the $$\alpha_i$$, the $$\beta_j$$, and the $$(\alpha\beta)_{ij}$$, and $$\boldsymbol{\varepsilon}$$ is the Gaussian error vector.
For two-way ANOVA with interaction, the columns of $$\mathbf{X}$$ span the grand-mean direction, the factor-A contrasts, the factor-B contrasts, and the interaction contrasts.
Formally, the column space decomposes as
$$ \mathcal{C}(\mathbf{X}) = \mathcal{S}_\mu \;\oplus\; \mathcal{S}_A \;\oplus\; \mathcal{S}_B \;\oplus\; \mathcal{S}_{AB}, $$where each subspace corresponds to one effect.
Balanced designs guarantee orthogonality of these subspaces.
Let $$\mathbf{P}_\mu,\mathbf{P}_A,\mathbf{P}_B,\mathbf{P}_{AB}$$ denote the orthogonal projections onto $$\mathcal{S}_\mu,\mathcal{S}_A,\mathcal{S}_B,\mathcal{S}_{AB}$$, and let $$\mathbf{P}_E$$ project onto the orthogonal complement of $$\mathcal{C}(\mathbf{X})$$ (the residual space).
Then:
$$ \mathbf{I} = \mathbf{P}_\mu + \mathbf{P}_A + \mathbf{P}_B + \mathbf{P}_{AB} + \mathbf{P}_E $$with mutually orthogonal projections.
Each sum of squares is a squared Euclidean norm of a projection:
Total SS: $$ SS_T = \|\mathbf{Y} - \bar{Y}\mathbf{1}\|^2 $$
Factor A: $$ SS_A = \|\mathbf{P}_A \mathbf{Y}\|^2 $$
Factor B: $$ SS_B = \|\mathbf{P}_B \mathbf{Y}\|^2 $$
Interaction: $$ SS_{AB} = \|\mathbf{P}_{AB} \mathbf{Y}\|^2 $$
Error: $$ SS_E = \|\mathbf{P}_E \mathbf{Y}\|^2 $$
Because the projections are orthogonal,
$$ SS_T = SS_A + SS_B + SS_{AB} + SS_E. $$This identity is pure Pythagoras in $\mathbb{R}^N$.
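For a balanced design the five projections can be written explicitly as Kronecker products of averaging and centering matrices (assuming observations are stacked with $k$ varying fastest, then $j$, then $i$); a small sketch with illustrative sizes:

```python
import numpy as np

a, b, n = 2, 3, 2

def J(m):
    return np.ones((m, m)) / m         # averaging matrix

def C(m):
    return np.eye(m) - J(m)            # centering matrix

def kron3(X, Y, Z):
    return np.kron(np.kron(X, Y), Z)

P_mu = kron3(J(a), J(b), J(n))
P_A = kron3(C(a), J(b), J(n))
P_B = kron3(J(a), C(b), J(n))
P_AB = kron3(C(a), C(b), J(n))
P_E = kron3(np.eye(a), np.eye(b), C(n))

Ps = [P_mu, P_A, P_B, P_AB, P_E]

# They are symmetric projections, mutually orthogonal, and sum to the identity:
for P in Ps:
    assert np.allclose(P @ P, P) and np.allclose(P, P.T)
for i in range(len(Ps)):
    for j in range(i + 1, len(Ps)):
        assert np.allclose(Ps[i] @ Ps[j], 0)
assert np.allclose(sum(Ps), np.eye(a * b * n))

# Ranks (= traces) match the ANOVA degrees of freedom:
ranks = [round(np.trace(P)) for P in Ps]
assert ranks == [1, a - 1, b - 1, (a - 1) * (b - 1), a * b * (n - 1)]
```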
For any orthogonal projection matrix $$\mathbf{P}$$: $$\mathbf{P}^2=\mathbf{P}$$, $$\mathbf{P}^\top=\mathbf{P}$$, and $$\operatorname{rank}(\mathbf{P})=\operatorname{tr}(\mathbf{P})$$.
The degrees of freedom are:
$$ df = \text{rank}(\mathbf{P}) = \dim(\text{corresponding subspace}). $$Thus $$df_\mu=1$$, $$df_A=a-1$$, $$df_B=b-1$$, $$df_{AB}=(a-1)(b-1)$$, and $$df_E=N-ab=ab(n-1)$$.
These are dimensions of orthogonal subspaces, not ad-hoc counts.
Let $$\boldsymbol{\varepsilon} \sim N(\mathbf{0}, \sigma^2 \mathbf{I})$$
and let $$\mathbf{P}$$ be a symmetric idempotent matrix (an orthogonal projection) of rank $$r$$.
Then the quadratic form
$$ \frac{1}{\sigma^2}\boldsymbol{\varepsilon}^\top \mathbf{P} \boldsymbol{\varepsilon} \sim \chi^2_r. $$This is the fundamental theorem behind ANOVA.
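A Monte Carlo sketch of this fact: project i.i.d. Gaussian noise through an arbitrary rank-$r$ orthogonal projection and compare the first two moments with $\chi^2_r$ (which has mean $r$ and variance $2r$).

```python
import numpy as np

rng = np.random.default_rng(4)
N, r = 6, 2

# An arbitrary rank-r orthogonal projection: built from r orthonormal columns
# of a random orthogonal matrix (any symmetric idempotent P of rank r works).
Q, _ = np.linalg.qr(rng.normal(size=(N, N)))
P = Q[:, :r] @ Q[:, :r].T
assert np.allclose(P @ P, P)

sigma = 1.5
eps = rng.normal(scale=sigma, size=(200_000, N))

# Quadratic form eps^T P eps / sigma^2 for each simulated noise vector:
q = np.einsum('si,ij,sj->s', eps, P, eps) / sigma**2

# Compare with chi-square(r): mean r = 2, variance 2r = 4.
print(q.mean(), q.var())
```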
Because:
$$ SS_E = \boldsymbol{\varepsilon}^\top \mathbf{P}_E \boldsymbol{\varepsilon}, \qquad \mathbf{P}_E \text{ has rank } df_E, $$we get:
$$ \frac{SS_E}{\sigma^2} \sim \chi^2_{df_E}. $$Similarly, under the null hypothesis for any effect (A, B, AB):
$$ \frac{SS_{\text{effect}}}{\sigma^2} \sim \chi^2_{df_{\text{effect}}}. $$Dividing by degrees of freedom gives:
$$ \mathbb{E}[MS_{\text{effect}}] = \sigma^2, \qquad \mathbb{E}[MS_E] = \sigma^2 $$under the corresponding null hypothesis.
For example, for factor A:
$$ F_A = \frac{MS_A}{MS_E} = \frac{(SS_A/df_A)}{(SS_E/df_E)}. $$Since $$SS_A/\sigma^2\sim\chi^2_{df_A}$$ under $$H_0^{(A)}$$, $$SS_E/\sigma^2\sim\chi^2_{df_E}$$, and the two are independent (they are quadratic forms in orthogonal projections of the same Gaussian vector),
we obtain:
$$ F_A \sim F_{df_A, df_E} \quad \text{under } H_0^{(A)}. $$Identical logic applies to $$F_B$$ and $$F_{AB}$$.
Balanced designs guarantee: the effect subspaces are mutually orthogonal, so the SS decomposition is unique and each effect has an unambiguous F test.
In unbalanced designs: the subspaces overlap, and the SS attributed to an effect depends on the order in which the other effects are adjusted for (Type I/II/III sums of squares).
This is a geometric, not computational, issue.
Two-way ANOVA is: an orthogonal decomposition of the data vector in $\mathbb{R}^N$, combined with the chi-square behavior of Gaussian quadratic forms, summarized as a table of F ratios.
Nothing in ANOVA is heuristic — every formula is a theorem in linear algebra and probability.
ANOVA works because the data vector is decomposed into orthogonal subspaces, and Gaussian noise projected onto these subspaces produces independent chi-square variables whose ratios follow F-distributions.
We use a balanced $2\times 2$ design with replication ($n=2$ per cell).
| | $B_1$ | $B_2$ |
|---|---|---|
| $A_1$ | $10,12$ | $20,22$ |
| $A_2$ | $20,22$ | $10,12$ |
Cell means: $$ \bar Y_{11\cdot}=11,\quad \bar Y_{12\cdot}=21,\quad \bar Y_{21\cdot}=21,\quad \bar Y_{22\cdot}=11 $$
Row (A) means: $$ \bar Y_{1\cdot\cdot}=\frac{11+21}{2}=16,\qquad \bar Y_{2\cdot\cdot}=\frac{21+11}{2}=16 $$
Column (B) means: $$ \bar Y_{\cdot1\cdot}=\frac{11+21}{2}=16,\qquad \bar Y_{\cdot2\cdot}=\frac{21+11}{2}=16 $$
Grand mean: $$ \bar Y_{\cdot\cdot\cdot}=\frac{16+16}{2}=16 $$
Interaction: $$ H_0^{(AB)}:\ (\alpha\beta)_{ij}=0\ \forall i,j \qquad\text{vs}\qquad H_1^{(AB)}:\ \exists(i,j)\text{ with }(\alpha\beta)_{ij}\ne 0 $$
Main effects: $$ H_0^{(A)}:\alpha_1=\alpha_2=0,\qquad H_0^{(B)}:\beta_1=\beta_2=0 $$
Compute per cell (all four cells are identical up to relabeling): each cell's two values lie $\pm 1$ around the cell mean, e.g. for cell $(1,1)$: $(10-11)^2+(12-11)^2=1+1=2$.
Thus: $$ SS_E = 2+2+2+2 = 8 $$
Balanced formula: $$ SS_A = bn\sum_{i=1}^2(\bar Y_{i\cdot\cdot}-\bar Y_{\cdot\cdot\cdot})^2 $$ Here $bn=2\cdot 2=4$ and both row means equal the grand mean ($16$), so: $$ SS_A = 4\left[(16-16)^2+(16-16)^2\right]=0 $$
Here $an=2\cdot 2=4$ and both column means equal $16$, so: $$ SS_B = 0 $$
Balanced formula: $$ SS_{AB} = n\sum_{i=1}^2\sum_{j=1}^2 \left(\bar Y_{ij\cdot}-\bar Y_{i\cdot\cdot}-\bar Y_{\cdot j\cdot}+\bar Y_{\cdot\cdot\cdot}\right)^2 $$
Since $\bar Y_{i\cdot\cdot}=\bar Y_{\cdot j\cdot}=\bar Y_{\cdot\cdot\cdot}=16$, the bracket simplifies to: $$ \bar Y_{ij\cdot}-16 $$
So: $$ SS_{AB} = 2\left[(11-16)^2+(21-16)^2+(21-16)^2+(11-16)^2\right] $$
$$ = 2\left[25+25+25+25\right] = 2\cdot 100 = 200 $$The two-way ANOVA decomposition is: $$ SS_T = SS_A + SS_B + SS_{AB} + SS_E $$ Thus: $$ SS_T = 0+0+200+8 = 208 $$
Total: $$ df_T=N-1=7 $$ Effects: $$ df_A=1,\quad df_B=1,\quad df_{AB}=1,\quad df_E=ab(n-1)=4 $$
(And $1+1+1+4=7$ checks out.)
Mean squares: $$ MS_A=0,\quad MS_B=0,\quad MS_{AB}=\frac{200}{1}=200,\quad MS_E=\frac{8}{4}=2 $$
F tests: $$ F_A=\frac{MS_A}{MS_E}=0,\qquad F_B=\frac{MS_B}{MS_E}=0,\qquad F_{AB}=\frac{MS_{AB}}{MS_E}=\frac{200}{2}=100 $$
| Source | SS | df | MS | F |
|---|---|---|---|---|
| A | $0$ | $1$ | $0$ | $0$ |
| B | $0$ | $1$ | $0$ | $0$ |
| A×B | $200$ | $1$ | $200$ | $100$ |
| Error | $8$ | $4$ | $2$ | — |
| Total | $208$ | $7$ | — | — |
This is the classic cross-over interaction: the effect of $A$ depends entirely on the level of $B$ (and vice versa).
import numpy as np
import pandas as pd
from scipy.stats import f
def two_way_anova_with_interaction(data, alpha=0.05):
    """
    Two-Way ANOVA WITH interaction for a balanced design with replication.

    Parameters
    ----------
    data : dict
        Nested dict of the form:
            data[A_level][B_level] = list (replicates in that cell)
        Example:
            {
              "A1": {"B1": [..], "B2": [..]},
              "A2": {"B1": [..], "B2": [..]}
            }
        Requirements:
          - Same B levels for every A level
          - Same number of replicates n in every cell (balanced)
          - Replication required: n >= 2
    alpha : float
        Significance level (for critical values)

    Returns
    -------
    dict with:
      - anova_table (pandas DataFrame)
      - means (grand, row, col, cell)
      - ss (SS_A, SS_B, SS_AB, SS_E, SS_T)
      - dfs, ms, F, p-values, critical values
    """
    # ---------- Parse levels ----------
    A_levels = list(data.keys())
    if len(A_levels) < 2:
        raise ValueError("Need at least 2 levels of factor A.")
    B_levels = list(data[A_levels[0]].keys())
    if len(B_levels) < 2:
        raise ValueError("Need at least 2 levels of factor B.")

    # ---------- Validate structure and balance ----------
    a = len(A_levels)
    b = len(B_levels)

    # Check all A levels contain same B levels
    for alev in A_levels:
        if set(data[alev].keys()) != set(B_levels):
            raise ValueError("All A levels must contain the same set of B levels.")

    # Check balanced replication n per cell
    n_list = []
    for alev in A_levels:
        for blev in B_levels:
            cell = data[alev][blev]
            if not isinstance(cell, (list, tuple, np.ndarray)) or len(cell) == 0:
                raise ValueError(f"Cell ({alev}, {blev}) must be a non-empty list of replicates.")
            n_list.append(len(cell))
    if len(set(n_list)) != 1:
        raise ValueError("Balanced design required: all cells must have the same number of replicates.")
    n = n_list[0]
    if n < 2:
        raise ValueError("Replication required: each cell must have n >= 2 for SS_E and interaction testing.")
    N = a * b * n

    # ---------- Build arrays for easier computation ----------
    # y[i, j, k]
    y = np.zeros((a, b, n), dtype=float)
    for i, alev in enumerate(A_levels):
        for j, blev in enumerate(B_levels):
            y[i, j, :] = np.array(data[alev][blev], dtype=float)

    # ---------- Means ----------
    grand_mean = y.mean()
    cell_means = y.mean(axis=2)       # shape (a, b)
    row_means = y.mean(axis=(1, 2))   # shape (a,)
    col_means = y.mean(axis=(0, 2))   # shape (b,)

    # ---------- Sums of Squares ----------
    # Total
    SS_T = np.sum((y - grand_mean) ** 2)
    # Error (within-cell)
    SS_E = np.sum((y - cell_means[:, :, None]) ** 2)
    # A
    SS_A = b * n * np.sum((row_means - grand_mean) ** 2)
    # B
    SS_B = a * n * np.sum((col_means - grand_mean) ** 2)
    # Interaction
    SS_AB = n * np.sum((cell_means - row_means[:, None] - col_means[None, :] + grand_mean) ** 2)

    # Check decomposition (numerical tolerance):
    # SS_T should equal SS_A + SS_B + SS_AB + SS_E (balanced with interaction).
    # We'll not raise an error; we return the discrepancy.
    discrepancy = SS_T - (SS_A + SS_B + SS_AB + SS_E)

    # ---------- Degrees of Freedom ----------
    df_A = a - 1
    df_B = b - 1
    df_AB = (a - 1) * (b - 1)
    df_E = a * b * (n - 1)
    df_T = N - 1

    # ---------- Mean Squares ----------
    MS_A = SS_A / df_A
    MS_B = SS_B / df_B
    MS_AB = SS_AB / df_AB
    MS_E = SS_E / df_E

    # ---------- F statistics ----------
    F_A = MS_A / MS_E
    F_B = MS_B / MS_E
    F_AB = MS_AB / MS_E

    # ---------- p-values ----------
    p_A = 1 - f.cdf(F_A, df_A, df_E)
    p_B = 1 - f.cdf(F_B, df_B, df_E)
    p_AB = 1 - f.cdf(F_AB, df_AB, df_E)

    # ---------- Critical values (right tail) ----------
    Fcrit_A = f.ppf(1 - alpha, df_A, df_E)
    Fcrit_B = f.ppf(1 - alpha, df_B, df_E)
    Fcrit_AB = f.ppf(1 - alpha, df_AB, df_E)

    # ---------- ANOVA table (use np.nan instead of "") ----------
    anova_df = pd.DataFrame(
        [
            ["Factor A", SS_A, df_A, MS_A, F_A, p_A, Fcrit_A],
            ["Factor B", SS_B, df_B, MS_B, F_B, p_B, Fcrit_B],
            ["A×B", SS_AB, df_AB, MS_AB, F_AB, p_AB, Fcrit_AB],
            ["Error", SS_E, df_E, MS_E, np.nan, np.nan, np.nan],
            ["Total", SS_T, df_T, np.nan, np.nan, np.nan, np.nan],
        ],
        columns=["Source", "SS", "df", "MS", "F", "p-value", "F_crit"],
    )

    # Nice packaging
    return {
        "anova_table": anova_df,
        "means": {
            "grand_mean": grand_mean,
            "row_means": dict(zip(A_levels, row_means)),
            "col_means": dict(zip(B_levels, col_means)),
            "cell_means": {
                (A_levels[i], B_levels[j]): cell_means[i, j]
                for i in range(a) for j in range(b)
            },
        },
        "ss": {"SS_A": SS_A, "SS_B": SS_B, "SS_AB": SS_AB, "SS_E": SS_E, "SS_T": SS_T},
        "dfs": {"df_A": df_A, "df_B": df_B, "df_AB": df_AB, "df_E": df_E, "df_T": df_T},
        "ms": {"MS_A": MS_A, "MS_B": MS_B, "MS_AB": MS_AB, "MS_E": MS_E},
        "F": {"F_A": F_A, "F_B": F_B, "F_AB": F_AB},
        "p_values": {"p_A": p_A, "p_B": p_B, "p_AB": p_AB},
        "F_crit": {"Fcrit_A": Fcrit_A, "Fcrit_B": Fcrit_B, "Fcrit_AB": Fcrit_AB},
        "decomposition_discrepancy": discrepancy,
    }
# Input (same as before)
data = {
"A1": {"B1": [10, 12], "B2": [20, 22]},
"A2": {"B1": [20, 22], "B2": [10, 12]},
}
alpha = 0.05
res = two_way_anova_with_interaction(data, alpha=alpha)
anova_df = res["anova_table"].copy()
display(
anova_df.style
.format(
{
"SS": "{:.4f}",
"MS": "{:.4f}",
"F": "{:.4f}",
"p-value": "{:.6g}",
"F_crit": "{:.4f}",
},
na_rep="" # show NaNs as blank
)
)
print("Decomposition discrepancy SS_T - (SS_A+SS_B+SS_AB+SS_E) =",
float(res["decomposition_discrepancy"]))
| | Source | SS | df | MS | F | p-value | F_crit |
|---|---|---|---|---|---|---|---|
| 0 | Factor A | 0.0000 | 1 | 0.0000 | 0.0000 | 1 | 7.7086 |
| 1 | Factor B | 0.0000 | 1 | 0.0000 | 0.0000 | 1 | 7.7086 |
| 2 | A×B | 200.0000 | 1 | 200.0000 | 100.0000 | 0.000562004 | 7.7086 |
| 3 | Error | 8.0000 | 4 | 2.0000 | | | |
| 4 | Total | 208.0000 | 7 | | | | |
Decomposition discrepancy SS_T - (SS_A+SS_B+SS_AB+SS_E) = 0.0