Probabilistic Graphical Models

Represent joint distributions with graphs, factorize for sanity, infer with structure.

"Nodes whisper truths, edges carry clues; factorize the world, choose your views."

🧩

Representation

Graphs capture dependencies so we don’t have to tabulate all $2^8$ configurations of $P(X_1,\dots,X_8)$. Factorization reduces parameter counts and boosts interpretability.

Full joint

$P(X_1,\dots,X_8)$ has $2^8 = 256$ configurations if all variables are binary, so the full table needs $2^8 - 1 = 255$ free parameters.

Factorization idea

Use a DAG with parents $\text{pa}(X_i)$, then $P(\mathbf{X})=\prod_i P(X_i \mid \text{pa}(X_i))$.
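
To make this concrete, here is a minimal Python sketch (not from the original page; the three-node chain $A \to B \to C$ and its CPT numbers are assumptions for illustration). The joint is just the product of local conditionals, and with binary variables it takes $1 + 2 + 2 = 5$ parameters instead of the $2^3 - 1 = 7$ a full table would need.

```python
# Chain A -> B -> C, all binary; CPT values are illustrative assumptions.
P_A = {0: 0.6, 1: 0.4}                        # P(A)
P_B_given_A = {0: {0: 0.7, 1: 0.3},           # P(B | A=0)
               1: {0: 0.2, 1: 0.8}}           # P(B | A=1)
P_C_given_B = {0: {0: 0.9, 1: 0.1},
               1: {0: 0.4, 1: 0.6}}

def joint(a, b, c):
    """P(A=a, B=b, C=c) = P(A) P(B|A) P(C|B), the chain-rule factorization."""
    return P_A[a] * P_B_given_A[a][b] * P_C_given_B[b][c]

# Sanity check: the factorized joint still sums to 1 over all 2^3 worlds.
total = sum(joint(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1))
assert abs(total - 1.0) < 1e-12
print(joint(1, 0, 1))  # 0.4 * 0.2 * 0.1 = 0.008
```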

🔗

Conditional independence: X ⟂ Y | Z

If $X$ and $Y$ are conditionally independent given $Z$, factor as $P(X,Y,Z)=P(X|Z)P(Y|Z)P(Z)$.

Mini-graphic (text)

Z → X, Z → Y (a common-cause fork: the arrows point out of $Z$, the reverse of a v-structure).

Edges carry dependence; conditioning on $Z$ shields $X$ from $Y$.

Factorized joint

$$P(X,Y,Z) = P(X|Z)P(Y|Z)P(Z)$$
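
A quick numeric check (CPT values are assumed for illustration): build the joint from this factorization, and $P(X,Y \mid Z{=}z)$ splits into $P(X \mid Z{=}z)\,P(Y \mid Z{=}z)$ for every $z$, which is exactly $X \perp Y \mid Z$.

```python
# Joint built from the fork factorization; the numbers are made up.
P_Z = {0: 0.5, 1: 0.5}
P_X_given_Z = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.3, 1: 0.7}}
P_Y_given_Z = {0: {0: 0.6, 1: 0.4}, 1: {0: 0.1, 1: 0.9}}

def joint(x, y, z):
    return P_X_given_Z[z][x] * P_Y_given_Z[z][y] * P_Z[z]

for z in (0, 1):
    pz = sum(joint(x, y, z) for x in (0, 1) for y in (0, 1))  # P(Z=z)
    for x in (0, 1):
        for y in (0, 1):
            p_xy = joint(x, y, z) / pz                        # P(X,Y | Z=z)
            p_x = sum(joint(x, yy, z) for yy in (0, 1)) / pz  # P(X | Z=z)
            p_y = sum(joint(xx, y, z) for xx in (0, 1)) / pz  # P(Y | Z=z)
            assert abs(p_xy - p_x * p_y) < 1e-12
print("Conditional independence holds for every z, as the factorization guarantees.")
```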

🧱

Observed, hidden, and a factor

Observed $X_1,X_2$ depend on hidden $Z_1,Z_2$; a shared parent $F$ influences both $Z_1$ and $Z_2$, coupling them indirectly through their common cause.

Dependency list

  • $P(X_1 \mid Z_1)$
  • $P(X_2 \mid Z_2)$
  • $P(Z_1 \mid F)$
  • $P(Z_2 \mid F)$
  • $P(F)$ (the root prior)

Joint factorization

$$P(X_1,X_2,Z_1,Z_2,F) = P(X_1|Z_1)P(X_2|Z_2)P(Z_1|F)P(Z_2|F)P(F)$$

Pick CPTs or potentials to instantiate the model.
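
One possible instantiation, sketched in Python with made-up CPTs (shared across the symmetric nodes for brevity): enumerate the factorized joint and sum out the hidden $Z_1, Z_2, F$ to recover the observable distribution $P(X_1, X_2)$.

```python
from itertools import product

# Assumed CPTs; Z1 and Z2 share a table, as do X1 and X2.
P_F = {0: 0.7, 1: 0.3}
P_Z_given_F = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}
P_X_given_Z = {0: {0: 0.95, 1: 0.05}, 1: {0: 0.1, 1: 0.9}}

def joint(x1, x2, z1, z2, f):
    """P(X1,X2,Z1,Z2,F) as the product of the five local factors."""
    return (P_X_given_Z[z1][x1] * P_X_given_Z[z2][x2] *
            P_Z_given_F[f][z1] * P_Z_given_F[f][z2] * P_F[f])

def p_observed(x1, x2):
    """P(X1,X2): brute-force sum over every hidden configuration."""
    return sum(joint(x1, x2, z1, z2, f)
               for z1, z2, f in product((0, 1), repeat=3))

table = {(x1, x2): p_observed(x1, x2) for x1, x2 in product((0, 1), repeat=2)}
assert abs(sum(table.values()) - 1.0) < 1e-12
print(table)
```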

Diagram note: think rectangles for factors, circles for variables; arrows show parent-child dependencies.

🧭

Inference & estimation

Poem to remember

"Sum over worlds, or pass the mail,
Messages fly so sums don’t fail.
Priors whisper what we know,
Graphs make hidden patterns show."
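
The poem's two routes, sketched on the same assumed chain $A \to B \to C$ as above: enumerate every world, or push each sum inside the product and eliminate one variable at a time. The partial sums are the "mail", i.e. the messages of belief propagation, and both routes give the same marginal.

```python
# Assumed CPTs for the chain A -> B -> C (same toy model as earlier).
P_A = {0: 0.6, 1: 0.4}
P_B_given_A = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}
P_C_given_B = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.4, 1: 0.6}}

def p_c_enumerate(c):
    """'Sum over worlds': add up the joint over all hidden configurations."""
    return sum(P_A[a] * P_B_given_A[a][b] * P_C_given_B[b][c]
               for a in (0, 1) for b in (0, 1))

def p_c_eliminate(c):
    """'Pass the mail': eliminate A first, then B, reusing the partial sum."""
    msg_ab = {b: sum(P_A[a] * P_B_given_A[a][b] for a in (0, 1))
              for b in (0, 1)}                   # message from A into B
    return sum(msg_ab[b] * P_C_given_B[b][c] for b in (0, 1))

for c in (0, 1):
    assert abs(p_c_enumerate(c) - p_c_eliminate(c)) < 1e-12
print(p_c_eliminate(1))  # P(C=1), identical either way
```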

🧠 Quick Quiz

Which factorization matches the Z → X, Z → Y fork? Answer: $P(X,Y,Z) = P(Z)\,P(X \mid Z)\,P(Y \mid Z)$, the factorized joint from above.


🧪

Mini Lab: pick an inference tool

A few standard tools and how you’d use them on a PGM:

  • Enumeration / variable elimination: sum hidden variables out exactly, pushing each sum inside the product.
  • Belief propagation: pass messages along edges so partial sums are reused instead of recomputed.
  • Sampling (ancestral, Gibbs, MCMC): approximate the joint with draws; see the sketch below.
  • Variational inference: fit a tractable distribution to an intractable posterior.
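
As one worked example from the list (the sampling tool), here is a hedged sketch of ancestral sampling on the hidden-variable model from the previous section, using the same assumed CPTs: draw each variable from its CPT in topological order, then use empirical frequencies to approximate a query such as $P(X_1{=}1, X_2{=}1)$.

```python
import random

random.seed(0)  # reproducible draws
P_F = {0: 0.7, 1: 0.3}
P_Z_given_F = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}
P_X_given_Z = {0: {0: 0.95, 1: 0.05}, 1: {0: 0.1, 1: 0.9}}

def bernoulli(p_one):
    return 1 if random.random() < p_one else 0

def sample():
    """Ancestral sampling: parents first (F), then Z1, Z2, then X1, X2."""
    f = bernoulli(P_F[1])
    z1 = bernoulli(P_Z_given_F[f][1])
    z2 = bernoulli(P_Z_given_F[f][1])
    return bernoulli(P_X_given_Z[z1][1]), bernoulli(P_X_given_Z[z2][1])

n = 100_000
hits = sum(1 for _ in range(n) if sample() == (1, 1))
print(hits / n)  # Monte Carlo estimate of P(X1=1, X2=1)
```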

Remember the recipe

"Draw the graph, note who talks;
Factor joints into local blocks.
Sum the hidden, pass a note;
Priors steady every vote."
