Lecture - Uncertainty in Deep Learning MT25, Bayesian probability theory


This lecture derived the laws of probability theory from the requirement that a set of beliefs be rational. Concretely, the lecture proved the following. Suppose:

  • $X$ is a sample space of events, i.e. a set of possible outcomes
  • $A \subseteq X$ is an event, which is a subset of $X$
  • $b _ A$ is your “belief” in event $A$, defined as the price at which you would be willing to either buy or sell a unit wager that pays 1 if $A$ happens. This means $\{ b _ A \} _ {A \subseteq X}$ assigns a number to each subset of $X$.

Then:

  • $\{ b _ A \} _ {A \subseteq X}$ is a set of rational beliefs if and only if $\mathbb P(A) := b _ A$ satisfies the laws of probability theory

What does it mean for a set of beliefs to be rational? Roughly, it means that no Dutch book can be made against that set of wagers; i.e. there is no sequence of purchases and sales of those unit wagers, all at the stated prices, that guarantees you lose money no matter which outcome occurs.
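To make the Dutch book concrete, here is a small sketch (the numbers are my own, not from the lecture): suppose disjoint events $A$ and $B$ get beliefs $b_A = b_B = 0.3$ but $b_{A \cup B} = 0.5 < b_A + b_B$, violating additivity. A bookmaker can then sell the agent wagers on $A$ and $B$ while buying the wager on $A \cup B$, and the agent loses money in every outcome:

```python
# Hypothetical incoherent beliefs: A, B disjoint, but b_{A ∪ B} < b_A + b_B.
b_A, b_B, b_AorB = 0.3, 0.3, 0.5

# By the definition of belief, the agent will buy or sell any wager at its
# stated price. The bookmaker arranges:
#   - agent buys a unit wager on A (pays b_A, wins 1 if A occurs)
#   - agent buys a unit wager on B (pays b_B, wins 1 if B occurs)
#   - agent sells a unit wager on A ∪ B (receives b_AorB, pays 1 if it occurs)
initial_cash = -b_A - b_B + b_AorB   # agent's net cash up front: -0.1

def agent_profit(in_A: bool, in_B: bool) -> float:
    """Agent's total profit for an outcome in A, in B, or in neither."""
    assert not (in_A and in_B)                       # A and B are disjoint
    winnings = (1 if in_A else 0) + (1 if in_B else 0)  # wagers the agent bought
    payout = 1 if (in_A or in_B) else 0                 # wager the agent sold
    return initial_cash + winnings - payout

# The agent loses ~0.1 regardless of the outcome:
for outcome in [(True, False), (False, True), (False, False)]:
    print(agent_profit(*outcome))
```

Reversing the trades exploits the opposite violation $b_{A \cup B} > b_A + b_B$ the same way; only exact additivity escapes both books.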

What are the laws of probability theory? Roughly, $\{ b _ A \} _ {A \subseteq X}$ satisfies the laws of probability theory if:

  • $0 \le b _ A \le 1$ for all $A \subseteq X$
  • $b _ X = 1$
  • For disjoint events $A$ and $B$, $b _ {A \cup B} = b _ A + b _ B$ (in fact, this must be extended to countable additivity)
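On a finite sample space these laws can be checked by brute force. A minimal sketch (my own example, not from the lecture), taking $b_A = \mathbb P(A)$ for a fair six-sided die and using exact rational arithmetic to avoid floating-point surprises:

```python
import itertools
from fractions import Fraction

# Sample space: the six faces of a fair die.
X = frozenset(range(1, 7))

def P(A):
    return Fraction(len(A), len(X))    # uniform measure on X

# Enumerate all 2^6 = 64 events (subsets of X).
subsets = [frozenset(s) for r in range(len(X) + 1)
           for s in itertools.combinations(X, r)]

assert all(0 <= P(A) <= 1 for A in subsets)        # 0 <= b_A <= 1
assert P(X) == 1                                   # b_X = 1
assert all(P(A | B) == P(A) + P(B)                 # additivity for disjoint A, B
           for A in subsets for B in subsets if not A & B)
print("all three laws hold")
```

Since $X$ here is finite, finite additivity is all there is to check; countable additivity only has extra content on infinite sample spaces.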

For ML models to be rational, they need to obey the laws of probability theory.
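In practice, classifiers typically enforce this by construction: mapping raw scores through a softmax guarantees the outputs are nonnegative and sum to 1, so they form valid beliefs over the (disjoint) classes. A sketch with made-up logits:

```python
import math

# Hypothetical raw scores (logits) from a classifier over three classes.
logits = [2.0, -1.0, 0.5]

def softmax(zs):
    m = max(zs)                              # subtract max for numerical stability
    exps = [math.exp(z - m) for z in zs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)

# The outputs satisfy the laws above: each b_i is in [0, 1], they sum to 1,
# and since the classes are disjoint, the belief in any union of classes is
# the sum of the individual beliefs.
assert all(0 <= p <= 1 for p in probs)
assert abs(sum(probs) - 1) < 1e-12
print(probs)
```

Whether the resulting probabilities are also well *calibrated* is a separate question from mere coherence; the Dutch book argument only says an incoherent model is irrational.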



