Math 323 Fall 2001
Richard Bass
1. Basics of probability.
In this section we give some preliminaries on probabilistic terminology, independence,
and Gaussian random variables.
Given a space Ω and a σ-field (or σ-algebra) F on it, a probability measure, or
just a probability, is a positive finite measure P with total mass 1. (Ω, F, P) is called a
probability space. Elements of F are called events. Measurable functions from Ω to R are
called random variables and are usually denoted X or Y instead of f or g. The integral
of X with respect to P is called the expectation of X or the expected value of X, and
∫ X(ω) P(dω) is often written E X, while ∫_A X(ω) P(dω) is often written E[X; A]. If an
event occurs with probability one, one says “almost surely” and writes a.s. The indicator
of the set A is the function or random variable denoted 1_A that is 1 on A and 0 on the
complement. The complement of A is denoted A^c.
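For concreteness (an illustrative remark added here, not part of the original notes), the
semicolon notation is just expectation against an indicator:
    E[X; A] = E[X 1_A],   and in particular   E[1_A] = P(A),
so the expectation of an indicator is the probability of the corresponding event.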
If A_n is a sequence of sets, (A_n i.o.), read “infinitely often,” is defined to be
∩_{j=1}^∞ ∪_{n=j}^∞ A_n. The following easy fact is called the Borel-Cantelli lemma or sometimes the
first half of the Borel-Cantelli lemma.
Proposition. If Σ_{n=1}^∞ P(A_n) < ∞, then P(A_n i.o.) = 0.
Proof. Note P(A_n i.o.) = lim_{j→∞} P(∪_{n=j}^∞ A_n). If Σ_{n=1}^∞ P(A_n) < ∞, then
P(∪_{n=j}^∞ A_n) ≤ Σ_{n=j}^∞ P(A_n) → 0 as j → ∞.
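A quick worked example (added for illustration, not from the original notes): if P(A_n) = 1/n² for every n, then
    Σ_{n=1}^∞ P(A_n) = π²/6 < ∞,
so the proposition gives P(A_n i.o.) = 0; almost surely only finitely many of the A_n occur.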
Chebyshev’s inequality is the following.
Proposition. If X ≥ 0 a.s., then
P(X ≥ a) ≤ EX/a.
Proof. This follows from
P(X ≥ a) = E[1_(X≥a)] ≤ E[X/a; X ≥ a] ≤ E X/a.
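As a numerical illustration (added here, not in the original notes), if X ≥ 0 a.s. and E X = 2, then taking a = 10 gives
    P(X ≥ 10) ≤ E X/10 = 0.2,
whatever the distribution of X is.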
Jensen’s inequality is the following.
Proposition. If g is convex and X and g(X) are integrable, then
E g(X) ≥ g(E X).
Proof. If g is convex, then g lies above all its tangent lines. So for each x_0, there exists
c such that g(x) ≥ g(x_0) + c(x − x_0) for all x; in particular g(X) ≥ g(x_0) + c(X − x_0).
Letting x_0 = E X and taking expectations on both sides, we obtain our result.
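For example (an illustration added here, not from the original notes), g(x) = x² is convex, so Jensen’s inequality gives
    E[X²] ≥ (E X)²,
which is the familiar fact that Var X = E[X²] − (E X)² ≥ 0.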
The law or distribution of X is the probability measure P_X on R, given by
P_X(A) = P(X ∈ A). ()
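As a simple example (added for illustration, not from the original notes), if X = 1_A is the indicator of an event A, then P_X is the measure on R with
    P_X({1}) = P(A),   P_X({0}) = 1 − P(A),
that is, the Bernoulli distribution with parameter P(A).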
Given any measure µ on