Integration

Measurable transformations

Let `(Omega, F, P)` be a probability space. Then `X: Omega -> RR` is called a random variable if `X^(-1)(-oo, a] = {omega: X(omega) <= a} in F` for every `a in RR`. This is equivalent to the apparently stronger condition that `X^(-1)(A) in F` for all `A in B(RR)`.

A mapping `T: Omega_1 -> Omega_2` between two measurable spaces is measurable with respect to the `sigma`-algebras `F_1` and `F_2` if `T^(-1)(A) in F_1 AA A in F_2` (and it remains measurable for any `barF_1 sup F_1` and `barF_2 sub F_2`).

In general, this is a difficult property to prove directly, but it suffices to check `T^(-1)(A) in F_1` for all `A` in a class of sets that generates `F_2` (as with the intervals `(-oo, a]` generating `B(RR)` above).

If, `AA n in NN`, `f_n : Omega -> barRR` is `(: F, B(barRR) :)` measurable, then `spr_n f_n`, `inf_n f_n`, `lim spr_n f_n` and `lim inf_n f_n` are also `(: F, B(barRR) :)` measurable.

Let `{f_lambda: lambda in Lambda}` be a family of mappings `Omega_1 -> Omega_2` and let `F_2` be a `sigma`-algebra on `Omega_2`. Then `sigma(: {f_lambda^(-1)(A) : A in F_2, lambda in Lambda} :)` is called the `sigma`-algebra generated by `{f_lambda: lambda in Lambda}`. It is the smallest `sigma`-algebra on `Omega_1` with respect to which every `f_lambda` is measurable.

Let `{f_lambda: lambda in Lambda}` be an uncountable collection of maps `Omega_1 -> Omega_2`. Then for every `B in sigma(: {f_lambda: lambda in Lambda} :)` there exists a countable set `Lambda_B sub Lambda` st. `B in sigma(: {f_lambda: lambda in Lambda_B} :)` (ie. each set in the generated `sigma`-algebra is determined by countably many of the maps).

Induced measures, distribution functions

Suppose `X` is an rv defined on `(Omega, F, P)`; then `P` governs the probabilities assigned to events like `X^(-1)[a,b]`. Since `X` takes values in the real line, it should be possible to express such probabilities as a function of `[a, b]`, eg. `P_X(A) = P(X^(-1)(A))`. Is this a probability measure?

Let `(Omega_i, F_i)`, `i=1,2` be measurable spaces, and let `T: Omega_1 -> Omega_2` be a `(: F_1, F_2 :)` measurable function. Then for any measure `mu` on `(Omega_1, F_1)`, `mu_T(A) = mu T^(-1)(A) = mu(T^(-1)(A))` defines a measure on `F_2` (the induced measure).

For a rv `X` defined on `(Omega, F, P)`, the probability distribution of `X` is the induced measure of `X` under `P` on `RR`.
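
As a numerical sketch (the space, map and interval below are illustrative choices, not from the notes): take `(Omega, F, P)` to be `([0,1], B([0,1]), "Lebesgue")` and `X(omega) = -log(omega)`, so the induced measure `P_X` should be the Exponential(1) distribution, and `P(X^(-1)[a,b])` can be compared with the Exponential(1) measure of `[a,b]` by simulation.

```python
import numpy as np

# Sketch of an induced (pushforward) measure: P_X(A) = P(X^(-1)(A)).
# Omega = [0,1] with Lebesgue measure P, X(omega) = -log(omega), so P_X = Exp(1).
rng = np.random.default_rng(0)
omega = rng.uniform(0.0, 1.0, size=1_000_000)   # draws from P
x = -np.log(omega)                              # X(omega)

a, b = 0.5, 2.0
empirical = np.mean((x >= a) & (x <= b))        # estimate of P(X^(-1)[a, b])
exact = np.exp(-a) - np.exp(-b)                 # Exp(1) measure of [a, b]
print(empirical, exact)                         # both ~ 0.471
```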

The cumulative distribution function (CDF) of `X` is `F_X(x) = P_X(-oo, x] = P{omega: X(omega) <= x}`, and has the following properties: it is non-decreasing, right-continuous, and satisfies `lim_(x->-oo) F_X(x) = 0`, `lim_(x->oo) F_X(x) = 1`. (Any function with these properties is called a cdf.)

Given any cdf `F` it is possible to construct a probability space and a random variable `X` st. `F` is the cdf of `X` (eg. `(RR, B(RR), mu_F)` with `X(x) = x`, where `mu_F` is the Lebesgue-Stieltjes measure of `F`).
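
As a sketch of an alternative, standard construction (the quantile transform `X = F^(-1)(U)` on `([0,1], B([0,1]), "Lebesgue")`; the Exponential(1) target and the names `F`, `F_inv` below are illustrative assumptions):

```python
import numpy as np

# Sketch: build a random variable with a prescribed cdf F via X = F^{-1}(U),
# U ~ Uniform(0,1), then check that the empirical cdf of X matches F.
def F(x):                       # target cdf, here Exponential(1)
    return 1.0 - np.exp(-np.maximum(x, 0.0))

def F_inv(u):                   # its (generalised) inverse / quantile function
    return -np.log(1.0 - u)

rng = np.random.default_rng(1)
u = rng.uniform(size=500_000)
x = F_inv(u)

for t in (0.5, 1.0, 2.0):
    print(t, np.mean(x <= t), F(t))   # empirical cdf at t vs F(t)
```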

A random variable is discrete if there exists a countable set `A sub RR` st `P(X in A) = 1`, and continuous if `P(X=x)=0 AA x in RR`. The decomposition theorem states that any cdf can be written as a weighted sum `F = alpha F_d + (1-alpha) F_c`, `alpha in [0,1]`, of a discrete cdf `F_d` and a continuous cdf `F_c`.

Generalisation to higher dimensions is straightforward, eg. for `k=2`, `F_(X,Y)(x, y) = P(X <= x, Y <= y)`.

Integration

Let `(Omega, F, mu)` be a measure space, and `f: Omega -> RR` be a measurable function.

A function `f: Omega -> barRR` is called simple if there exists a finite set `{c_1, c_2, ..., c_k} sub barRR` and sets `A_1, A_2, ..., A_k in F`, `k in NN`, st. `f = sum c_i I_(A_i)`. The integral of a non-negative simple function is `int_Omega f dmu = sum c_i mu(A_i)`. Note: `0 <= int f dmu <= oo`
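
A minimal numerical sketch of this formula (the values `c_i` and the intervals `A_i` below are a hypothetical choice under Lebesgue measure on `[0,3]`):

```python
import numpy as np

# Integral of a simple function f = sum_i c_i I_{A_i} as sum_i c_i * mu(A_i),
# with A_1 = [0,1), A_2 = [1,2.5), A_3 = [2.5,3] under Lebesgue measure.
c  = np.array([2.0, 0.5, 4.0])   # values c_i
mu = np.array([1.0, 1.5, 0.5])   # mu(A_i) = interval lengths
print(np.dot(c, mu))             # int f dmu = 2*1 + 0.5*1.5 + 4*0.5 = 4.75
```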

If `f` and `g` are two simple non-negative functions then `int (af + bg) dmu = a int f dmu + b int g dmu` for `a, b >= 0`, and `f <= g => int f dmu <= int g dmu`.

Can extend this definition to any non-negative function by discretising and taking limits (need to confirm that the limit is the same for every admissible approximating sequence), ie. `lim_(n->oo) int f_n dmu = int lim_(n->oo) f_n dmu = int f dmu`

The monotone convergence theorem shows that `int f dmu = lim_(n->oo) int f_n dmu` for any non-decreasing sequence of non-negative measurable functions `f_n uarr f`. Corollaries:
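
A small numerical sketch of the discretisation above, using the standard dyadic approximation where `f_n` takes the value `k/2^n` on `{k/2^n <= f < (k+1)/2^n}` (the choice `f(x) = x^2` on `[0,1]` with Lebesgue measure is an illustrative assumption, picked because the level sets are intervals with explicit lengths):

```python
import numpy as np

# Dyadic approximation of a non-negative f by simple functions:
# f_n takes the value k/2^n on {k/2^n <= f < (k+1)/2^n}, k = 0, ..., n*2^n - 1.
# Here f(x) = x^2 on [0,1] with Lebesgue measure, so the level set
# {lo <= x^2 < hi} is an interval of length sqrt(min(hi,1)) - sqrt(min(lo,1)).
def integral_fn(n):
    total = 0.0
    for k in range(n * 2**n):
        lo, hi = k / 2**n, (k + 1) / 2**n
        length = np.sqrt(min(hi, 1.0)) - np.sqrt(min(lo, 1.0))  # mu of the level set
        total += lo * length
    return total

for n in (2, 4, 6, 8):
    print(n, integral_fn(n))   # increases towards int_0^1 x^2 dx = 1/3
```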

Can extend to any function by defining `f = f^+ - f^-` (where `f^+ = max(0, f)`, `f^- = max(0, -f)`), or more generally `f = f_1 - f_2` for some `f_1, f_2 >= 0`. `f` is said to be integrable if `int |f| dmu < oo`.

Theorem Hypotheses Results
MCT `f_n >= 0`, `f_n uarr f "ae" (mu)` `=>` `int f_n uarr int f`
Fatou `f_n >= 0` `=>` `ul lim int f_n >= int ul lim f_n`
LDCT `|f_n| <= g`, `g in L^1(mu)`, `f_n -> f` `=>` `f in L^1(mu)`, `int f_n -> int f`, `int |f_n - f| -> 0`
BCT `mu(Omega) < oo`, `|f_n| <= K`, `f_n -> f` `=>` `f in L^1(mu)`, `int |f_n - f| -> 0`
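
A numerical sketch contrasting LDCT with a case where it fails (the sequences on `([0,1], "Lebesgue")` are illustrative assumptions, not from the notes): `f_n(x) = x^n` is dominated by the integrable constant `1`, so `int f_n -> 0`; `g_n = n I_((0, 1/n])` also converges to `0` pointwise, but admits no integrable dominating function and `int g_n = 1` for every `n`.

```python
import numpy as np

# Midpoint-rule integrals over [0,1] for two sequences:
#   f_n(x) = x^n           -> 0 a.e., |f_n| <= 1 in L^1, so LDCT gives int f_n -> 0
#   g_n(x) = n*I_(0, 1/n]  -> 0 pointwise, but int g_n = 1 (no dominating g in L^1)
m = 200_000
x = (np.arange(m) + 0.5) / m          # midpoint grid on [0,1]
for n in (1, 5, 25, 125):
    f_n = x**n
    g_n = n * (x <= 1.0 / n)
    print(n, f_n.mean(), g_n.mean())  # mean over the grid ~ integral over [0,1]
# int f_n ~ 1/(n+1) -> 0, while int g_n stays at 1 for every n.
```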

`L^p` spaces

If `f` and `g` are integrable (ie. `in L^1(Omega, F, P)`) then:

Riemann and Lebesgue integrals

Let `f` be a real-valued function bounded on a bounded interval `[a,b]`. Let `P = {x_0, x_1, ..., x_n}`, `a = x_0 < x_1 < ... < x_n = b`, be a finite partition of `[a,b]`, and `Delta = Delta(P) = max_i (x_(i+1) - x_i)` be the diameter (mesh) of `P`. Let `M_i = spr{f(x): x_i <= x <= x_(i+1)}` and `m_i = inf{f(x): x_i <= x <= x_(i+1)}`.

The upper and lower Riemann sums of `f` wrt `P` are defined as `U(f, P) = sum_i M_i (x_(i+1) - x_i)` and `L(f, P) = sum_i m_i (x_(i+1) - x_i)`.

The upper and lower Riemann integrals are defined as `bar int f = inf_(P in sfP) U(f, P)` and `ul int f = spr_(P in sfP) L(f, P)`.

Where `sfP` is the set of all partitions.

`f` is said to be Riemann-integrable if `bar int f = ul int f`. For a bounded function `f` on a bounded interval `[a,b]`: if `f` is continuous on `[a,b]` then `f` is Riemann integrable; and if `f` is Riemann integrable then `f` is Lebesgue integrable and the two integrals are equal.
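
A short sketch computing upper and lower Riemann sums on uniform partitions (the choice `f(x) = x^2` on `[0,1]` and the use of a grid maximum/minimum as a proxy for `M_i`, `m_i` are assumptions for illustration):

```python
import numpy as np

# Upper and lower Riemann sums of f over a uniform n-cell partition of [a,b];
# for a continuous f both converge to the common value of the upper and lower
# Riemann integrals as the partition is refined.
def riemann_sums(f, a, b, n):
    xs = np.linspace(a, b, n + 1)             # partition points x_0 < ... < x_n
    s = np.linspace(0.0, 1.0, 101)            # sample points inside each cell
    upper = lower = 0.0
    for x_lo, x_hi in zip(xs[:-1], xs[1:]):
        vals = f(x_lo + (x_hi - x_lo) * s)    # grid proxy for sup/inf over the cell
        upper += vals.max() * (x_hi - x_lo)   # contributes M_i * (x_(i+1) - x_i)
        lower += vals.min() * (x_hi - x_lo)   # contributes m_i * (x_(i+1) - x_i)
    return upper, lower

for n in (4, 16, 64, 256):
    print(n, riemann_sums(lambda x: x**2, 0.0, 1.0, n))   # both -> 1/3
```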

More on convergence

Name Symbol Definition
Pointwise `f_n -> f` `lim_(n->oo) f_n(omega) = f(omega) AA omega in Omega`
Almost everywhere `f_n -> f "ae" mu` `lim_(n->oo) f_n(omega) = f(omega) AA omega in B^c, mu(B) = 0`
In measure `f_n stackrel m -> f` `lim_(n->oo) mu{|f_n - f| > epsi} = 0 AA epsi > 0`
In `L^p` `f_n stackrel (L^p) -> f` `lim_(n->oo) int |f_n - f|^p = 0` or `{:|| f_n - f||:}_p -> 0` where `{:|| g ||:}_p = (int |g|^p dmu)^(1/p)`
Uniformly `lim_(n->oo) spr_{omega in Omega} {|f_n(omega) - f(omega)|} = 0`
Nearly (almost) uniformly `AA epsilon > 0 EE A in F "st" mu(A) lt epsilon` and on `A^c` `f_n` converges uniformly
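
A standard counterexample separating these modes (not from the notes, included as a sketch): the "typewriter" sequence on `([0,1], "Lebesgue")`, `f_n = I_([j/2^k, (j+1)/2^k])` for `n = 2^k + j`, converges to `0` in measure and in `L^1` but at no point.

```python
import numpy as np

# "Typewriter" sequence on [0,1]: for n = 2^k + j (0 <= j < 2^k),
# f_n = I_[j/2^k, (j+1)/2^k].  mu{f_n > eps} = 2^-k -> 0 (convergence in measure
# and in L^1), yet at any fixed x0 we can find, for every k, an index n whose
# block covers x0, so f_n(x0) = 1 infinitely often and f_n(x0) does not converge.
def f(n, x):
    k = int(np.floor(np.log2(n)))
    j = n - 2**k
    return float(j / 2**k <= x <= (j + 1) / 2**k)

x0 = 0.3
for k in (3, 6, 9, 12):
    n_hit = 2**k + int(np.floor(x0 * 2**k))   # index in "row" k whose block covers x0
    print(k, 2.0**-k, f(n_hit, x0))           # block measure -> 0, but f_n(x0) = 1
```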

Uniform integrability

Let `a_f(t) = int_{|f| > t} |f| dmu`. If `f in L^1` then `a_f(t) -> 0`, that is `AA epsilon > 0 EE t_epsilon "st" t >= t_epsilon => a_f(t) <= epsilon`. A collection of functions `{f_lambda : lambda in Lambda}` is uniformly integrable if `AA epsilon > 0 EE t_epsi "st" t > t_epsi => spr_(lambda in Lambda) a_(f_lambda)(t) <= epsilon`.
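
A numerical sketch of the tail functional `a_f(t)` (the midpoint-grid approximation and the particular choices `f(x) = 1/sqrt(x)` and `f_n = n I_((0, 1/n])` on `([0,1], "Lebesgue")` are assumptions for illustration): for the integrable `f` the tails vanish as `t` grows, while `spr_n a_(f_n)(t)` stays near `1`, so `{f_n}` is not UI.

```python
import numpy as np

# Tail functional a_f(t) = int_{|f| > t} |f| dmu on ([0,1], Lebesgue),
# approximated by a midpoint Riemann sum.
m = 400_000
x = (np.arange(m) + 0.5) / m                   # midpoint grid on [0,1]

def a(f_vals, t):
    f_abs = np.abs(f_vals)
    return np.mean(np.where(f_abs > t, f_abs, 0.0))   # ~ int_{|f|>t} |f| dx

f = 1.0 / np.sqrt(x)                           # integrable: int f = 2, a_f(t) = 2/t
for t in (1, 10, 100):
    tails = [a(n * (x <= 1.0 / n), t) for n in (5, 50, 500, 5000)]
    print(t, a(f, t), max(tails))
# a_f(t) -> 0 as t grows, but sup_n a_{f_n}(t) stays ~ 1 for every t: not UI.
```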

Suppose `mu(Omega) < oo`, `{f_n : n >= 1} sub L^1`, and `f_n -> f "ae" (mu)` with `f` measurable. If `{f_n}_(n>=1)` is UI then `lim_(n->oo) int |f_n - f| dmu = 0`

Egorov's theorem: Let `f_n -> f "ae" (mu)` with `mu(Omega) < oo`; then `f_n -> f` almost uniformly.