Doob's martingale convergence theorems

In mathematics – specifically, in the theory of stochastic processes – Doob's martingale convergence theorems are a collection of results on the limits of supermartingales, named after the American mathematician Joseph L. Doob.[1] Informally, the martingale convergence theorem typically refers to the result that any supermartingale satisfying a certain boundedness condition must converge. One may think of supermartingales as the random variable analogues of non-increasing sequences; from this perspective, the martingale convergence theorem is a random variable analogue of the monotone convergence theorem, which states that any bounded monotone sequence converges. There are symmetric results for submartingales, which are analogous to non-decreasing sequences.

Statement for discrete-time martingales

A common formulation of the martingale convergence theorem for discrete-time martingales is the following. Let $X_1, X_2, X_3, \dots$ be a supermartingale. Suppose that the supermartingale is bounded in the sense that
$$\sup_{t \in \mathbf{N}} \operatorname{E}[X_t^-] < \infty,$$
where $X_t^-$ is the negative part of $X_t$, defined by $X_t^- = -\min(X_t, 0)$. Then the sequence converges almost surely to a random variable $X$ with finite expectation.

Proof sketch

The proof proceeds via Doob's upcrossing inequality (stated below). If the sequence failed to converge on a set of positive probability, then for some rational numbers $a < b$ the paths would cross the interval $[a, b]$ upward infinitely often with positive probability. The upcrossing inequality, combined with the boundedness assumption, shows that the expected number of such upcrossings is finite, a contradiction; hence the limit exists almost surely, and Fatou's lemma shows that it has finite expectation.

Failure of convergence in mean

Under the conditions above, it is not guaranteed that the supermartingale converges in mean (i.e., in $L^1$). For example, let $(Y_n)_{n \in \mathbf{N}}$ be the random walk with $Y_1 = 1$ in which, if $Y_n = 0$, then $Y_{n+1} = 0$, and if $Y_n > 0$, then $Y_{n+1} = Y_n \pm 1$ with equal probability. This sequence is a non-negative martingale, so $\sup_n \operatorname{E}[Y_n^-] = 0 < \infty$ and the convergence theorem applies: the limit $Y = \lim_n Y_n$ exists almost surely. But an integer-valued path can only converge by eventually being absorbed at $0$ (as long as $Y_n > 0$, the next step is $\pm 1$), so $Y$ is almost surely zero.
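As a concrete illustration of the discrete-time theorem (not part of the original statement), consider a Pólya urn: starting from one red and one blue ball, a ball is drawn uniformly at random and returned together with one extra ball of the same colour. The fraction of red balls is a martingale with values in $(0, 1)$, so the theorem guarantees almost-sure convergence. The sketch below, with the hypothetical helper name `polya_fraction_path` of our own choosing, simulates one path and checks that it settles down:

```python
import random

def polya_fraction_path(steps, seed):
    """Simulate a Polya urn (1 red, 1 blue to start) and return the
    path of red-ball fractions; this fraction is a bounded martingale."""
    rng = random.Random(seed)
    red, total = 1, 2
    path = []
    for _ in range(steps):
        if rng.random() < red / total:  # draw a red ball with prob. red/total
            red += 1                    # ...and add one more red
        total += 1                      # one ball was added either way
        path.append(red / total)
    return path

path = polya_fraction_path(20000, seed=0)
tail = path[-1000:]
# Late in the path the fraction has settled: each step moves it by less
# than 1/total, so the tail spread is small, consistent with a.s. convergence.
print(max(tail) - min(tail))
```

The limiting fraction is random (uniform on $(0,1)$ for this urn), which is exactly the sense of the theorem: each path converges, but different paths converge to different values.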

This means that $\operatorname{E}[Y] = 0$. However, $\operatorname{E}[Y_n] = 1$ for every $n \geq 1$, since $(Y_n)_{n \in \mathbf{N}}$ is a random walk which starts at $1$ and subsequently makes mean-zero moves (alternatively, note that $\operatorname{E}[Y_n] = \operatorname{E}[Y_1] = 1$ since $(Y_n)_{n \in \mathbf{N}}$ is a martingale). Therefore $(Y_n)_{n \in \mathbf{N}}$ cannot converge to $Y$ in mean. Moreover, if $(Y_n)_{n \in \mathbf{N}}$ were to converge in mean to any random variable $R$, then some subsequence would converge to $R$ almost surely. By the above argument, $R = 0$ almost surely, which contradicts convergence in mean, since $\operatorname{E}[Y_n] = 1$ for all $n$.
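The tension in this example can be seen numerically. The following sketch (the helper name `absorbed_walk` and the sample sizes are our own, purely illustrative choices) simulates many paths of the walk: after a moderate number of steps almost every path has been absorbed at $0$, yet the empirical mean of $Y_n$ stays near $1$:

```python
import random

def absorbed_walk(n_steps, rng):
    """Run the walk from Y_1 = 1: fair +/-1 steps while positive, frozen at 0."""
    y = 1
    for _ in range(n_steps):
        if y > 0:
            y += 1 if rng.random() < 0.5 else -1
    return y

rng = random.Random(1)
finals = [absorbed_walk(1000, rng) for _ in range(2000)]
frac_absorbed = sum(1 for y in finals if y == 0) / len(finals)
mean_final = sum(finals) / len(finals)
print(frac_absorbed)  # close to 1: almost every path has hit 0 (the a.s. limit)
print(mean_final)     # stays near 1, reflecting E[Y_n] = 1 for all n
```

The few paths that have not yet been absorbed are large (of order $\sqrt{n}$), and it is precisely this small-probability, large-value mass that keeps the mean at $1$ and defeats uniform integrability.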

Statements for the general case

In the following, $(\Omega, F, F_*, \mathbf{P})$ will be a filtered probability space where $F_* = (F_t)_{t \geq 0}$, and $N : [0, \infty) \times \Omega \to \mathbf{R}$ will be a right-continuous supermartingale with respect to the filtration $F_*$; in other words, for all $0 \leq s \leq t < +\infty$,
$$N_s \geq \operatorname{E}\big[ N_t \mid F_s \big].$$

Doob's first martingale convergence theorem

Doob's first martingale convergence theorem provides a sufficient condition for the random variables $N_t$ to have a limit as $t \to +\infty$ in a pointwise sense, i.e. for each $\omega$ in the sample space $\Omega$ individually.

For $t \geq 0$, let $N_t^- = \max(-N_t, 0)$ and suppose that
$$\sup_{t > 0} \operatorname{E}\big[ N_t^- \big] < +\infty.$$
Then the pointwise limit
$$N(\omega) = \lim_{t \to +\infty} N_t(\omega)$$
exists and is finite for $\mathbf{P}$-almost all $\omega \in \Omega$.[3]

Doob's second martingale convergence theorem

It is important to note that the convergence in Doob's first martingale convergence theorem is pointwise, not uniform, and is unrelated to convergence in mean square, or indeed in any $L^p$ space. In order to obtain convergence in $L^1$ (i.e., convergence in mean), one requires uniform integrability of the random variables $N_t$. (By Chebyshev's inequality, convergence in $L^1$ implies convergence in probability and convergence in distribution.) The following are equivalent:

(i) $(N_t)_{t > 0}$ is uniformly integrable, i.e.
$$\lim_{C \to \infty} \sup_{t > 0} \int_{\{\omega \in \Omega \,\mid\, |N_t(\omega)| > C\}} \left| N_t(\omega) \right| \, \mathrm{d}\mathbf{P}(\omega) = 0;$$

(ii) there exists an integrable random variable $N \in L^1(\Omega, \mathbf{P}; \mathbf{R})$ such that $N_t \to N$ as $t \to \infty$ both $\mathbf{P}$-almost surely and in $L^1(\Omega, \mathbf{P}; \mathbf{R})$, i.e.
$$\operatorname{E}\left[ \left| N_t - N \right| \right] = \int_\Omega \left| N_t(\omega) - N(\omega) \right| \, \mathrm{d}\mathbf{P}(\omega) \to 0 \text{ as } t \to +\infty.$$

Doob's upcrossing inequality

The following result, called Doob's upcrossing inequality or, sometimes, Doob's upcrossing lemma, is used in proving Doob's martingale convergence theorems.[3] A "gambling" argument shows that for uniformly bounded supermartingales, the number of upcrossings is bounded; the upcrossing lemma generalizes this argument to supermartingales with bounded expectation of their negative parts.

Let $N$ be a natural number. Let $(X_n)_{n \in \mathbf{N}}$ be a supermartingale with respect to a filtration $(\mathcal{F}_n)_{n \in \mathbf{N}}$. Let $a$, $b$ be two real numbers with $a < b$. Define the random variables $(U_n)_{n \in \mathbf{N}}$ so that $U_N$ is the maximal number of disjoint intervals $[n_{i_1}, n_{i_2}]$ with $n_{i_2} \leq N$ such that $X_{n_{i_1}} < a < b < X_{n_{i_2}}$; these are called upcrossings of the interval $[a, b]$. Then
$$(b - a) \operatorname{E}[U_N] \leq \operatorname{E}\big[ (X_N - a)^- \big],$$
where $(X_N - a)^-$ is the negative part of $X_N - a$.

Applications

Convergence in Lp

Let $M : [0, \infty) \times \Omega \to \mathbf{R}$ be a continuous martingale such that
$$\sup_{t > 0} \operatorname{E}\big[ |M_t|^p \big] < +\infty$$
for some $p > 1$. Then there exists a random variable $M \in L^p(\Omega, \mathbf{P}; \mathbf{R})$ such that $M_t \to M$ as $t \to +\infty$ both $\mathbf{P}$-almost surely and in $L^p(\Omega, \mathbf{P}; \mathbf{R})$.
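The upcrossing count is easy to compute on a sample path, and the inequality can be checked empirically. The sketch below (the function name `count_upcrossings` and the simulation parameters are our own) counts upcrossings with a small state machine and compares the two sides of the inequality, averaged over many paths of a fair $\pm 1$ random walk, which is a martingale started at $0$:

```python
import random

def count_upcrossings(path, a, b):
    """Count upcrossings of [a, b]: times the path goes from below a to above b."""
    count, below = 0, False
    for x in path:
        if x < a:
            below = True          # armed: the path has dipped below a
        elif x > b and below:
            count += 1            # completed one upcrossing of [a, b]
            below = False
    return count

# Monte Carlo check of (b - a) E[U_N] <= E[(X_N - a)^-] for a fair +/-1 walk.
rng = random.Random(2)
a, b, n_steps, trials = 0.0, 3.0, 200, 2000
lhs = rhs = 0.0
for _ in range(trials):
    x, path = 0, []
    for _ in range(n_steps):
        x += 1 if rng.random() < 0.5 else -1
        path.append(x)
    lhs += (b - a) * count_upcrossings(path, a, b)
    rhs += max(a - path[-1], 0.0)        # (X_N - a)^- = max(a - X_N, 0)
print(lhs / trials, "<=", rhs / trials)  # the sample averages respect the bound
```

The "gambling" intuition is visible in the state machine: `below` plays the role of holding a position bought below $a$, and each completed upcrossing banks a gain of at least $b - a$.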

The statement for discrete-time martingales is essentially identical, with the obvious difference that the continuity assumption is no longer necessary.

Lévy's zero–one law

Doob's martingale convergence theorems imply that conditional expectations also have a convergence property.

Let $(\Omega, F, \mathbf{P})$ be a probability space and let $X$ be a random variable in $L^1$. Let $F_* = (F_k)_{k \in \mathbf{N}}$ be any filtration of $F$, and define $F_\infty$ to be the minimal σ-algebra generated by $(F_k)_{k \in \mathbf{N}}$. Then
$$\operatorname{E}\big[ X \mid F_k \big] \to \operatorname{E}\big[ X \mid F_\infty \big] \text{ as } k \to \infty$$
both $\mathbf{P}$-almost surely and in $L^1$.

This result is usually called Lévy's zero–one law or Lévy's upwards theorem. The reason for the name is that if $A$ is an event in $F_\infty$, then the theorem says that $\mathbf{P}[A \mid F_k] \to \mathbf{1}_A$ almost surely, i.e., the limit of the probabilities is 0 or 1. In plain language, if we are learning gradually all the information that determines the outcome of an event, then we will become gradually certain what the outcome will be. This sounds almost like a tautology, but the result is still non-trivial. For instance, it easily implies Kolmogorov's zero–one law, since it says that for any tail event $A$, we must have $\mathbf{P}[A] = \mathbf{1}_A$ almost surely, hence $\mathbf{P}[A] \in \{0, 1\}$.
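The upward theorem can be watched in a finite setting. Take $A$ = "more than half of $101$ fair coin flips are heads" and let $F_k$ be the information in the first $k$ flips; then $\mathbf{P}[A \mid F_k]$ is an exactly computable binomial tail. The sketch below (the helper name `cond_exp_majority` is our own) tracks this conditional probability along one realization; it starts at $1/2$ and ends exactly at $\mathbf{1}_A$:

```python
import math
import random

def cond_exp_majority(flips_so_far, n_total):
    """E[1{more than half of n_total fair flips are heads} | first k flips]."""
    k, heads = len(flips_so_far), sum(flips_so_far)
    need = n_total // 2 + 1 - heads   # further heads required for a majority
    m = n_total - k                   # flips still to come
    if need <= 0:
        return 1.0                    # majority already secured
    if need > m:
        return 0.0                    # majority no longer reachable
    # P(at least `need` heads among m remaining fair flips), computed exactly
    return sum(math.comb(m, j) for j in range(need, m + 1)) / 2 ** m

rng = random.Random(3)
n = 101
flips = [rng.randrange(2) for _ in range(n)]
outcome = 1.0 if sum(flips) > n // 2 else 0.0
path = [cond_exp_majority(flips[:k], n) for k in range(n + 1)]
print(path[0], path[-1] == outcome)  # starts at 1/2, ends exactly at 1_A
```

Here the martingale $\mathbf{P}[A \mid F_k]$ is bounded in $[0, 1]$, so it is uniformly integrable and the convergence also holds in $L^1$, matching the second convergence theorem.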

Similarly, we have Lévy's downwards theorem: Let $(\Omega, F, \mathbf{P})$ be a probability space and let $X$ be a random variable in $L^1$. Let $(F_k)_{k \in \mathbf{N}}$ be any decreasing sequence of sub-σ-algebras of $F$, and define $F_\infty$ to be the intersection $\bigcap_k F_k$. Then
$$\operatorname{E}\big[ X \mid F_k \big] \to \operatorname{E}\big[ X \mid F_\infty \big] \text{ as } k \to \infty$$
both $\mathbf{P}$-almost surely and in $L^1$.
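A classical application of the downwards theorem (an illustration of ours, not claimed by the source) is a route to the strong law of large numbers: for i.i.d. integrable $X_1, X_2, \dots$ with partial sums $S_k$, taking $F_k = \sigma(S_k, S_{k+1}, \dots)$ gives $\operatorname{E}[X_1 \mid F_k] = S_k / k$ by symmetry, and the downwards theorem makes $S_k / k$ converge; the limit is $\operatorname{E}[X_1]$ since the limiting σ-algebra is trivial (by the Hewitt–Savage zero–one law). The sketch below just watches the running averages settle toward the mean:

```python
import random

rng = random.Random(4)
total, averages = 0.0, {}
for k in range(1, 100001):
    total += rng.random()          # i.i.d. Uniform(0, 1) samples, mean 1/2
    if k in (100, 10000, 100000):
        averages[k] = total / k    # S_k / k = E[X_1 | F_k] at checkpoints
print(averages)  # running averages approach E[X_1] = 0.5
```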

See also

Backwards martingale convergence theorem[6]

References

[1] Doob, J. L. (1953). Stochastic Processes. New York: Wiley.
[2] Durrett, Rick (1996). Probability: Theory and Examples (2nd ed.). Duxbury Press. ISBN 978-0-534-24318-0. Durrett, Rick (2010). Probability: Theory and Examples (4th ed.). ISBN 9781139491136.
[3] "Martingale Convergence Theorem" (PDF). Massachusetts Institute of Technology, 6.265/15.070J Lecture 11, Additional Material, Advanced Stochastic Processes, Fall 2013 (October 9, 2013).
[4] Bobrowski, Adam (2005). Functional Analysis for Probability and Stochastic Processes: An Introduction. Cambridge University Press. pp. 113–114. ISBN 9781139443883.
[5] Gushchin, A. A. (2014). "On pathwise counterparts of Doob's maximal inequalities". Proceedings of the Steklov Institute of Mathematics. 287: 118–121. arXiv:1410.8264. doi:10.1134/S0081543814080070.
[6] Doob, Joseph L. (1994). Measure Theory. Graduate Texts in Mathematics, Vol. 143. Springer. p. 197. ISBN 9781461208778.

Øksendal, Bernt K. (2003). Stochastic Differential Equations: An Introduction with Applications (6th ed.). Berlin: Springer. ISBN 3-540-04758-1. (See Appendix C.)
