# Slutsky's theorem

In probability theory, Slutsky's theorem extends some properties of algebraic operations on convergent sequences of real numbers to sequences of random variables.[1] The theorem was named after Eugen Slutsky.[2] Slutsky's theorem is also attributed to Harald Cramér.[3]

## Statement

Let $\{X_n\}, \{Y_n\}$ be sequences of scalar/vector/matrix random elements. If $X_n$ converges in distribution to a random element $X$ and $Y_n$ converges in probability to a constant $c$, then

- $X_n + Y_n \xrightarrow{d} X + c$;
- $X_n Y_n \xrightarrow{d} Xc$;
- $X_n / Y_n \xrightarrow{d} X / c$, provided that $c$ is invertible,

where $\xrightarrow{d}$ denotes convergence in distribution.
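The statement can be illustrated numerically. The sketch below is an illustration, not part of the original article; the distribution, sample size, and replication count are arbitrary choices. It takes $X_n$ to be the CLT-standardized mean of an $\mathrm{Exp}(1)$ sample, so $X_n \xrightarrow{d} N(0,1)$, and $Y_n$ to be the sample mean itself, so $Y_n \to 1$ in probability:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 500, 20_000

# reps independent samples of size n from Exp(1) (mean 1, variance 1).
xbar = rng.exponential(1.0, size=(reps, n)).mean(axis=1)

X_n = np.sqrt(n) * (xbar - 1.0)  # CLT: converges in distribution to N(0, 1)
Y_n = xbar                       # LLN: converges in probability to c = 1

# Slutsky: X_n + Y_n, X_n * Y_n, and X_n / Y_n all converge in
# distribution to N(0, 1) shifted or scaled by the constant c = 1.
for Z in (X_n + Y_n - 1.0, X_n * Y_n, X_n / Y_n):
    print(Z.mean(), Z.std())  # each near 0 and 1
```

Since $c = 1$ here, all three limits coincide with $N(0,1)$ (after subtracting $c$ from the sum); the empirical means and standard deviations of the three transformed sequences are all close to 0 and 1.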

**Remarks:**

The requirement that $Y_n$ converges to a constant is important: if it converged to a non-degenerate random variable, the theorem would no longer be valid. For example, let $X_n \sim \mathrm{Uniform}(0,1)$ and $Y_n = -X_n$. The sum $X_n + Y_n = 0$ for all values of $n$. Moreover, $Y_n \xrightarrow{d} \mathrm{Uniform}(-1,0)$, but $X_n + Y_n$ does not converge in distribution to $X + Y$, where $X \sim \mathrm{Uniform}(0,1)$, $Y \sim \mathrm{Uniform}(-1,0)$, and $X$ and $Y$ are independent.[4]

The theorem remains valid if we replace all convergences in distribution with convergences in probability.

## Proof

This theorem follows from the fact that if $X_n$ converges in distribution to $X$ and $Y_n$ converges in probability to a constant $c$, then the joint vector $(X_n, Y_n)$ converges in distribution to $(X, c)$.

Next we apply the continuous mapping theorem, recognizing that the functions $g(x,y) = x + y$, $g(x,y) = xy$, and $g(x,y) = x y^{-1}$ are continuous (for the last function to be continuous, $y$ has to be invertible).
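The counterexample in the remarks above, where $Y_n$ converges in distribution to a non-degenerate limit rather than in probability to a constant, can also be checked by simulation. A minimal sketch, with the sample size chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(1)
m = 100_000

# Y_n = -X_n converges in distribution to Uniform(-1, 0), but not in
# probability to a constant, so Slutsky's theorem does not apply.
X_n = rng.uniform(0.0, 1.0, m)
Y_n = -X_n
print((X_n + Y_n == 0.0).all())  # the sum is identically zero

# For independent X ~ Uniform(0,1) and Y ~ Uniform(-1,0), the sum X + Y
# is triangular on (-1, 1) with variance 1/12 + 1/12 = 1/6, not degenerate.
X = rng.uniform(0.0, 1.0, m)
Y = rng.uniform(-1.0, 0.0, m)
print((X + Y).var())  # close to 1/6
```

The degenerate sum $X_n + Y_n \equiv 0$ and the non-degenerate limit of $X + Y$ plainly differ, confirming that the conclusion fails when the constant-limit hypothesis on $Y_n$ is dropped.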

## See also

- Convergence of random variables

## References

1. Goldberger, Arthur S. (1964). *Econometric Theory*. New York: Wiley. pp. 117–120.
2. Slutsky, E. (1925). "Über stochastische Asymptoten und Grenzwerte". *Metron* (in German). 5 (3): 3–89. JFM 51.0380.03.
3. Slutsky's theorem is also called Cramér's theorem according to Remark 11.1 (page 249) of Gut, Allan (2005). *Probability: A Graduate Course*. Springer-Verlag. ISBN 0-387-22833-0.
4. See Zeng, Donglin (Fall 2018). "Large Sample Theory of Random Variables (lecture slides)" (PDF). Advanced Probability and Statistical Inference I (BIOS 760). University of North Carolina at Chapel Hill. Slide 59.

## Further reading

- Casella, George; Berger, Roger L. (2001). *Statistical Inference*. Pacific Grove: Duxbury. pp. 240–245. ISBN 0-534-24312-6.
- Grimmett, G.; Stirzaker, D. (2001). *Probability and Random Processes* (3rd ed.). Oxford.
- Hayashi, Fumio (2000). *Econometrics*. Princeton University Press. pp. 92–93. ISBN 0-691-01018-8.


