Basu's theorem

In statistics, Basu's theorem states that any boundedly complete minimal sufficient statistic is independent of any ancillary statistic. It is a 1955 result of Debabrata Basu.[1]

It is often used in statistics as a tool to prove the independence of two statistics: one first shows that one statistic is complete and sufficient and that the other is ancillary, and then appeals to the theorem.[2] An example of this is showing that the sample mean and sample variance of a normal distribution are independent statistics, which is done in the Example section below. This property (the independence of the sample mean and sample variance) characterizes normal distributions.
Statement

Let $(P_\theta;\ \theta \in \Theta)$ be a family of distributions on a measurable space $(X, \mathcal{A})$, and let $T$ and $A$ be measurable maps from $(X, \mathcal{A})$ to some measurable space $(Y, \mathcal{B})$. (Such maps are called statistics.) If $T$ is a boundedly complete sufficient statistic for $\theta$, and $A$ is ancillary to $\theta$, then, conditional on $\theta$, $T$ is independent of $A$. That is, $T \perp A \mid \theta$.
Proof

Let $P_\theta^T$ and $P_\theta^A$ be the marginal distributions of $T$ and $A$ respectively.
Denote by $A^{-1}(B)$ the preimage of a set $B$ under the map $A$. For any measurable set $B \in \mathcal{B}$ we have

$$P_\theta^A(B) = P_\theta(A^{-1}(B)) = \int_Y P_\theta(A^{-1}(B) \mid T = t)\, P_\theta^T(dt).$$

The distribution $P_\theta^A$ does not depend on $\theta$ because $A$ is ancillary. Likewise, $P_\theta(\cdot \mid T = t)$ does not depend on $\theta$ because $T$ is sufficient. Therefore

$$\int_Y {\big [}P(A^{-1}(B) \mid T = t) - P^A(B){\big ]}\, P_\theta^T(dt) = 0.$$

Note that the integrand (the function inside the integral) is a function of $t$ and not of $\theta$. Therefore, since $T$ is boundedly complete, the function

$$g(t) = P(A^{-1}(B) \mid T = t) - P^A(B)$$

is zero for $P_\theta^T$-almost all values of $t$, and thus $P(A^{-1}(B) \mid T = t) = P^A(B)$ for almost all $t$. Therefore, $A$ is independent of $T$.
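The statement above can be illustrated numerically. The sketch below (an illustration, not part of the original article) uses the normal location model $N(\mu, 1)$: the sample mean is a complete sufficient statistic for $\mu$, and the sample range is ancillary because its distribution is unchanged by shifts in $\mu$. Basu's theorem then predicts the two are independent, so their empirical correlation should be near zero for every value of $\mu$:

```python
import numpy as np

# Monte Carlo sketch of Basu's theorem in the N(mu, 1) location model.
# T = sample mean is complete and sufficient for mu; A = sample range is
# ancillary (shifting mu shifts every observation, leaving the range's
# distribution unchanged).  By Basu's theorem, T and A are independent.

rng = np.random.default_rng(0)

def mean_range_correlation(mu, n=10, reps=200_000):
    """Empirical correlation between sample mean and sample range."""
    x = rng.normal(loc=mu, scale=1.0, size=(reps, n))
    t = x.mean(axis=1)                 # complete sufficient statistic T
    a = x.max(axis=1) - x.min(axis=1)  # ancillary statistic A
    return np.corrcoef(t, a)[0, 1]

for mu in (-3.0, 0.0, 5.0):
    r = mean_range_correlation(mu)
    print(f"mu = {mu:+.1f}: corr(T, A) = {r:+.4f}")  # near zero for every mu
```

Independence is of course stronger than zero correlation; the simulation only checks a necessary consequence, while the theorem delivers the full independence.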
Example

Independence of sample mean and sample variance of a normal distribution

Let $X_1, X_2, \ldots, X_n$ be independent, identically distributed normal random variables with mean $\mu$ and variance $\sigma^2$.
Then with respect to the parameter $\mu$, one can show that

$$\widehat{\mu} = \frac{\sum X_i}{n},$$

the sample mean, is a complete and sufficient statistic – it is all the information one can derive to estimate $\mu$, and no more – and

$$\widehat{\sigma}^2 = \frac{\sum \left(X_i - \bar{X}\right)^2}{n-1},$$

the sample variance, is an ancillary statistic – its distribution does not depend on $\mu$.
Therefore, from Basu's theorem it follows that these statistics are independent conditional on $\mu$ and on $\sigma^2$.
This independence result can also be proven by Cochran's theorem.
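As an illustrative check (not part of the original article), the example above can be verified by simulation: for normal samples, the empirical correlation between the sample mean and the sample variance is near zero, whereas for a skewed distribution such as the exponential, where the hypotheses of Basu's theorem do not apply, the two statistics are visibly correlated:

```python
import numpy as np

# Monte Carlo check of the example: for normal samples the sample mean
# and the (unbiased) sample variance are independent, so their empirical
# correlation is near zero.  For an exponential distribution, which is
# skewed, the mean and variance are clearly positively correlated.

rng = np.random.default_rng(42)

def mean_var_correlation(draw, n=10, reps=200_000):
    """Empirical correlation between sample mean and sample variance."""
    x = draw(size=(reps, n))
    xbar = x.mean(axis=1)
    s2 = x.var(axis=1, ddof=1)  # unbiased sample variance
    return np.corrcoef(xbar, s2)[0, 1]

r_normal = mean_var_correlation(lambda size: rng.normal(1.0, 2.0, size))
r_expon = mean_var_correlation(lambda size: rng.exponential(1.0, size))
print(f"normal:      corr(mean, var) = {r_normal:+.4f}")  # near zero
print(f"exponential: corr(mean, var) = {r_expon:+.4f}")   # clearly positive
```

The contrast with the exponential case reflects the characterization result below: only for the normal distribution are the sample mean and sample variance independent.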
Further, this property (that the sample mean and sample variance of the normal distribution are independent) characterizes the normal distribution – no other distribution has this property.[3]

Notes

1. ^ Basu (1955)
2. ^ Ghosh, Malay; Mukhopadhyay, Nitis; Sen, Pranab Kumar (2011). Sequential Estimation. Wiley Series in Probability and Statistics. Vol. 904. John Wiley & Sons. p. 80. ISBN 9781118165911. "The following theorem, due to Basu ... helps us in proving independence between certain types of statistics, without actually deriving the joint and marginal distributions of the statistics involved. This is a very powerful tool and it is often used ..."
3. ^ Geary, R. C. (1936). "The Distribution of 'Student's' Ratio for Non-Normal Samples". Supplement to the Journal of the Royal Statistical Society. 3 (2): 178–184. doi:10.2307/2983669. JFM 63.1090.03. JSTOR 2983669.

References

Basu, D. (1955). "On Statistics Independent of a Complete Sufficient Statistic". Sankhyā. 15 (4): 377–380. JSTOR 25048259. MR 0074745. Zbl 0068.13401.
Mukhopadhyay, Nitis (2000). Probability and Statistical Inference. Statistics: A Series of Textbooks and Monographs. Vol. 162. Florida: CRC Press USA. ISBN 0-8247-0379-0.
Boos, Dennis D.; Hughes-Oliver, Jacqueline M. (Aug 1998). "Applications of Basu's Theorem". The American Statistician. 52 (3): 218–221. doi:10.2307/2685927. JSTOR 2685927. MR 1650407.
Ghosh, Malay (October 2002). "Basu's Theorem with Applications: A Personalistic Review". Sankhyā: The Indian Journal of Statistics, Series A. 64 (3): 509–531. JSTOR 25051412. MR 1985397.