Lehmann–Scheffé theorem

In statistics, the Lehmann–Scheffé theorem is a prominent statement tying together the ideas of completeness, sufficiency, uniqueness, and best unbiased estimation.[1] The theorem states that any estimator that is unbiased for a given unknown quantity and that depends on the data only through a complete, sufficient statistic is the unique best unbiased estimator of that quantity. The theorem is named after Erich Leo Lehmann and Henry Scheffé, who established it in two early papers.[2][3]

If $T$ is a complete sufficient statistic for $\theta$ and $\operatorname{E}[g(T)] = \tau(\theta)$, then $g(T)$ is the uniformly minimum-variance unbiased estimator (UMVUE) of $\tau(\theta)$.
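As an illustration (a standard textbook example, not drawn from the sources cited in this article), consider independent Poisson observations:

$$X_1, \dots, X_n \overset{\text{i.i.d.}}{\sim} \operatorname{Poisson}(\theta), \qquad T = \sum_{i=1}^{n} X_i.$$

Here $T$ is a complete sufficient statistic for $\theta$ and $\operatorname{E}[T/n] = \theta$, so the theorem gives that the sample mean $\bar{X} = T/n$ is the UMVUE of $\theta$.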

Statement

Let $\vec{X} = X_1, X_2, \dots, X_n$ be a random sample from a distribution that has p.d.f. (or p.m.f. in the discrete case) $f(x:\theta)$, where $\theta \in \Omega$ is a parameter in the parameter space. Suppose $Y = u(\vec{X})$ is a sufficient statistic for $\theta$, and let $\{ f_Y(y:\theta) : \theta \in \Omega \}$ be a complete family. If $\varphi$ satisfies $\operatorname{E}[\varphi(Y)] = \theta$, then $\varphi(Y)$ is the unique MVUE of $\theta$.

Proof

By the Rao–Blackwell theorem, if $Z$ is an unbiased estimator of $\theta$, then $\varphi(Y) := \operatorname{E}[Z \mid Y]$ defines an unbiased estimator of $\theta$ whose variance is not greater than that of $Z$.

Now we show that this function is unique and that it attains minimum variance. Suppose $W$ is any other unbiased estimator of $\theta$. Then again $\psi(Y) := \operatorname{E}[W \mid Y]$ defines an unbiased estimator of $\theta$ whose variance is not greater than that of $W$. Then

$$\operatorname{E}[\varphi(Y) - \psi(Y)] = 0 \quad \text{for all } \theta \in \Omega.$$

Since $\{ f_Y(y:\theta) : \theta \in \Omega \}$ is a complete family,

$$\operatorname{E}[\varphi(Y) - \psi(Y)] = 0 \implies \varphi(y) - \psi(y) = 0 \quad \text{almost everywhere, for all } \theta \in \Omega,$$

so $\varphi(Y) = \psi(Y)$ almost surely, and therefore

$$\operatorname{Var}(\varphi(Y)) = \operatorname{Var}(\psi(Y)) \leq \operatorname{Var}(W).$$

Since $W$ was an arbitrary unbiased estimator of $\theta$, the variance of $\varphi(Y)$ is not greater than that of any other unbiased estimator, and $\varphi$ is the unique such function of $Y$. We conclude that $\varphi(Y)$ is the MVUE.
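The Rao–Blackwell conditioning step used in this proof can be illustrated numerically. The following minimal Python sketch (an illustrative simulation, not part of the article; the Bernoulli model, seed, and sample sizes are arbitrary choices) compares the crude unbiased estimator $Z = X_1$ with $\varphi(Y) = \operatorname{E}[Z \mid Y]$ for the complete sufficient statistic $Y = \sum_i X_i$:

# Illustrative sketch: Rao-Blackwellizing Z = X_1 for a Bernoulli(theta) sample.
# For i.i.d. Bernoulli data, E[X_1 | Y] = Y / n, i.e. the sample mean.
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 0.3, 20, 100_000

x = rng.binomial(1, theta, size=(reps, n))
z = x[:, 0]                 # crude unbiased estimator Z = X_1
phi = x.sum(axis=1) / n     # phi(Y) = E[Z | Y] = Y / n

print("E[Z]      =", z.mean(),   " Var(Z)      =", z.var())
print("E[phi(Y)] =", phi.mean(), " Var(phi(Y)) =", phi.var())
# Both estimators are unbiased for theta, but Var(phi(Y)) is about
# theta*(1-theta)/n, far below Var(Z) = theta*(1-theta).

Both estimators average to roughly $\theta$, while the conditioned estimator has much smaller variance, as the Rao–Blackwell theorem guarantees.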

Example for when using a non-complete minimal sufficient statistic

An example of an improvable Rao–Blackwell improvement, when using a minimal sufficient statistic that is not complete, was provided by Galili and Meilijson in 2016.[4] Let $X_1, \ldots, X_n$ be a random sample from a scale-uniform distribution $X \sim U((1-k)\theta, (1+k)\theta)$ with unknown mean $\operatorname{E}[X] = \theta$ and known design parameter $k \in (0,1)$. In the search for "best" possible unbiased estimators for $\theta$, it is natural to consider $X_1$ as an initial (crude) unbiased estimator for $\theta$ and then try to improve it. Since $X_1$ is not a function of $T = (X_{(1)}, X_{(n)})$, the minimal sufficient statistic for $\theta$ (where $X_{(1)} = \min_i X_i$ and $X_{(n)} = \max_i X_i$), it may be improved using the Rao–Blackwell theorem as follows:

$$\hat{\theta}_{RB} = \operatorname{E}_\theta[X_1 \mid X_{(1)}, X_{(n)}] = \frac{X_{(1)} + X_{(n)}}{2}.$$

However, the following unbiased estimator can be shown to have lower variance:

$$\hat{\theta}_{LV} = \frac{1}{k^2 \frac{n-1}{n+1} + 1} \cdot \frac{(1-k) X_{(1)} + (1+k) X_{(n)}}{2}.$$

And in fact, it can be improved even further by using the following estimator:

$$\hat{\theta}_{\text{BAYES}} = \frac{n+1}{n} \left[ 1 - \frac{\frac{X_{(1)}(1+k)}{X_{(n)}(1-k)} - 1}{\left( \frac{X_{(1)}(1+k)}{X_{(n)}(1-k)} \right)^{n+1} - 1} \right] \frac{X_{(n)}}{1+k}.$$

The model is a scale model, so optimal equivariant estimators can then be derived for loss functions that are invariant.[5] (A simulation sketch comparing these estimators appears after the references below.)

See also

Basu's theorem
Complete class theorem
Rao–Blackwell theorem

References

1. Casella, George (2001). Statistical Inference. Duxbury Press. p. 369. ISBN 978-0-534-24312-8.
2. Lehmann, E. L.; Scheffé, H. (1950). "Completeness, similar regions, and unbiased estimation. I." Sankhyā. 10 (4): 305–340. doi:10.1007/978-1-4614-1412-4_23. JSTOR 25048038. MR 0039201.
3. Lehmann, E. L.; Scheffé, H. (1955). "Completeness, similar regions, and unbiased estimation. II". Sankhyā. 15 (3): 219–236. doi:10.1007/978-1-4614-1412-4_24. JSTOR 25048243. MR 0072410.
4. Galili, Tal; Meilijson, Isaac (2016). "An Example of an Improvable Rao–Blackwell Improvement, Inefficient Maximum Likelihood Estimator, and Unbiased Generalized Bayes Estimator". The American Statistician. 70 (1): 108–113. doi:10.1080/00031305.2015.1100683. PMC 4960505. PMID 27499547.
5. Taraldsen, Gunnar (2020). "Micha Mandel (2020), "The Scaled Uniform Model Revisited," The American Statistician, 74:1, 98–100: Comment". The American Statistician. 74 (3): 315. doi:10.1080/00031305.2020.1769727.
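The variance ranking described in the example above can be checked numerically. The following minimal Python sketch (an illustrative simulation under assumed values of $\theta$, $k$, $n$, and the replication count; it is not code from the cited paper) evaluates the three estimators on simulated data:

# Illustrative sketch: comparing the three unbiased estimators of theta for the
# scale-uniform model X ~ U((1-k)*theta, (1+k)*theta) from the example above.
import numpy as np

rng = np.random.default_rng(0)
theta, k, n, reps = 1.0, 0.5, 10, 200_000   # arbitrary illustrative choices

x = rng.uniform((1 - k) * theta, (1 + k) * theta, size=(reps, n))
x_min, x_max = x.min(axis=1), x.max(axis=1)

# Rao-Blackwellized estimator based on the minimal sufficient statistic
theta_rb = (x_min + x_max) / 2

# Lower-variance unbiased estimator from the example
theta_lv = ((1 - k) * x_min + (1 + k) * x_max) / 2 / (k**2 * (n - 1) / (n + 1) + 1)

# Generalized Bayes estimator from the example
r = (x_min * (1 + k)) / (x_max * (1 - k))
theta_bayes = (n + 1) / n * (1 - (r - 1) / (r**(n + 1) - 1)) * x_max / (1 + k)

for name, est in [("RB", theta_rb), ("LV", theta_lv), ("Bayes", theta_bayes)]:
    print(f"{name:>5}: mean={est.mean():.4f}  var={est.var():.6f}")

In such a run the empirical means of all three estimators should be close to $\theta$, while the empirical variances should decrease from $\hat{\theta}_{RB}$ to $\hat{\theta}_{LV}$ to $\hat{\theta}_{\text{BAYES}}$, matching the ranking described in the example.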