Lehmann–Scheffé theorem

In statistics, the Lehmann–Scheffé theorem is a prominent statement, tying together the ideas of completeness, sufficiency, uniqueness, and best unbiased estimation.[1] The theorem states that any estimator that is unbiased for a given unknown quantity and that depends on the data only through a complete, sufficient statistic is the unique best unbiased estimator of that quantity. The Lehmann–Scheffé theorem is named after Erich Leo Lehmann and Henry Scheffé, given their two early papers.[2][3]

If T is a complete sufficient statistic for θ and E[g(T)] = τ(θ), then g(T) is the uniformly minimum-variance unbiased estimator (UMVUE) of τ(θ).

Statement

Let X_1, X_2, …, X_n be a random sample from a distribution that has p.d.f. (or p.m.f. in the discrete case) f(x : θ), where θ ∈ Ω is a parameter in the parameter space. Suppose Y = u(X_1, …, X_n) is a sufficient statistic for θ, and let {f_Y(y : θ) : θ ∈ Ω} be a complete family. If φ satisfies E[φ(Y)] = θ, then φ(Y) is the unique MVUE of θ.
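The statement can be made concrete with a standard textbook instance (stated here for illustration; it is not drawn from this article's sources): for an i.i.d. Poisson sample, the sample mean is an unbiased function of a complete sufficient statistic and is therefore the UMVUE of the rate.

```latex
% Worked instance: X_1, ..., X_n iid Poisson(theta), theta > 0.
% T = sum_i X_i is sufficient (factorization theorem) and complete
% (the Poisson family is a full-rank exponential family).
\begin{align*}
  T &= \sum_{i=1}^{n} X_i, &
  \operatorname{E}\!\left[\frac{T}{n}\right]
    &= \frac{1}{n}\sum_{i=1}^{n}\operatorname{E}[X_i] = \theta .
\end{align*}
% Since \bar{X} = T/n is an unbiased function of the complete
% sufficient statistic T, the theorem gives that \bar{X} is the
% unique UMVUE of theta.
```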

Proof

By the Rao–Blackwell theorem, if Z is an unbiased estimator of θ, then φ(Y) := E[Z ∣ Y] defines an unbiased estimator of θ with the property that its variance is not greater than that of Z.

Now we show that this function is unique. Suppose W is another candidate MVUE of θ. Then again ψ(Y) := E[W ∣ Y] defines an unbiased estimator of θ with the property that its variance is not greater than that of W. Then

E[φ(Y) − ψ(Y)] = 0 for all θ ∈ Ω.

Since {f_Y(y : θ) : θ ∈ Ω} is a complete family,

E[φ(Y) − ψ(Y)] = 0 for all θ ∈ Ω implies φ(y) − ψ(y) = 0 almost everywhere,

and therefore φ is the unique function of Y with variance not greater than that of any other unbiased estimator. We conclude that φ(Y) is the MVUE.
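The Rao–Blackwell conditioning step of the proof can be illustrated numerically. A minimal sketch, assuming a Bernoulli(p) sample, where T = ΣX_i is complete and sufficient and E[X_1 ∣ T] = T/n; the parameter values and simulation size below are arbitrary illustration choices:

```python
import numpy as np

# Assumed setup (not from the article): X_1, ..., X_n iid Bernoulli(p);
# T = sum(X_i) is complete and sufficient, and E[X_1 | T] = T/n.
rng = np.random.default_rng(0)
p, n, reps = 0.3, 20, 100_000

X = rng.binomial(1, p, size=(reps, n))
Z = X[:, 0]          # crude unbiased estimator of p (uses one observation)
T = X.sum(axis=1)    # complete sufficient statistic
phi = T / n          # Rao-Blackwellized estimator E[Z | T] = T/n

print(Z.mean(), phi.mean())  # both close to p: unbiasedness is preserved
print(Z.var(), phi.var())    # variance drops from ~p(1-p) to ~p(1-p)/n
```

Both estimators average to p, but the conditioned estimator φ(T) = T/n has roughly 1/n of the crude estimator's variance, and by the theorem no other unbiased estimator can do better.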

Example for when using a non-complete minimal sufficient statistic

An example of an improvable Rao–Blackwell improvement, when using a minimal sufficient statistic that is not complete, was provided by Galili and Meilijson in 2016.[4] Let X_1, …, X_n be a random sample from a scale-uniform distribution X ~ U((1 − k)θ, (1 + k)θ) with unknown mean E[X] = θ and known design parameter k ∈ (0, 1). In the search for "best" possible unbiased estimators for θ, it is natural to consider X_1 as an initial (crude) unbiased estimator for θ and then try to improve it. Since X_1 is not a function of T = (X_(1), X_(n)), the minimal sufficient statistic for θ (where X_(1) = min_i X_i and X_(n) = max_i X_i), it may be improved using the Rao–Blackwell theorem as follows:

θ̂_RB = E_θ[X_1 ∣ X_(1), X_(n)] = (X_(1) + X_(n)) / 2.

However, the following unbiased estimator can be shown to have lower variance:

θ̂_LV = 1 / (k²(n − 1)/(n + 1) + 1) · ((1 − k)X_(1) + (1 + k)X_(n)) / 2.

And in fact, it can be improved even further when using the following estimator:

θ̂_BAYES = ((n + 1)/n) · [1 − (R − 1)/(R^(n+1) − 1)] · X_(n)/(1 + k), where R = X_(1)(1 + k) / (X_(n)(1 − k)).

The model is a scale model. Optimal equivariant estimators can then be derived for loss functions that are invariant.[5]

See also

- Basu's theorem
- Complete class theorem
- Rao–Blackwell theorem

References

1. Casella, George (2001). Statistical Inference. Duxbury Press. p. 369. ISBN 978-0-534-24312-8.
2. Lehmann, E. L.; Scheffé, H. (1950). "Completeness, similar regions, and unbiased estimation. I". Sankhyā. 10 (4): 305–340. doi:10.1007/978-1-4614-1412-4_23. JSTOR 25048038. MR 0039201.
3. Lehmann, E. L.; Scheffé, H. (1955). "Completeness, similar regions, and unbiased estimation. II". Sankhyā. 15 (3): 219–236. doi:10.1007/978-1-4614-1412-4_24. JSTOR 25048243. MR 0072410.
4. Tal Galili & Isaac Meilijson (31 Mar 2016). "An Example of an Improvable Rao–Blackwell Improvement, Inefficient Maximum Likelihood Estimator, and Unbiased Generalized Bayes Estimator". The American Statistician. 70 (1): 108–113. doi:10.1080/00031305.2015.1100683. PMC 4960505. PMID 27499547.
5. Taraldsen, Gunnar (2020). "Micha Mandel (2020), "The Scaled Uniform Model Revisited," The American Statistician, 74:1, 98–100: Comment". The American Statistician. 74 (3): 315. doi:10.1080/00031305.2020.1769727.
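The variance ranking claimed in the scale-uniform example above can be checked by Monte Carlo simulation. A sketch, not taken from the cited paper's code; θ, k, n and the replication count are arbitrary illustration values:

```python
import numpy as np

# Arbitrary illustration values, not from the cited paper.
rng = np.random.default_rng(0)
theta, k, n, reps = 2.0, 0.5, 10, 200_000

# X ~ U((1-k)*theta, (1+k)*theta), so E[X] = theta
X = rng.uniform((1 - k) * theta, (1 + k) * theta, size=(reps, n))
lo, hi = X.min(axis=1), X.max(axis=1)  # minimal sufficient statistic (X_(1), X_(n))

# Rao-Blackwell improvement of X_1
rb = (lo + hi) / 2
# lower-variance unbiased estimator
lv = ((1 - k) * lo + (1 + k) * hi) / 2 / (k**2 * (n - 1) / (n + 1) + 1)
# generalized Bayes estimator, with R = X_(1)(1+k) / (X_(n)(1-k))
R = lo * (1 + k) / (hi * (1 - k))
bayes = (n + 1) / n * (1 - (R - 1) / (R ** (n + 1) - 1)) * hi / (1 + k)

for name, est in [("RB", rb), ("LV", lv), ("BAYES", bayes)]:
    print(name, est.mean(), est.var())
```

All three sample means stay close to θ, while the empirical variances should reproduce the ordering discussed above, with θ̂_LV and θ̂_BAYES improving on the Rao–Blackwell estimator.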
