Lehmann–Scheffé theorem

In statistics, the Lehmann–Scheffé theorem is a prominent statement, tying together the ideas of completeness, sufficiency, uniqueness, and best unbiased estimation.[1] The theorem states that any estimator that is unbiased for a given unknown quantity and that depends on the data only through a complete, sufficient statistic is the unique best unbiased estimator of that quantity. The Lehmann–Scheffé theorem is named after Erich Leo Lehmann and Henry Scheffé, given their two early papers.[2][3]

If T is a complete sufficient statistic for θ and E(g(T)) = τ(θ), then g(T) is the uniformly minimum-variance unbiased estimator (UMVUE) of τ(θ).

Statement

Let $\vec{X} = X_1, X_2, \ldots, X_n$ be a random sample from a distribution that has p.d.f. (or p.m.f. in the discrete case) $f(x:\theta)$, where $\theta \in \Omega$ is a parameter in the parameter space. Suppose $Y = u(\vec{X})$ is a sufficient statistic for θ, and let $\{f_Y(y:\theta) : \theta \in \Omega\}$ be a complete family. If $\varphi$ is such that $\operatorname{E}[\varphi(Y)] = \theta$, then $\varphi(Y)$ is the unique MVUE of θ.
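As a concrete illustration (a standard textbook instance, not drawn from the sources cited above): let $X_1, \ldots, X_n$ be i.i.d. Poisson(θ). Then $Y = \sum_{i=1}^n X_i$ is sufficient for θ (by the factorization theorem) and complete (the Poisson family is a one-parameter exponential family), and $\varphi(Y) = Y/n = \bar{X}$ satisfies $\operatorname{E}[\bar{X}] = \theta$. By the theorem, the sample mean is the unique MVUE of θ.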

Proof

By the Rao–Blackwell theorem, if $Z$ is an unbiased estimator of θ, then $\varphi(Y) := \operatorname{E}[Z \mid Y]$ defines an unbiased estimator of θ whose variance is not greater than that of $Z$.

Now we show that this function is unique. Suppose $W$ is another candidate MVUE of θ. Then again $\psi(Y) := \operatorname{E}[W \mid Y]$ defines an unbiased estimator of θ whose variance is not greater than that of $W$. Since both $\varphi(Y)$ and $\psi(Y)$ are unbiased,

$\operatorname{E}[\varphi(Y) - \psi(Y)] = 0$ for all $\theta \in \Omega$.

Since $\{f_Y(y:\theta) : \theta \in \Omega\}$ is a complete family,

$\operatorname{E}[\varphi(Y) - \psi(Y)] = 0 \implies \varphi(y) - \psi(y) = 0$ for all $\theta \in \Omega$,

and therefore $\varphi$ is the unique function of $Y$ with variance not greater than that of any other unbiased estimator. We conclude that $\varphi(Y)$ is the MVUE.
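Continuing the Poisson illustration from the previous section: starting from the crude unbiased estimator $Z = X_1$, conditioning on $Y = \sum_{i=1}^n X_i$ gives, by the symmetry of $X_1, \ldots, X_n$, $\varphi(Y) = \operatorname{E}[X_1 \mid Y] = Y/n = \bar{X}$, so Rao–Blackwellization followed by the uniqueness argument above recovers the sample mean as the MVUE.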

Example for when using a non-complete minimal sufficient statistic

An example of an improvable Rao–Blackwell improvement, when using a minimal sufficient statistic that is not complete, was provided by Galili and Meilijson in 2016.[4] Let $X_1, \ldots, X_n$ be a random sample from a scale-uniform distribution $X \sim U((1-k)\theta, (1+k)\theta)$ with unknown mean $\operatorname{E}[X] = \theta$ and known design parameter $k \in (0,1)$. In the search for "best" possible unbiased estimators for θ, it is natural to consider $X_1$ as an initial (crude) unbiased estimator for θ and then try to improve it. Since $X_1$ is not a function of $T = \left(X_{(1)}, X_{(n)}\right)$, the minimal sufficient statistic for θ (where $X_{(1)} = \min_i X_i$ and $X_{(n)} = \max_i X_i$), it may be improved using the Rao–Blackwell theorem as follows:

$\hat{\theta}_{RB} = \operatorname{E}_\theta[X_1 \mid X_{(1)}, X_{(n)}] = \frac{X_{(1)} + X_{(n)}}{2}.$

However, the following unbiased estimator can be shown to have lower variance:

$\hat{\theta}_{LV} = \frac{1}{k^2 \frac{n-1}{n+1} + 1} \cdot \frac{(1-k)X_{(1)} + (1+k)X_{(n)}}{2}.$

And in fact, it can be improved even further with the following estimator:

$\hat{\theta}_{\text{BAYES}} = \frac{n+1}{n} \left[1 - \frac{\frac{X_{(1)}(1+k)}{X_{(n)}(1-k)} - 1}{\left(\frac{X_{(1)}(1+k)}{X_{(n)}(1-k)}\right)^{n+1} - 1}\right] \frac{X_{(n)}}{1+k}.$

The model is a scale model, so optimal equivariant estimators can be derived for loss functions that are invariant.[5]
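The variance ordering of these three estimators can be checked by simulation. The following is a minimal Monte Carlo sketch in Python/NumPy (not taken from the cited paper; the parameter values θ = 1, k = 0.5, n = 10 and all variable names are illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative settings (assumptions, not from the paper).
    theta, k, n, reps = 1.0, 0.5, 10, 200_000

    # Draw `reps` samples of size n from U((1-k)*theta, (1+k)*theta).
    X = rng.uniform((1 - k) * theta, (1 + k) * theta, size=(reps, n))
    x_min, x_max = X.min(axis=1), X.max(axis=1)

    # Rao-Blackwell estimator: the midrange of the sample.
    theta_rb = (x_min + x_max) / 2

    # Lower-variance unbiased estimator from Galili & Meilijson (2016).
    theta_lv = ((1 - k) * x_min + (1 + k) * x_max) / 2 / (k**2 * (n - 1) / (n + 1) + 1)

    # Unbiased generalized Bayes estimator.
    r = (x_min * (1 + k)) / (x_max * (1 - k))
    theta_bayes = (n + 1) / n * (1 - (r - 1) / (r ** (n + 1) - 1)) * x_max / (1 + k)

    for name, est in [("RB", theta_rb), ("LV", theta_lv), ("BAYES", theta_bayes)]:
        print(f"{name:5s} mean = {est.mean():.4f}, variance = {est.var():.6f}")

Running this should show all three empirical means near θ, with the variance decreasing from $\hat{\theta}_{RB}$ to $\hat{\theta}_{LV}$ to $\hat{\theta}_{\text{BAYES}}$, in line with the ordering described above.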
See also

Basu's theorem
Complete class theorem
Rao–Blackwell theorem

References

1. Casella, George (2001). Statistical Inference. Duxbury Press. p. 369. ISBN 978-0-534-24312-8.
2. Lehmann, E. L.; Scheffé, H. (1950). "Completeness, similar regions, and unbiased estimation. I". Sankhyā. 10 (4): 305–340. doi:10.1007/978-1-4614-1412-4_23. JSTOR 25048038. MR 0039201.
3. Lehmann, E. L.; Scheffé, H. (1955). "Completeness, similar regions, and unbiased estimation. II". Sankhyā. 15 (3): 219–236. doi:10.1007/978-1-4614-1412-4_24. JSTOR 25048243. MR 0072410.
4. Galili, Tal; Meilijson, Isaac (2016). "An Example of an Improvable Rao–Blackwell Improvement, Inefficient Maximum Likelihood Estimator, and Unbiased Generalized Bayes Estimator". The American Statistician. 70 (1): 108–113. doi:10.1080/00031305.2015.1100683. PMC 4960505. PMID 27499547.
5. Taraldsen, Gunnar (2020). "Micha Mandel (2020), 'The Scaled Uniform Model Revisited,' The American Statistician, 74:1, 98–100: Comment". The American Statistician. 74 (3): 315. doi:10.1080/00031305.2020.1769727.
