Consistent estimator

In statistics, a sequence of estimators for parameter θ0 is said to be consistent (or asymptotically consistent) if this sequence converges in probability to θ0. It means that the distributions of the estimators become more and more concentrated near the true value of the parameter being estimated, so that the probability of the estimator being arbitrarily close to θ0 converges to one.

In practice one usually constructs a single estimator as a function of an available sample of size n, and then imagines being able to keep collecting data and expanding the sample ad infinitum. In this way one obtains a sequence of estimators indexed by n, and consistency is understood as the sample size “grows to infinity”. If this sequence converges in probability to the true value θ0, we call it a consistent estimator; otherwise the estimator is said to be inconsistent.

The consistency defined here is sometimes referred to as weak consistency. When convergence in probability is replaced with almost sure convergence, the sequence of estimators is said to be strongly consistent.

Definition

Loosely speaking, an estimator Tn of parameter θ is said to be consistent if it converges in probability to the true value of the parameter:

    \underset{n\to\infty}{\operatorname{plim}}\; T_n = \theta

A more rigorous definition takes into account the fact that θ is actually unknown, and thus the convergence in probability must take place for every possible value of this parameter. Suppose { pθ : θ ∈ Θ } is a family of distributions (the parametric model), and Xθ = { X1, X2, … : Xi ~ pθ } is an infinite sample from the distribution pθ. Let { Tn(Xθ) } be a sequence of estimators for some parameter g(θ). Usually Tn will be based on the first n observations of a sample. Then this sequence {Tn} is said to be (weakly) consistent if

    \underset{n\to\infty}{\operatorname{plim}}\; T_n(X^{\theta}) = g(\theta), \quad \text{for all } \theta \in \Theta

This definition uses g(θ) instead of simply θ, because often one is interested in estimating a certain function or a sub-vector of the underlying parameter. In the next example we estimate the location parameter of the model, but not the scale:

Example: sample mean for normal random variables

Suppose one has a sequence of observations {X1, X2, …} from a normal N(μ, σ²) distribution. To estimate μ based on the first n observations, we use the sample mean: Tn = (X1 + … + Xn)/n. This defines a sequence of estimators, indexed by the sample size n.

From the properties of the normal distribution, we know that Tn is itself normally distributed, with mean μ and variance σ²/n. Equivalently, \scriptstyle (T_n-\mu)/(\sigma/\sqrt{n}) has a standard normal distribution. Then

    \Pr\left[\,|T_n-\mu|\geq\varepsilon\,\right] = \Pr\left[\frac{\sqrt{n}\,|T_n-\mu|}{\sigma} \geq \frac{\sqrt{n}\,\varepsilon}{\sigma}\right] = 2\left(1-\Phi\left(\frac{\sqrt{n}\,\varepsilon}{\sigma}\right)\right) \longrightarrow 0

as n tends to infinity, for any fixed ε > 0. Therefore, the sequence Tn of sample means is consistent for the population mean μ.
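
As a hedged numerical illustration of this convergence (the values μ = 2, σ = 3, ε = 0.1 and the use of NumPy/SciPy are arbitrary choices, not part of the article), one can compare the empirical frequency of the event |Tn − μ| ≥ ε with the theoretical value 2(1 − Φ(√n ε/σ)):

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    mu, sigma, eps = 2.0, 3.0, 0.1   # illustrative true parameters and tolerance

    for n in [10, 100, 1000, 10000]:
        # 2000 replications of the sample mean T_n at each sample size n
        t_n = rng.normal(mu, sigma, size=(2000, n)).mean(axis=1)
        empirical = np.mean(np.abs(t_n - mu) >= eps)
        theoretical = 2 * norm.sf(np.sqrt(n) * eps / sigma)  # 2(1 - Phi(sqrt(n) eps / sigma))
        print(n, empirical, round(theoretical, 4))

Both columns shrink toward zero as n grows, which is exactly the statement of consistency.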

Establishing consistency

The notion of asymptotic consistency is very close to, indeed almost synonymous with, the notion of convergence in probability. As such, any theorem, lemma, or property which establishes convergence in probability may be used to prove consistency. Many such tools exist:
  • In order to demonstrate consistency directly from the definition one can use the inequality

        \Pr\left[h(T_n-\theta)\geq\varepsilon\right] \leq \frac{\operatorname{E}\left[h(T_n-\theta)\right]}{\varepsilon},

    the most common choice for the function h being either the absolute value (in which case it is known as the Markov inequality) or the quadratic function (respectively Chebyshev's inequality).
  • Another useful result is the continuous mapping theorem: if Tn is consistent for θ and g(·) is a real-valued function continuous at the point θ, then g(Tn) will be consistent for g(θ):

        g(T_n) \xrightarrow{p} g(\theta)
  • Slutsky’s theorem can be used to combine several different estimators, or an estimator with a non-random convergent sequence. If Tn →p α and Sn →p β, then

        T_n + S_n \xrightarrow{p} \alpha+\beta, \quad T_n S_n \xrightarrow{p} \alpha\beta, \quad T_n / S_n \xrightarrow{p} \alpha/\beta, \text{ provided that } \beta \neq 0
  • If estimator Tn is given by an explicit formula, then most likely the formula will employ sums of random variables, and then the law of large numbers can be used: for a sequence {Xn} of random variables and under suitable conditions,

        \frac{1}{n}\sum_{i=1}^n g(X_i) \xrightarrow{p} \operatorname{E}[g(X)]

    (a numerical sketch combining this with the two preceding results follows this list).
  • If estimator Tn is defined implicitly, for example as a value that maximizes a certain objective function (see extremum estimator), then a more complicated argument involving stochastic equicontinuity has to be used.
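
To make the second, third, and fourth tools concrete, consider the plug-in variance estimator \scriptstyle s_n^2 = \frac{1}{n}\sum x_i^2 - \bar{x}_n^2: each average converges by the law of large numbers, \scriptstyle \bar{x}_n^2 converges in probability to (E[x])² by the continuous mapping theorem, and the difference converges to Var[x] by Slutsky’s theorem. A minimal numerical sketch in Python (the exponential distribution and the sample sizes are arbitrary choices, not part of the article):

    import numpy as np

    rng = np.random.default_rng(1)
    for n in [100, 10_000, 1_000_000]:
        x = rng.exponential(scale=2.0, size=n)  # E[x] = 2, Var[x] = 4
        mean_of_squares = np.mean(x**2)         # -> E[x^2]       (law of large numbers)
        squared_mean = np.mean(x)**2            # -> (E[x])^2     (LLN + continuous mapping)
        s2 = mean_of_squares - squared_mean     # -> Var[x] = 4   (Slutsky's theorem)
        print(n, s2)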

Unbiased but not consistent

An estimator can be unbiased but not consistent. For example, for an iid sample {x1, …, xn} one can use T(X) = x1 as the estimator of the mean E[x]. This estimator is unbiased, since E[x1] = E[x]; but it is inconsistent, because its distribution is the same for every n (it ignores all observations but the first) and therefore never concentrates around the true value.
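
A brief hedged check of this behaviour (the normal distribution and parameter values are arbitrary illustrative choices):

    import numpy as np

    rng = np.random.default_rng(2)
    mu = 5.0
    for n in [10, 1000, 100_000]:
        # 5000 replications; T(X) = x_1 uses only the first observation,
        # so its distribution does not depend on the sample size n at all
        t = rng.normal(mu, 1.0, size=5000)
        print(n, round(t.mean(), 3), round(t.std(), 3))  # mean near mu, spread never shrinks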

Biased but consistent

Alternatively, an estimator can be biased but consistent. For example, if the mean is estimated by \scriptstyle \frac{1}{n}\sum x_i + \frac{1}{n}, it is biased; but as n → ∞, it approaches the correct value, and so it is consistent.
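
A hedged numerical check of this example (true mean 5 and the normal distribution are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(3)
    mu = 5.0
    for n in [10, 1000, 100_000]:
        x = rng.normal(mu, 1.0, size=n)
        t = x.mean() + 1.0 / n   # biased by exactly 1/n, but the bias vanishes as n grows
        print(n, round(t, 4))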

See also

  • Fisher consistency — an alternative, although rarely used, concept of consistency for estimators
  • Consistent test — the notion of consistency in the context of hypothesis testing