Journal: Estimator and Sampling

Estimator’s Bias vs. Sampling Variability 

Bias of an estimator refers to its tendency to systematically over- or underestimate the parameter, while sampling variability refers to how much the estimate varies from sample to sample.

Bias

A statistic is biased if the mean of the sampling distribution of the statistic is not equal to the parameter.

The bias of an estimator θ̂ of a parameter θ is

Bias(θ̂) = E(θ̂) − θ

The sample mean is an unbiased estimate of the population mean µ: the mean of the sampling distribution of the sample mean equals µ. Any individual sample mean may underestimate or overestimate µ, but there is no systematic tendency for sample means to either under- or overestimate µ.
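
To make this concrete, here is a minimal simulation sketch (the population parameters, sample size, and replicate count below are illustrative assumptions, not from the original):

import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, reps = 10.0, 2.0, 30, 100_000

# Draw many samples and average each one; the mean of the sample means
# should sit very close to mu, showing no systematic bias.
sample_means = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
print(sample_means.mean())  # ~10.0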

Variance in a population is

σ² = Σ(X − µ)² / N

Variance in a sample is

s² = Σ(X − M)² / (N − 1)

where M is the sample mean.

Note that if we use N in the denominator of the sample variance formula, the estimates tend to be too low and are therefore biased. The formula with N − 1 in the denominator gives an unbiased estimate of the population variance.
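
A quick simulation sketch of this bias (the true variance, sample size, and replicate count are assumed for illustration):

import numpy as np

rng = np.random.default_rng(1)
sigma2, n, reps = 4.0, 10, 200_000

samples = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
var_n = samples.var(axis=1, ddof=0)   # divide by N (biased)
var_n1 = samples.var(axis=1, ddof=1)  # divide by N - 1 (unbiased)
print(var_n.mean())   # ~3.6, systematically below sigma^2 = 4
print(var_n1.mean())  # ~4.0, matches sigma^2 on average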

Sampling Variability

The sampling variability of a statistic refers to how much the statistic varies from sample to sample and is measured by its standard error. The smaller the standard error, the less the sampling variability.

The standard error of the mean is

σ_M = σ / √N

The larger the sample size N, the smaller the standard error of the mean and therefore, the lower the sampling variability.
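
The shrinking standard error can be checked with a short sketch (illustrative values assumed):

import numpy as np

rng = np.random.default_rng(2)
sigma, reps = 2.0, 10_000

# The empirical spread of the sample mean tracks sigma / sqrt(N).
for n in (10, 100, 1000):
    means = rng.normal(0.0, sigma, size=(reps, n)).mean(axis=1)
    print(n, means.std(), sigma / np.sqrt(n))  # empirical vs. theoretical SE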

The smaller the standard error of a statistic, the more efficient the statistic.

Mean squared error (MSE)

Another important characteristic of an estimator is the mean squared error (MSE). The MSE captures the error that the estimator makes over its sampling distribution and thus measures its average performance over many repeated samplings.

The MSE of an estimator is

MSE(θ̂) = E[(θ̂ − θ)²] = Bias(θ̂)² + Var(θ̂)

Note that the MSE decomposes into the sum of the squared bias and the variance of the estimator. Both measures are important and need to be minimized to achieve optimal estimator performance. It is common to trade some increase in bias for a larger decrease in variance, and vice versa. A small simulation verifying the decomposition follows the figure below.

[Figure: graphical illustration of low/high bias and low/high variance as the offset and spread of estimates around the true value]

Source: http://scott.fortmann-roe.com/docs/BiasVariance.html
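
Here is a minimal sketch verifying the MSE decomposition numerically, using the biased variance estimator as the example (all values assumed for illustration):

import numpy as np

rng = np.random.default_rng(3)
sigma2, n, reps = 4.0, 10, 200_000

samples = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
est = samples.var(axis=1, ddof=0)   # biased variance estimator

mse = np.mean((est - sigma2) ** 2)  # E[(estimator - theta)^2] directly
bias = est.mean() - sigma2          # E[estimator] - theta
decomposed = bias ** 2 + est.var()  # squared bias + variance
print(mse, decomposed)              # the two quantities agree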

Bootstrap

The bootstrap is extremely useful for analysts performing estimation. It refers to any procedure that relies on random sampling with replacement. Using the bootstrap, analysts can assign measures of accuracy (e.g. bias, variance, confidence intervals) to sample estimates. In other words, the bootstrap generates samples of the estimator of interest from the estimates we calculated on the original data.

Essentially there are two types of bootstrap: parametric and non-parametric. In the parametric bootstrap, analysts obtain an estimate of the parameter from the original sample, then use the fitted statistical distribution to generate many datasets. In the non-parametric bootstrap, the datasets are generated by resampling the original data with replacement instead. In each dataset, analysts compute the statistic of interest to form a sampling distribution, as sketched below.
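
A minimal sketch of both variants, estimating the standard error of the median (the exponential data, sample size, and replicate count are assumptions for illustration):

import numpy as np

rng = np.random.default_rng(4)
data = rng.exponential(scale=2.0, size=100)  # stand-in for the observed sample
b = 10_000                                   # number of bootstrap replicates

# Non-parametric: resample the observed data with replacement.
nonparam = np.array([
    np.median(rng.choice(data, size=data.size, replace=True))
    for _ in range(b)
])

# Parametric: estimate the parameter, then simulate from the fitted model.
scale_hat = data.mean()                      # MLE of the exponential scale
param = np.array([
    np.median(rng.exponential(scale=scale_hat, size=data.size))
    for _ in range(b)
])

print(nonparam.std(), param.std())           # bootstrap standard errors
print(np.percentile(nonparam, [2.5, 97.5]))  # 95% percentile interval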

Sampling Distribution of the Mean

The sample mean, i.e. the mean of the random variable in the sample, has a sampling distribution. For large sample sizes, the Central Limit Theorem (CLT) tells us this distribution is approximately normal:

X̄ ~ N(µ, σ²/N)
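
A brief sketch of the CLT at work (a skewed exponential population is assumed here to show that normality of the sample mean does not require a normal population):

import numpy as np

rng = np.random.default_rng(5)
n, reps = 50, 100_000

# Even for a skewed population, the distribution of the sample mean is
# approximately normal with mean mu and standard deviation sigma / sqrt(N).
means = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)
print(means.mean(), means.std())  # ~1.0 and ~1 / sqrt(50), respectively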

Sampling Distribution of the Variance

How well does the sample variance estimate the true variance? For the variance computed with N in the denominator, averaging over a very large number of replicates (i.e. a large number of sample variances), it is found that:

E(s²_N) = ((N − 1)/N) σ²

where s²_N is the sample variance computed with N in the denominator.

We can see that the expected value of this sample variance is less than the true variance. However, as N grows, the factor (N − 1)/N approaches 1 and the expected value gets very close to the true variance.
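
A short sketch of this convergence (the true variance and replicate count are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(6)
sigma2, reps = 4.0, 10_000

# The average N-denominator variance approaches sigma^2 = 4 as N grows.
for n in (5, 20, 100, 1000):
    var_n = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n)).var(axis=1, ddof=0)
    print(n, var_n.mean(), (n - 1) / n * sigma2)  # empirical vs. theoretical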

Sampling Errors

Stochastic Noise

Stochastic noise consists of random fluctuations or measurement errors in the data which are not modeled. It adds apparent complexity to the data that, if not handled appropriately, can result in inaccurate analysis.

Systematic Error

Systematic error refers to errors that affect the collective quality of the sample data. For example, sampling respondents for an election poll might be done poorly if you were to only call people with land-lines (your respondents would likely skew older). This may result in a sampling bias in your polls.

Generally, stochastic noise and systematic errors cannot be modeled statistically and need to be dealt with on a case-by-case basis. It is important not to fit the model to the noise. This problem is called over-fitting, where the model fits the in-sample data well but performs poorly on out-of-sample data points, as the sketch below illustrates.
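
A minimal over-fitting sketch (the toy linear signal, noise level, and polynomial degrees are all assumptions for illustration):

import numpy as np

rng = np.random.default_rng(7)

# Noisy linear data: the true signal is 2x + 1 plus stochastic noise.
x_train = np.linspace(0, 1, 15)
y_train = 2 * x_train + 1 + rng.normal(0, 0.3, x_train.size)
x_test = np.linspace(0, 1, 200)
y_test = 2 * x_test + 1 + rng.normal(0, 0.3, x_test.size)

# A high-degree polynomial chases the noise: tiny in-sample error,
# but noticeably worse error on fresh out-of-sample points.
for degree in (1, 10):
    coeffs = np.polyfit(x_train, y_train, degree)
    in_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    out_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(degree, in_err, out_err)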
