SOLUTION: when do we use the S.E. (standard error) instead of the standard deviation for probability determination?

Question 1019220: when do we use the S.E. (standard error) instead of the standard deviation for probability determination?
Answer by Theo(13342):
standard error is used when you are dealing with the distribution of sample means.

the standard error is the standard deviation of the distribution of sample means.

this is different from the distribution of sample elements within one sample.

the difference is shown in the following example:

you have 3 samples.

each sample contains n elements.

each sample has a mean and a standard deviation.

sample 1 might have a mean of 20 and a standard deviation of 30.
sample 2 might have a mean of 30 and a standard deviation of 70.
sample 3 might have a mean of 40 and a standard deviation of 50.

you create a distribution of sample means by taking the mean of each of these samples and forming a distribution from them.

your distribution of sample means would then have a mean of 30 (the average of 20, 30, and 40) and, say, a standard deviation of 10.

the standard deviation of the distribution of sample means will be smaller than the standard deviation of each sample.

the larger the sample size, the tighter the distribution of sample means will be, i.e. the smaller its standard deviation.

the standard deviation of this distribution of sample means is called the standard error of the mean.
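
here is a small simulation sketch in Python (the numpy library and the particular numbers, a population mean of 30 and a standard deviation of 50, are my own assumptions for illustration, not part of the question) that shows the difference between the two spreads:

# a minimal simulation sketch: draw many samples, take each sample's mean,
# and compare the spread of those means with the spread inside a sample.
import numpy as np

rng = np.random.default_rng(0)

population_mean = 30    # assumed population mean
population_sd = 50      # assumed population standard deviation
n = 100                 # size of each sample
num_samples = 1000      # number of samples drawn

# each row is one sample of n elements
samples = rng.normal(population_mean, population_sd, size=(num_samples, n))

sample_means = samples.mean(axis=1)             # one mean per sample
typical_within_sd = samples.std(axis=1).mean()  # typical spread inside a sample
sd_of_means = sample_means.std()                # spread of the sample means

print(f"typical standard deviation within a sample: {typical_within_sd:.1f}")  # about 50
print(f"standard deviation of the sample means:     {sd_of_means:.1f}")        # about 5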

there is a formula to estimate the size of the standard error.

it is standard error = standard deviation of the population divided by the square root of the sample size.

if you don't have the standard deviation of the population, you use the standard deviation of the sample you are working with.

that's the difference between using a z-score and a t-score.

with a z-score, you use the population standard deviation in the equation.

with a t-score, you use the sample standard deviation.

the formula is the same.

it's only a matter of where you get the standard deviation from.

the formula is se = sd / sqrt(n)

se is the standard error of the mean.
sd is the standard deviation of the population or of the sample, whichever you can get.
n is the sample size.
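
for example, here is a small sketch in Python (all of the numbers below are made up for illustration, they are not taken from the question) showing that the z and t calculations have the same shape and differ only in which standard deviation gets plugged in:

# a minimal sketch with made-up numbers: the z-score uses the population
# standard deviation, the t-score uses the sample standard deviation,
# and both divide by sqrt(n) to get the standard error.
import math

sample_mean = 32.0     # hypothetical observed sample mean
claimed_mean = 30.0    # hypothetical population mean being tested
n = 100                # sample size

population_sd = 50.0   # known population standard deviation -> z-score
sample_sd = 48.0       # standard deviation computed from the sample -> t-score

z = (sample_mean - claimed_mean) / (population_sd / math.sqrt(n))
t = (sample_mean - claimed_mean) / (sample_sd / math.sqrt(n))

print(f"z = {z:.3f}")  # (32 - 30) / (50 / 10) = 0.400
print(f"t = {t:.3f}")  # (32 - 30) / (48 / 10) is about 0.417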

what this is telling you is that the larger your sample size is, the smaller the standard error of the distribution of sample means will be.

what is the sample size in the distribution of sample means?

it is the size of each individual sample from which a mean is calculated.

in my example above, i had 3 samples.

if each of those samples contained 100 elements, then the sample size would be 100.

if each of those samples contained 1000 elements, then the sample size would be 1000.

the theory behind the standard error assumes that all of the samples taken are the same size.
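
to see the effect of the sample size numerically, here is a quick sketch (using the same assumed standard deviation of 50 as in the simulation above):

# a quick numeric check of se = sd / sqrt(n), using an assumed
# standard deviation of 50 (the same made-up value as above).
import math

sd = 50.0
for n in (1, 100, 400, 1000):
    se = sd / math.sqrt(n)
    print(f"n = {n:5d}  ->  se = {se:.2f}")
# n = 1 gives se = 50.00, n = 100 gives 5.00, n = 1000 gives about 1.58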

what happens if the sample size is 1?

the formula is se = sd / sqrt(n) which becomes se = sd / sqrt(1) which becomes se = sd.

if the sample size is 1, then the standard error of the mean is the same as the standard deviation of the population or of the sample, whichever is used.

that's because, with a sample size of 1, each mean is just the single element in its sample, so it's not really a mean, it's just an element.

it's complicated and confusing, but, in general:

if you are dealing with a distribution of elements, then use standard deviation.

if you are dealing with a distribution of sample means, then use standard error.