Question 1193195
the measured value for each person is the difference between that person's actual weight and the ideal weight.
if the difference is positive, the person is overweight.
if the difference is negative, the person is underweight.
if the difference is zero, the person is at the ideal weight.
from the data, it appears the average person is overweight by 15.3 pounds.
for this test, the population mean is assumed to be the ideal weight = 0.
sample size = 26
sample mean = 15.3
sample standard deviation = 27.5
population mean = 0 (as assumed above)
the standard error is equal to the sample standard deviation divided by the square root of the sample size = 27.5 / sqrt(26) = 5.393194 rounded to 6 decimal places.
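as a quick sanity check, here's a minimal python sketch of that calculation (the variable names are mine, chosen only for illustration):

import math

s = 27.5                 # sample standard deviation
n = 26                   # sample size
se = s / math.sqrt(n)    # standard error of the mean
print(round(se, 6))      # 5.393194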
the formula is t = (x - m) / s.
t is the t-score
x is the sample mean difference from the ideal.
m is the assumed population difference from the ideal.
s is the standard error.
it becomes:
t = (15.3 - 0) / 5.393194 = 2.836909 rounded to 6 decimal places.
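a small python sketch of the same arithmetic, reusing the standard error from the previous step (again, the names are illustrative only):

x = 15.3             # sample mean difference from the ideal
m = 0                # assumed population difference from the ideal
s = 5.393194         # standard error computed earlier
t = (x - m) / s
print(round(t, 6))   # 2.836909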
the degrees of freedom = sample size minus 1 = 25.
area to the right of a t-score of 2.836909 with 25 degrees of freedom = .004451 rounded to 6 decimal places.
critical t-score with 25 degrees of freedom at 2% one-tail significance level equals 2.166587 rounded to 6 decimal places.
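both tail values can be reproduced in python, assuming scipy is available (sf gives the area to the right of the t-score, ppf gives the critical value):

from scipy import stats

t_score = 2.836909
df = 25
alpha = 0.02                          # one-tail significance level
p_value = stats.t.sf(t_score, df)     # area to the right of the t-score
t_crit = stats.t.ppf(1 - alpha, df)   # critical t-score for the upper tail
print(round(p_value, 6))              # approximately .004451
print(round(t_crit, 6))               # approximately 2.166587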
the sample t-score is greater than the critical t-score, so the null hypothesis (mean difference = 0) is rejected, supporting the claim that the average american is overweight.
this is also supported by the p-value: the test p-value = .004451 and the significance level = .02.
the test p-value is less than the significance level, indicating the results are significant at the .02 significance level.
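the decision rule itself is just a comparison; here is a sketch using the values computed above:

t_score, t_crit = 2.836909, 2.166587
p_value, alpha = 0.004451, 0.02
if t_score > t_crit:    # equivalently: p_value < alpha
    print("reject the null hypothesis: the mean difference is greater than 0")
else:
    print("fail to reject the null hypothesis")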
here's what it looks like on a graph.
<img src="http://theo.x10hosting.com/2022/041301.jpg">