Question 1189790: Describe the general rationale behind using post hoc tests (i.e., when they are used and why).
One of the advantages of using an ANOVA (compared to using t-tests) is also a disadvantage—using an ANOVA makes it necessary to use post hoc tests if there is a significant main effect. We use a post hoc test because there is one specific advantage in using an ANOVA. Explain why using an ANOVA naturally leads to the need to have post hoc tests (hint: consider what you are examining when you conduct a post hoc analysis).
Conducting a post hoc test is similar to conducting multiple t-tests. As a result, it would seem natural to want to bypass the ANOVA and just use repeated t-tests. Explain why this approach is not necessarily a good idea and why an ANOVA followed by a post hoc analysis is beneficial.
Describe an experimental hypothesis and explain which post hoc test you would use if you find a significant overall effect. Include in your explanation the pros and cons of each test in making your decision.

Answer by CPhill(1959):
Let's break down the rationale behind post hoc tests and their relationship to ANOVAs.
**Rationale Behind Post Hoc Tests:**
Post hoc tests are used *after* a statistically significant result is found in an Analysis of Variance (ANOVA) test. They are necessary when you have more than two groups being compared. The ANOVA tells you that *at least one* group mean is different from the others, but it *doesn't* tell you *which* specific groups are different from each other. Post hoc tests help pinpoint those specific differences.
**Why ANOVA Leads to Post Hoc Tests:**
ANOVA's advantage is that it can compare multiple groups simultaneously, unlike t-tests, which are designed for only two groups. However, this advantage also necessitates post hoc tests. ANOVA tests the overall *omnibus* hypothesis that all group means are equal. If this hypothesis is rejected, it simply means there is *some* difference among the groups, not where that difference lies. Post hoc tests then explore the specific comparisons between group means to see which ones are significantly different. Essentially, post hoc tests allow us to perform all possible pairwise comparisons while controlling the overall Type I error rate.
**Why Not Just Use Multiple t-tests?**
If we simply conducted multiple t-tests to compare all possible pairs of groups, we would inflate the overall Type I error rate (the probability of falsely rejecting the null hypothesis). Each t-test carries its own chance of a Type I error, and across many tests these chances compound: with m independent tests each at alpha = .05, the probability of at least one false positive is 1 - (1 - .05)^m, which is already about 14% for just three comparisons. This makes it increasingly likely that you will find a "statistically significant" difference purely by chance, even if no real difference exists in the population. An ANOVA followed by post hoc tests controls this inflated error rate: post hoc procedures adjust the alpha level for each comparison to maintain the desired overall alpha level (usually .05).
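To make the inflation concrete, this short sketch (plain Python, illustrative numbers only) evaluates the familywise error formula above for a few values of m:

```python
def familywise_error_rate(m, alpha=0.05):
    """Probability of at least one Type I error across m independent tests,
    each run at the given per-test alpha."""
    return 1 - (1 - alpha) ** m

# With 3 groups there are 3 pairwise t-tests:
print(familywise_error_rate(3))   # ~0.143, not 0.05
# With 6 groups there are 15 pairwise t-tests:
print(familywise_error_rate(15))  # ~0.537
```

Even at only three comparisons the real false-positive risk is nearly triple the nominal .05, which is exactly the problem the ANOVA-then-post-hoc workflow avoids.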
**Example Hypothesis and Post Hoc Test Selection:**
Let's say we are investigating the effect of different types of background music on test performance. Participants study while listening to either classical music, rock music, or no music. Our dependent variable is their score on a subsequent test.
If we find a significant overall (omnibus) effect in our ANOVA, it means that at least one type of music affects test performance differently from the others. To find out *which* music types differ, we would use a post hoc test.
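For intuition about what the omnibus test is computing, here is the one-way ANOVA F statistic worked by hand for the music study. The scores are made-up illustrative numbers, not real data:

```python
# Hypothetical test scores for the three (assumed) conditions.
classical = [85, 90, 88]
rock      = [70, 72, 74]
no_music  = [80, 82, 81]
groups = [classical, rock, no_music]

grand_mean = sum(sum(g) for g in groups) / sum(len(g) for g in groups)

# Between-groups sum of squares: spread of group means around the grand mean.
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
# Within-groups sum of squares: spread of scores around their own group mean.
ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

df_between = len(groups) - 1                           # k - 1 = 2
df_within = sum(len(g) for g in groups) - len(groups)  # N - k = 6

F = (ss_between / df_between) / (ss_within / df_within)
print(round(F, 2))  # ~49.09 for these numbers
```

A large F says the group means vary far more than the scores within each group, so the omnibus null is rejected; but note that F alone never says *which* pair of conditions drives the difference, which is why the post hoc step follows.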
**Which Post Hoc Test?**
Several post hoc tests exist, each with strengths and weaknesses:
* **Tukey's HSD (Honestly Significant Difference):** This is a popular and generally recommended test. It controls the Type I error rate well and is appropriate when group sizes are roughly equal. It's a good all-around choice.
* **Bonferroni Correction:** This method is very conservative (reduces the chance of Type I errors) but can be *too* conservative, making it harder to find real differences (increasing the chance of Type II errors). It is often used when you have a small number of planned comparisons.
* **Scheffé Test:** This test is also conservative and is very flexible. It can be used for complex comparisons, not just pairwise comparisons. However, like the Bonferroni correction, it can be less powerful.
* **Newman-Keuls:** This test has more power than Tukey's, but it doesn't control the family-wise error rate as strongly. This makes it more susceptible to Type I errors.
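The Bonferroni option above reduces to simple arithmetic: with k groups there are k(k-1)/2 pairwise comparisons, and each one is tested at alpha divided by that count. A minimal sketch:

```python
from math import comb

def bonferroni_alpha(k, alpha=0.05):
    """Number of pairwise comparisons among k groups, and the
    Bonferroni-corrected per-comparison alpha."""
    m = comb(k, 2)          # k * (k - 1) / 2 pairwise comparisons
    return m, alpha / m

# Three music conditions -> three pairwise comparisons:
print(bonferroni_alpha(3))  # (3, 0.0166...)
```

The corrected threshold shrinks quickly as groups are added (15 comparisons at k = 6 gives a per-test alpha of about .0033), which is why Bonferroni is described above as conservative and is best reserved for a small number of planned comparisons.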
**Decision:**
For our music and test performance example, assuming roughly equal group sizes, **Tukey's HSD** would be a good choice. It provides a good balance between controlling the Type I error rate and maintaining statistical power. If we had a small number of specific comparisons planned in advance (e.g., classical vs. none, rock vs. none), a Bonferroni correction might be appropriate. If the group sizes were very unequal, the Tukey-Kramer modification of the HSD procedure is the usual alternative, although Tukey's HSD itself is reasonably robust to moderate imbalance.