document.write( "Question 535907: In hypothesis testing, as the α level gets smaller, the level of confidence increases. T or F, and why or why not? \n" ); document.write( "
Answer #352021 by oberobic(2304)
When you set the α level, you are setting the amount of random chance you are willing to accept.

If you set α = .10, then you know that your results could be due to chance 1/10, or 10%, of the time.

If you set α = .05, then you know that your results could be due to chance 1/20, or 5%, of the time.

If you set α = .01, then you know that your results could be due to chance 1/100, or 1%, of the time.
\n" ); document.write( ".
\n" ); document.write( "So, it is reasonable to say that your level of confidence in whether the obtained result is \"real\" or simply by chance increases as α is set lower and lower.
\n" ); document.write( ".
\n" ); document.write( "Note that α itself does not \"get smaller.\" As the statistician, you set the α level that you want to use. For an initial exploratory analysis you might use α=.10. For a highly stringent test, you might use α=.01. Using α=.05 is convention and strikes a balance between being too aggressive or too conservative.
\n" ); document.write( ".
\n" ); document.write( "And you should remember that as you set α lower and lower, there is a complementary effect, perhaps better called a side-effect. The likelihood that you reject a real difference goes up.
\n" ); document.write( ".
\n" ); document.write( "Read more in your textbook or online regarding Type I and Type II errors. See also β and 1-β (the Power of a test).
\n" ); document.write( ".
\n" ); document.write( "Done.
\n" ); document.write( "
\n" );