Recently, I have been working on a paper based on an apprentice project I did before. I cannot remember how, but the concept of "power analysis" came to mind. It seems that power analysis is very important in statistical analysis, but it is ignored by many, if not most, educational researchers. "[I]t is extremely surprising that very few researchers conduct and report power analyses for their studies (Brewer, 1972; Cohen, 1962, 1965, 1988, 1992; Keselman et al., 1998; Onwuegbuzie, 2002; Sherron, 1988) even though statistical power has been promoted actively since the 1960s (Cohen, 1962, 1965, 1969) and even though for many types of statistical analyses (e.g., r, z, F, χ²), tables have been provided by Cohen (1988, 1992) to determine the necessary sample size. Even when a priori power has been calculated, it is rarely reported (Wooley & Dawson, 1983)." (Onwuegbuzie & Leech, 2004, p. 207)

The concept of "power analysis" was rarely, if ever, discussed in my previous research methods and statistics courses. Here are some reasons why power analyses are seldom used or reported (Onwuegbuzie & Leech, 2004):

- Researchers do not sufficiently understand the concept of statistical power.
- The concept and applications of power are not taught, or are inadequately covered, in many undergraduate- and graduate-level statistics courses, and power is not recognized as being as important as other concepts.
- Insufficient guidance is available on how to report statistical power.
- Resource constraints often prevent researchers from recruiting a sample as large as an a priori power analysis requires.
- It is difficult to estimate effect sizes and standard deviations before conducting a study because of the uncertainties involved.
- SPSS, SAS, and other general statistical packages do not include functions for conducting power analyses, so users need separate software for that; conversely, power analysis software normally does not perform other analyses.

Definitions of statistical power:

- The power of a statistical test of a null hypothesis is the probability that it will lead to the rejection of the null hypothesis, i.e., the probability that it will result in the conclusion that the phenomenon exists (Cohen, 1988, p. 4).
- Power is the probability of detecting an effect, given that the effect is really there; in other words, it is the probability of rejecting the null hypothesis when it is in fact false (UCLA: Academic Technology Services, Statistical Consulting Group).
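
To make the definition concrete, here is a minimal sketch (mine, not from the cited sources) that approximates the power of a two-sided, two-sample test of means using a normal approximation; the effect size d = .5 and group size of 64 are hypothetical values chosen to match a familiar benchmark from Cohen's tables:

```python
from math import sqrt
from statistics import NormalDist  # standard library, Python 3.8+

def two_sample_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided, two-sample test of means.

    Normal approximation: under the alternative, the test statistic
    is roughly Normal(d * sqrt(n/2), 1), where d is Cohen's d.
    """
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)     # critical value, e.g. 1.96
    ncp = d * sqrt(n_per_group / 2)       # noncentrality parameter
    # P(reject H0 | H1 true): probability mass beyond either boundary
    return (1 - z.cdf(z_crit - ncp)) + z.cdf(-z_crit - ncp)

# Medium effect (d = .5) with 64 participants per group:
print(round(two_sample_power(0.5, 64), 2))
```

With d = .5 and 64 per group this returns roughly .81, close to the .80 that Cohen's tables give for the same scenario (the small gap comes from using z rather than the exact noncentral t).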

Why power analysis is important:

- It clearly represents a vital piece of information about a statistical test applied to research data (Cohen, 1988, p. 4).
- The APA publication manual requires reporting power analysis (see the APA Publication Manual, 5th edition, p. 24).
- Post hoc power analyses can be used to improve the design of independent replications (Onwuegbuzie & Leech, 2004, p. 225).

When I first encountered the term "power analysis", I thought: aha, I can use the power analysis result to show how confident (or correct) I was in the result of my statistical analysis, especially if I got a significant result (p < .05). However, **this thought is wrong. First, many researchers do not like the idea of performing power analysis after the data have been collected and analyzed; they call it "post hoc power analysis". They advocate a priori power analysis instead, meaning that power analysis should be performed as part of the research plan. Second, even among the researchers who do favor post hoc power analysis, most suggest reporting it only when the result is non-significant.**

- For situations where statistically significant results are obtained, what should we do? The answer is to report the effect size and a confidence interval (CI) around the effect size (see Onwuegbuzie & Leech, 2004).
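
As an illustration of that advice, the following sketch computes Cohen's d from group summary statistics and a 95% CI around it, using one common large-sample approximation for the standard error of d; the group summaries themselves are made up for the example:

```python
from math import sqrt
from statistics import NormalDist

def cohens_d_with_ci(mean1, mean2, sd1, sd2, n1, n2, conf=0.95):
    """Cohen's d (pooled SD) with an approximate confidence interval.

    The CI uses a common large-sample approximation to the standard
    error of d: sqrt((n1 + n2)/(n1 * n2) + d**2 / (2 * (n1 + n2))).
    """
    pooled_sd = sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / pooled_sd
    se = sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    return d, (d - z * se, d + z * se)

# Hypothetical group summaries (not from any real study):
d, (lo, hi) = cohens_d_with_ci(105.0, 100.0, 10.0, 10.0, 50, 50)
print(f"d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

Note how wide the interval is even with 50 participants per group; that width is exactly the information a bare p-value hides.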

- A priori power analyses should be conducted and reported; post hoc analyses should never be used to replace a priori analyses (Onwuegbuzie & Leech, 2004).

- Post hoc power analyses should accompany statistically non-significant findings. Statistically non-significant results in a study with high power contribute to the body of knowledge because low power can be ruled out as a threat to internal validity (Onwuegbuzie & Leech, 2004, pp. 210, 219).

How to conduct a power analysis:

- The most basic method is using the tables Cohen (1988) presented. This method sounds simple but is actually complex, and not very "automatic".
- Using power analysis software:
    - SPSS SamplePower - only available for Windows
    - PASS from NCSS Statistical Software - only for Windows
    - G*Power 3 - freeware for Mac OS X 10.4 and Windows (I am using the Mac version on Leopard; it works fine)
    - SAS macro mpower
    - For more information, see http://www.ats.ucla.edu/stat/seminars/Intro_power/default.htm under the "Software" section.
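
The table lookup that Cohen's book and the tools above automate can also be sketched as a simple search: fix alpha, a target power, and an expected effect size, then find the smallest per-group n whose approximate power reaches the target. This sketch uses a normal approximation rather than the exact noncentral t, so it slightly understates the required n:

```python
from math import sqrt
from statistics import NormalDist

def n_per_group(d, power=0.80, alpha=0.05):
    """Smallest per-group n reaching the target power, using a normal
    approximation for a two-sided, two-sample test of means."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)
    for n in range(2, 100_000):
        ncp = d * sqrt(n / 2)  # noncentrality parameter at this n
        achieved = (1 - z.cdf(z_crit - ncp)) + z.cdf(-z_crit - ncp)
        if achieved >= power:
            return n
    raise ValueError("effect size too small for a practical sample")

# Medium effect (d = .5), conventional .80 power:
print(n_per_group(0.5))
```

For a medium effect at .80 power this returns 63 per group, while Cohen's tables give 64; the one-person difference is the z-versus-t approximation.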


- How does one conduct post hoc power analysis for multivariate tests such as multivariate analysis of variance and multivariate analysis of covariance? See Onwuegbuzie and Leech (2004).

References:

- Cohen, J. (1988). *Statistical Power Analysis for the Behavioral Sciences* (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.
- Onwuegbuzie, A. J., & Leech, N. L. (2004). Post hoc power: A concept whose time has come. *Understanding Statistics, 3*, 201-230. Available through UT online resources.
- UCLA: Academic Technology Services, Statistical Consulting Group. *Statistical Computing Seminars: Introduction to Power Analysis*. http://www.ats.ucla.edu/stat/seminars/Intro_power/default.htm. (The References section of that page has many useful resources.)
- StatSoft, Inc. *Power Analysis*.