# Effect Size and Power

There has been great emphasis in our classes so far on effect size and power, so I'm interested in how these can be calculated using R.

## Shouldn't t.test() show me power and effect size?

If I run a t-test using t.test(), the output doesn't include the effect size or power. Is there a way to run t.test() so that these figures are automatically included in the output?

No. You need to use power.t.test() to determine power. t.test() does report the unstandardised effect size: it gives you the difference in means. To calculate a standardised effect size, you can load the psych library and use the p.rep.t() function, which reports a range of statistics: p-rep (a transformation of a p value into the probability that the effect would replicate, given no other prior information), an effect size measure (Cohen's d, labelled dprime), and the equivalent correlation coefficient (another standardised measure of effect size).

Here is an example (using the p.rep.t() function from the psych library) from an experiment with 25 people in each of two groups, which was significant (p = .002):

```
> p.rep.t(t=3.4,df=25)
$p.rep = 0.977
$dprime = 1.36
$prob = 0.0022
$r.equiv = 0.562
```

PS: Calculating power in other designs is not always straightforward. There are specialised packages in R (for instance, the pwr package has pwr.r.test(n=, r=) for the power of a correlation test), and other packages exist for complex ANCOVAs or genetics designs.
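For a simple two-group comparison, power.t.test() in base R covers the common cases. A minimal sketch (the sample size and effect size below are made-up values for illustration):

```r
# Power of a two-sample t-test with 25 people per group, assuming a
# standardised effect size (Cohen's d) of 0.8 and alpha = .05:
power.t.test(n = 25, delta = 0.8, sd = 1, sig.level = 0.05)

# Or solve the other way: how many people per group for 80% power?
power.t.test(delta = 0.8, power = 0.8)
```

Leave out exactly one of n, delta, sd, sig.level, or power, and the function solves for it.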

**Note also** that there are different standardised measures of effect, such as r, d, and odds ratios, but these are not on the same scale, so you can't compare them directly. For example, r.equiv is smaller than dprime, even though both express exactly the same effect.
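For two equal-sized groups, d and r are related by a simple formula, r = d / sqrt(d^2 + 4), so you can convert between the two scales directly. A quick check against the values above:

```r
# Convert Cohen's d to the equivalent correlation r (equal-n groups):
d <- 1.36
r <- d / sqrt(d^2 + 4)
round(r, 3)  # approximately 0.562, matching r.equiv above
```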

So if someone finds that low and high happiness groups differ by 1.36 SD units on positive life events, and someone else repeats the experiment using a continuous measure of happiness, they should expect a correlation of about .56. To explore this, you can pass the r.equiv from p.rep.t() to p.rep.r():

```
> p.rep.r(r=.562,n=25) # 25 people measured twice
$p.rep = 0.986
$dprime = 1.36
$prob = 0.0018
```

The dprime comes out the same as that calculated by p.rep.t().

## How do I calculate power after I have run a study?

I can see how power.t.test() could be useful in advance of a study: I make an estimate of effect size, set a power of, for example, 0.8, and thus determine suitable sample sizes to use. But I can't see how to use this function after I've run a study…

Power can be calculated before or after you run a study. The power calculation simply needs to know what you are trying to find (how big an effect you think exists in the world) and how you are going to look for it (the number of subjects and the strength of evidence, p, that you will use). Of course, you mostly don't know the effect size: you have to guess at the smallest effect that you would find interesting.
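So after a study, you ask the same question with the numbers you actually ran. A sketch, assuming (hypothetically) that you tested 30 people per group and that d = 0.5 is the smallest effect you would find interesting:

```r
# What power did the study have to detect d = 0.5 with n = 30 per group?
power.t.test(n = 30, delta = 0.5, sd = 1, sig.level = 0.05)
```

If the reported power is low, a null result says little: the study may simply have been too small to detect an effect of that size.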

## How can I do a t.test when the groups are different sizes?

I understand how I can use t.test() for a t-test for independent means when my two samples have equal numbers, but how can I use this test if my sample sizes are unequal?

t.test() will happily accept unequal n's (by default it applies the Welch correction, which also copes with unequal variances). Try running this, for instance:

```
t.test(c(1:9), c(1:5))
```

## What should I learn about R?

We use R primarily so you can try statistics out for yourself: make up a hundred datasets and see how power determines the chance of finding a result; change t into r; look, play, and learn.
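As a sketch of that kind of play (the effect size and sample size here are arbitrary choices): simulate a hundred two-group experiments and count how often the t-test comes out significant; that proportion approximates the power that power.t.test() reports.

```r
set.seed(1)  # for reproducibility
pvals <- replicate(100, {
  x <- rnorm(20, mean = 0)    # control group, n = 20
  y <- rnorm(20, mean = 0.5)  # true effect of d = 0.5
  t.test(x, y)$p.value
})
mean(pvals < .05)  # empirical power: proportion of significant results
power.t.test(n = 20, delta = 0.5)$power  # theoretical power, about 0.34
```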

We also learn R because it can do almost any of the statistics needed in the sciences and humanities, and it is free, so you can take it with you. That said, you really don't need to learn much to **Get Things Done**.

Just about everything you will want to do can be thought of in terms of regression/ANOVA. If not, let us know.

The Regression Help sheet has a helpful reminder list of functions for getting regressions done.

The statmethods website (Quick-R) is fantastic for learning how to do statistics in R.