Psychology 340: Hypothesis testing II: t-tests (Illinois State University)



Psychology 340: Statistics for the Social Sciences
Illinois State University, J. Cooper Cutting, Fall 2002

Hypothesis testing II: t-tests

Topics: which test (decision tree), one-sample t-test, matched samples t-test, independent samples t-test, t-tests in SPSS.

Let's quickly recall the decision tree that we saw earlier in the semester. Which test? Find the string of decisions that lead to a one-sample t-test. T-tests are basically the same as the one-sample z-test from last time. With the earlier test statistic we were concerned with situations in which we assumed that we knew the population standard deviation (σ). As a result we could compute the actual standard error (σM) associated with samples of size n. T-tests are used in situations in which we DON'T know the population standard deviation (σ). This is a much more common situation (that is, we usually don't know the population values because we usually don't have access to the entire population). So instead, we will have to use an estimated standard error (sM) computed from information that we get from our sample. What do we already know that we can use as an estimate of σ? Our best guess is the sample standard deviation (s). So we'll use the sample standard deviation to estimate the standard error. Recall that the sample standard deviation is s = √(SS/(n - 1)). From this, our formula for the standard error is almost the same, with some notational changes, and our computed test statistic is also very similar.

If we know σ:
  standard error of M: σM = σ/√n
  test statistic (z-score): z = (M - μ)/σM
  estimating μ: μ = M ± (z-crit)(σM)

If we don't know σ:
  estimated standard error of M: sM = s/√n
  test statistic (t-score): t = (M - μ)/sM
  estimating μ: μ = M ± (t-crit)(sM)

Generally: z or t = (sample mean - population mean) / (estimated) standard error

Rule: When you know the value of σ², use a z-score. If σ² is unknown, use s² to estimate σ² and use the t statistic. The same is true for making estimates of the population mean (e.g., confidence intervals). The t statistic is used to test hypotheses about μ when the value of σ² is not known. The formula for the t statistic is similar in structure to that for the z-score, except that the t statistic uses the estimated standard error. Even though the formulas in the two situations are very similar, there is an important conceptual difference between them. Because we are using the sample standard deviation (s) to estimate the population standard deviation (σ), we need to take into account the fact that it is an estimate. If you think back to our earlier chapter on the standard deviations of samples, you'll remember that we must take the degrees of freedom into account. Degrees of freedom describe the number of scores in a sample that are free to vary. Because the sample mean places a restriction on the value of one score in the sample, there are n - 1 degrees of freedom for the sample. This means that the higher the value of n, the more representative the sample will be of the population, which in turn means that s will be a better estimate of σ. It also has implications for the test statistic. The shape of the t-distribution varies as a function of the size of n (really it varies with the degrees of freedom). The bigger the n (the bigger the df), the closer the t-distribution is to a normal distribution. Notice that we're talking about a new distribution here (really a family of distributions, the t-distributions). This also means that we won't be using the unit normal table. Instead we'll have to use a different table, the t-distribution table (this one isn't in your textbook, so you'll want to use the one from your web pages).

  df    one-tail p:  0.25    0.10    0.05    0.025    0.01    0.005
        two-tail p:  0.50    0.20    0.10    0.05     0.02    0.01
   1                 1.00    3.078   6.314   12.706   31.821  63.657
   2                 0.816   1.886   2.920   4.303    6.965   9.925
   3                 0.765   1.638   2.353   3.182    4.541   5.841
   4                 0.741   1.533   2.132   2.776    3.747   4.604
   5                 0.727   1.476   2.015   2.571    3.365   4.032
   6                 0.718   1.440   1.943   2.447    3.143   3.707
   :                  :       :       :       :        :       :
  z*                 0.674   1.282   1.645   1.96     2.326   2.576
  CI%                50%     80%     90%     95%      98%     99%
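If you have Python handy, you can reproduce the table's critical values yourself. This is an optional sketch of my own, not part of the lab; it assumes SciPy is installed and uses its t distribution (scipy.stats.t).

    # Optional check of the t-table values (not part of the original lab).
    from scipy import stats

    # Two-tailed critical t for alpha = .05 at several degrees of freedom.
    # The upper cut-off is at 1 - alpha/2; the lower cut-off is its negative.
    for df in (1, 2, 3, 4, 5, 6):
        print(df, round(stats.t.ppf(1 - 0.05 / 2, df), 3))
    # prints 12.706, 4.303, 3.182, 2.776, 2.571, 2.447 -- the 0.05 two-tail column above

    # With very large df the t distribution approaches the normal distribution,
    # which is why the bottom of the table is a row of z-scores (e.g., 1.96 for 95%).
    print(round(stats.norm.ppf(0.975), 2))   # 1.96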

T-distributions table. Reading this table is different than reading the unit normal table. So let's talk about why first, and then how. Why? Because the unit normal table describes one distribution (the normal distribution). The t-distribution table actually describes several different t-distributions. This is because there is a different t-distribution for every different number of degrees of freedom (although when df gets large, the differences become really small). So, each row corresponds to a different t-distribution. As a result of this, there also isn't enough space to list all of the probabilities corresponding to each possible t-score. Instead what is listed are the t-scores at commonly used critical regions (that is, at popular alpha levels). The t-distribution with an infinite (or practically very large) df is equal to the normal distribution. That's why along the bottom of the table there is a row of z-scores. The bottom row of the table tells you which column to look in for confidence intervals. With smaller df the t-distributions are shaped differently (although they are still unimodal and symmetrical with a mean of 0). To examine how the shape of the t-distribution varies with df, see the t-distribution demonstration on the course web pages.

Okay, so how do we use the table? Think back to last chapter. One of the ways that we would make our decision about whether or not to reject H0 was to figure out what z-score corresponded to the critical region (e.g., 1.65 is the critical z for a one-tailed test with α = 0.05), then look at the z that we computed and see if it was greater than (or equal to) that critical z. If it was, then we rejected H0; if it wasn't, then we failed to reject H0. But keep in mind that for z-scores we use the unit normal table, which describes only one distribution. So, for all one-tailed tests with α = 0.05, the critical value of z will be 1.65. The logic is the same here with the t-table. But now the critical values are going to change as a function of which t-distribution we are looking at, which is in turn dependent on df. So, what you need to do to use this table is:
step 1: state your H0 and H1 & figure out your criteria: α = ?
step 2: figure out if your test is one-tailed or two-tailed
step 3: figure out the df for your test
step 4: find the critical t-score from the table
step 5: compute your t-score for your sample
step 6: compare your t-score with the critical t-score
step 7: make your conclusions about H0
Note that sometimes you will see the following notation:

tcrit = critical t from the table

tobs = the observed t computed from your sample

Okay, let's look at a few examples.

Example 1:

Suppose that your physics professor, Dr. M. C. Squared, gives a 20-point true-false quiz to 9 students and wants to know if they did worse than guessing. Their scores were: 6, 7, 7, 8, 8, 8, 9, 9, 10. We'll assume a significance level of α = 0.05.
step 1: H0: μ ≥ 10, H1: μ < 10; α = 0.05 (note: the null is what they'd get if they were guessing, which would be 10 out of 20).
step 2: one-tailed test (worse than guessing)
step 3: what is our df? n = 9, so df = 9 - 1 = 8
step 4: find the critical t from the table: df = 8, one-tailed test, α = 0.05, so tcrit = -1.86 (keep in mind that this is a "worse than" test, so the critical t is negative)
step 5: compute your tobs (notice how we are bringing together most of what we've learned into this one example: computing means, standard deviations, estimated standard error, etc.)
M = (ΣX)/n = 72/9 = 8.0
SS = ΣX² - (ΣX)²/n = 588 - 72²/9 = 12.0
s = √(SS/(n - 1)) = √(12/8) = 1.225
estimated standard error = s/√n = 1.225/√9 = 0.41
tobs = (M - μ)/sM = (8 - 10)/0.41 = -4.88

step 6: tobs = -4.88 is beyond tcrit = -1.86 (it falls in the critical region)
step 7: reject H0. So it looks as if the students would have been better off guessing.

Example 2: Suppose that your psychology professor, Dr. I. D. Ego, gives a 20-point true-false quiz to 9 students and wants to know if they were different from groups in the past, who have tended to have an average of 9.0. The scores from the current group were: 6, 7, 7, 8, 8, 8, 9, 9, 10. Did the current group perform differently from those in the past? We'll assume a significance level of α = 0.05.
step 1: H0: μ = 9.0 and H1: μ ≠ 9.0; α = 0.05
step 2: two-tailed test (are they different?)
step 3: what is our df? n = 9, so df = 9 - 1 = 8
step 4: find the critical t from the table: df = 8, two-tailed test, α = 0.05, so tcrit = ±2.306
step 5: compute your tobs
M = (ΣX)/n = 72/9 = 8.0
SS = ΣX² - (ΣX)²/n = 588 - 72²/9 = 12.0
s = √(SS/(n - 1)) = √(12/8) = 1.225
estimated standard error = s/√n = 1.225/√9 = 0.41
tobs = (M - μ)/sM = (8 - 9)/0.41 = -2.44

step 6: tobs = -2.44 is beyond tcrit = -2.306 (that is, |tobs| > 2.306)
step 7: reject H0. So it looks as if the current students are different from past students (they are doing worse).

Example 3: Suppose that your psychology professor, Dr. I. D. Ego, wants to estimate the population mean for people's driving ability after 24 hours of sleep deprivation. So she develops a test of driving skill (scores ranging from 1 = bad driving to 10 = excellent driving) and administers it to 101 drivers who have been paid to stay awake for 24 hours. The scores from the group had a mean of 4.5 and a standard deviation of 1.6. Estimate the population mean with 90% confidence. So we need to use this formula: μ = M ± (tcrit)(sM). The mean of the sample is 4.5, the estimated standard error is 1.6/√101 = 0.16, and the tcrit with df = 100 for a 90% C.I. is 1.66. So the upper bound is 4.5 + (0.16)(1.66) = 4.77, and the lower bound is 4.5 - (0.16)(1.66) = 4.23.
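For those who want to check these examples outside of SPSS, here is a minimal Python sketch (my addition, not part of the lab) that redoes Examples 1-3 with NumPy and SciPy. The small differences from the numbers above come from rounding the estimated standard error to 0.41.

    # Sketch verifying Examples 1-3 (not part of the original lab).
    import numpy as np
    from scipy import stats

    scores = np.array([6, 7, 7, 8, 8, 8, 9, 9, 10])

    M = scores.mean()                       # 8.0
    s = scores.std(ddof=1)                  # sqrt(SS / (n - 1)) = 1.225
    s_M = s / np.sqrt(len(scores))          # 0.408 (rounded to 0.41 above)

    # Example 1: one-tailed test against mu = 10 (guessing), df = 8.
    t_obs = (M - 10) / s_M                  # about -4.90
    t_crit = stats.t.ppf(0.05, df=8)        # about -1.86

    # Example 2: two-tailed test against mu = 9; SciPy gives the same t and a p-value.
    t_stat, p_value = stats.ttest_1samp(scores, popmean=9)   # t about -2.45, p about .04

    # Example 3: 90% confidence interval for the sleep-deprivation study.
    mean, sd, n = 4.5, 1.6, 101
    se = sd / np.sqrt(n)                    # about 0.16
    t_ci = stats.t.ppf(0.95, df=n - 1)      # about 1.66
    lower, upper = mean - t_ci * se, mean + t_ci * se   # about 4.24 and 4.76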

Using SPSS to compute a one-sample t-test

We can use SPSS to compute one-sample t-tests.

Go to the Analyze menu and select the submenu Compare Means. In this submenu you'll see several tests. The one that we're interested in today is One-sample t-test.

After selecting One-sample t-test, you'll get a window that looks like this. Here you should select the variable from your sample that you want to analyze. Also you should enter the test value. In the examples in this lab, the test value is the population mean.

Here is what the output will look like.

Notice that the output includes the sample mean, the sample standard deviation, the standard error, the tobs (in the t column), the degrees of freedom, the mean difference (the sample mean minus the test value), and a p-value. Notice that SPSS doesn't tell you to reject or fail to reject H0, nor does it give you the tcrit. To make your decision about H0 you must compare the p-value with your α-level. If the p-value is equal to or smaller than your α-level, then you should reject H0; otherwise you should fail to reject H0. Notice that in addition to the hypothesis test, the output also provides a confidence interval. A 95% CI is the default. To pick other levels of confidence you need to click the Options button in the one-sample t-test window. This will open the dialog box below.

Hypothesis tests analyzed with related samples t-tests

In the prior lab we examined how to use a t-test to compare a treatment sample against a population (for which σ isn't known). In this lab we'll consider the case where the null population μ isn't known and must also be represented by a sample (like the treatment μ was in the one-sample cases). Today we'll consider situations where the two sample means come from related samples. There are two ways the samples can be related. In one case, there are two separate but related samples. In the other case, there is a single sample of individuals, each of which gets measured on the dependent variable twice. Let's quickly recall the decision tree that we saw earlier in the semester. Which test? Find the string of decisions that lead to either a two-independent-samples t-test or a related samples t-test. Consider the following examples:

Example 1: Suppose that you want to compare married couples' opinions about what makes a relationship work. So you decide to ask the husbands and wives to rate, on a scale from 1 to 10, how important communication is. You don't know the population means for these ratings. In this scenario, even though you've got two groups (husbands and wives), the two groups are not independent. The members of each group are related to each other ("related" with respect to statistical selection issues, not religious or legal issues). So we need a t-test that takes the relatedness of the groups into consideration.

Example 2: Suppose that you want to find out whether Viagra impairs vision. Instead of comparing two separate groups, you decide to test the same set of individuals. In the first stage of the experiment you give your participants a placebo (a sugar pill that should have no effect on vision), and then test their vision. In the second stage, you give them Viagra and then test their vision. So now you have the same people in both conditions. Clearly your samples are related, so again the t-test from the last chapter isn't appropriate.

Example 3: Suppose that you are interested in the effect of studying on test performance. So you decide to use two groups of people for your study. However, you also decide that you want the two groups of people to be as similar as possible, so you match each individual in the two groups on as many important characteristics as you can. Again, the two samples are related, so the t-test from the last chapter isn't appropriate.

In the first example, the situation has been decided for you; there is a pre-existing relationship between the two samples. In the second and third examples, you, as the experimenter, make a decision to make the two samples related. Why would you ever want to do that? To control for individual differences that might add more noise (error) to your data. In Example 2, each individual acts as their own control. In Example 3, the control group is made up of people as similar to the people in the experimental group as you could get them. Both of these designs are used to try to reduce error resulting from individual differences.

A repeated-measures study is one in which a single sample of subjects is used to compare two (or more) different treatment conditions. Each individual is measured in one treatment, and then the same individual is measured again in the second treatment. Thus, a repeated-measures study produces two (or more) sets of scores, but each set is obtained from the same sample of subjects. Sometimes this type of study is called a within-subjects design.
In a matched-subjects study, each individual in one sample is matched with a subject in the other sample. The matching is done so that the two individuals are equivalent (or nearly equivalent) with respect to a specific variable that the researcher would like to control. Sometimes this type of study is called a related-samples design. Okay, so now we know that for repeated-measures and matched-subjects designs we need a new t-test. So, what is the t statistic for related samples? Again, the logic of the hypothesis test is pretty much the same as it was for the one-sample cases we've already considered. Once again we'll go through the same steps. What changes are the nature of the hypothesis and how the t is computed. All of the tests that we've looked at examine differences. In the previous lab we were interested in comparing a known population with a treatment sample. Now we are beginning to consider cases when the null population μ is unknown and must also be represented by a sample. The t-test for this chapter is also interested in differences, but because the two samples are related, the differences are based on the difference for each individual. Consider the following example: An instructor asks his statistics class, on the first day of classes, to rate how much they like statistics on a scale of 1 to 10 (1 = hate it, 10 = love it). Then, at the end of the semester, the instructor asks the same students the same question. The instructor wants to know if taking the stats course had an impact on the students' feelings about statistics. The results of the two ratings are presented below. D stands for the difference between the pre- and post-ratings for each individual.

Student   Pre-test (first day)   Post-test (end of semester)     D    D²
   1               1                         4                   3     9
   2               3                         5                   2     4
   3               4                         6                   2     4
   4               7                         8                   1     1
   5               2                         3                   1     1
   6               2                         2                   0     0
   7               4                         6                   2     4
   8               3                         4                   1     1
   9               6                         6                   0     0
  10               8                         6                  -2     4
   Σ              40                        50                  10    28

mean difference = MD = ΣD/n = 10/10 = 1.0

Okay, so let's start our 7-step process of hypothesis testing.
step 1: state your H0 and H1 & figure out your criteria: α = ? Okay, for this example let's assume that α = 0.05. What is our H0? Conceptually it is similar to last chapter, but instead of having two separate populations of individuals, we've got a single population of differences. In other words, the distribution that we're interested in is the distribution of D, the distribution of the pre-test scores subtracted from the post-test scores. So our H0 will be something like this: taking stats has no effect on a person's preference for statistics. H0: μD = 0, H1: μD ≠ 0
step 2: figure out if your test is one-tailed or two-tailed. Does taking stats have an impact on feelings about statistics? This just asks about a general difference, so it is a two-tailed test.
step 3: figure out the df for your test. Now we have only a single sample, a sample of the differences. With only one sample, our df = n - 1.
step 4: find the critical t-score from the table. Finding tcrit is the same as usual: look at the table. α = 0.05, two-tailed, df = 10 - 1 = 9, so tcrit = ±2.262
step 5: compute your t-score for your sample. Okay, as was the case last lab, the overall form of the t statistic equation is the same, but the details are different.
tobs = (MD - μD)/sMD
We already computed MD, and we know μD = 0 (from H0), so we just need to figure out what sMD is equal to. This is the estimated standard error of the difference distribution. So first we need to figure out the variance.
SSD = ΣD² - (ΣD)²/n = 28 - (10)²/10 = 28 - 10 = 18
sD² = SSD/(n - 1) = 18/9 = 2.0
Now we can figure out the estimated standard error:
sMD = √(sD²/n) = √(2.0/10) = 0.447
Now we are ready to compute our tobs:
tobs = (MD - μD)/sMD = (1.0 - 0)/0.447 = 2.24
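As a quick check on the arithmetic above, the same related-samples test can be run in Python. This is a sketch of my own (not part of the lab), assuming SciPy is available; ttest_rel computes exactly this t from the difference scores.

    # Sketch checking the related-samples computation (not part of the original lab).
    import numpy as np
    from scipy import stats

    pre  = np.array([1, 3, 4, 7, 2, 2, 4, 3, 6, 8])
    post = np.array([4, 5, 6, 8, 3, 2, 6, 4, 6, 6])

    D = post - pre
    M_D = D.mean()                             # 1.0
    s_MD = D.std(ddof=1) / np.sqrt(len(D))     # sqrt(2.0 / 10) = 0.447
    t_obs = (M_D - 0) / s_MD                   # 2.236, the 2.24 above

    t_stat, p_value = stats.ttest_rel(post, pre)   # same t; two-tailed p just above .05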

step 6: compare your t-score with the critical t-score. tcrit = 2.262, tobs = 2.24
step 7: make your conclusions about H0. Our tobs does not fall in the critical region, so we fail to reject H0.
However, what if we had made a directional hypothesis, that the stats class would increase preference for stats? We'd increase the power of our test to detect a difference (because we are looking at 0.05 in only one tail, instead of 0.025 in two tails). Our tcrit would be 1.833. Our tobs would still be the same (2.24), so now in step 7 we would end up rejecting H0.

Okay, what about hypothesis testing with a matched-subjects design? Basically we do things exactly as we did in the previous example, except now we subtract the matched control person's score from the experimental group person's score.

So, as an experimenter, how do we know when to use related samples designs or independent samples designs? Related samples designs are used when large individual differences are expected and considered to be "normal". Why? Because individual differences can contribute to sampling error. So by using related samples designs, one can reduce sampling error and have a better chance of finding a difference if there really is one.

Computing confidence intervals for the difference

As was true for the related samples t-test, the formula for confidence intervals looks very much like the formula for a one-sample t-test. The only differences are that we base the computation on the computed difference (D) column and that what we are estimating is the difference between the two populations.
μD = MD ± (tcrit)(sMD)
Using the example above, estimate the difference between the two populations with 95% confidence.
tcrit = 2.262
sMD = √(sD²/n) = √(2.0/10) = 0.447
mean difference = MD = 10/10 = 1.0
So μD = 1.0 ± (2.262)(0.447) = 1.0 ± 1.01, that is, roughly -0.01 to 2.01.
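The same confidence interval can be computed in a couple of lines. Again, this is an illustrative sketch, not part of the lab; note that the interval contains 0, which agrees with the failure to reject H0 in the two-tailed test above.

    # Sketch: 95% CI for the mean difference (not part of the original lab).
    from scipy import stats

    M_D, s_MD, n = 1.0, 0.447, 10
    t_crit = stats.t.ppf(0.975, df=n - 1)      # 2.262
    lower = M_D - t_crit * s_MD                # about -0.01
    upper = M_D + t_crit * s_MD                # about  2.01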


Using SPSS to compute a related samples (paired samples) t-test

We can use SPSS to compute paired samples t-tests. To set up a paired samples t-test you will need two columns of data, one for each sample (related samples) or one for each measurement (repeated measures).

Go to the Analyze menu and select the submenu Compare Means. In this submenu you'll see several tests. The one that we're interested in today is paired samples t-test.

After selecting Paired samples t-test, you'll get a window that looks like this. Here you should select the variables that you are testing.

Here is what the output will look like.

Notice that the output includes the sample mean, the sample standard deviation, the standard error, the tobs (in the t column), the degrees of freedom, the mean difference (sample mean 1 - sample mean 2), and a p-value (sig.). Notice that SPSS doesn't tell you to reject or fail to reject H0, nor does it give you the tcrit. To make your decision about H0 you must compare the p-value with your α-level. If the p-value is equal to or smaller than your α-level, then you should reject H0; otherwise you should fail to reject H0. Notice that in addition to the hypothesis test, the output also provides a confidence interval. A 95% CI is the default. To pick other levels of confidence you need to click the Options button in the paired samples t-test window. This will open the dialog box below.

Two independent samples t-tests

The sections above used different computational formulas to calculate the observed t for two related-samples situations: repeated measures, in which there is one sample but each individual is tested twice, and matched pairs, in which there are two samples but they are related on a subject-by-subject basis. The basic logic of the independent samples t-test should seem similar to the other tests that we've covered. We still use the t-distribution to find our critical values. However, things get a little more complicated because of the situation that we are interested in. Now we are going to look at a situation where we are interested in the potential difference between two different populations. And again, we'll deal with situations in which we don't know the σ for either of these populations, so we'll have to use estimates. An experiment that uses a separate, independent sample for each treatment condition (or each population) is called an independent-measures research design. Often you'll also see it referred to as a between-subjects or between-groups design. So we'll use the same logic and steps for hypothesis testing that we used in the previous labs, and fill in the details of the differences as we go.
step 1: state your H0 and H1 & figure out your criteria: α = ?
step 2: figure out if your test is one-tailed or two-tailed
step 3: figure out the df for your test
step 4: find the critical t-score from the table
step 5: compute your t-score for your sample
step 6: compare your t-score with the critical t-score
step 7: make your conclusions about H0
Let's start with step 1. Figuring out your criteria is exactly the same process as before; you pick what your field has decided is an accepted level of alpha (the chance of making a Type I error). For our example, let's assume α = 0.05. The hypotheses are going to look a bit different, because the situation is different. Remember that now we are making hypotheses about two different populations. For example, suppose that you want to compare two different treatments (e.g., two ways of studying, two different drugs, etc.), or you want to compare two groups of people (e.g., men vs. women, young vs. old, etc.). So now the hypotheses are about population A (men) and population B (women), and how they are different from one another. Suppose that we are interested in how tall men and women are.

So the H0 hypothesis would be that men and women are the same height. That is, H0: μA = μB, or H0: μA - μB = 0. Our alternative hypothesis could be that men and women are different heights. That is, H1: μA ≠ μB, or H1: μA - μB ≠ 0.

Step 2. Okay, is this a one-tailed or two-tailed hypothesis? It is not directional, so it is a two-tailed test. What might the hypotheses be for a one-tailed test? Men are taller than women: H0: μA ≤ μB & H1: μA > μB.

Step 3. What are the degrees of freedom? Well, we need some more information about our example before we can answer this question. Let's start with this conceptually, then fill in the necessary details in our example. We are going to be using two samples, one to represent each population. Remember that because we're using samples, we can only estimate the values of the population parameters, and so we're going to need to take degrees of freedom into account. Any guesses as to how we'll compute our df? Think about it this way: with one sample we used n - 1 because all of the values in the sample are free to vary but one, because we know the value of the sample mean. Now consider the current situation. We've got two samples. How many values are free to vary? Sample 1: nA - 1. Sample 2: nB - 1. So together there are nA + nB - 2 = df. Okay, so what additional information do we need for our example? We need to know how many individuals we have in our samples. This is a good time to look at some sample data.
men's heights: 67, 73, 74, 70, 70, 75, 73, 68, 69
women's heights: 69, 63, 67, 64, 61, 66, 60, 63, 63
So what is nA? = 9. What is nB? = 9. So the df for our example is: nA + nB - 2 = 9 + 9 - 2 = 16.

Step 4. So what is our critical t? Go to the table and look up the value for a two-tailed test, α = 0.05, df = 16. tcrit = 2.12

Step 5. Now comes what will look like the big difference. We need to compute our observed t statistic. Basically, at the conceptual level, the formula is the same. However, at the practical level, it is a bit more complex because we have two samples, which means that we have two estimates. Let's break this formula into several parts. Conceptually:
tobs = [(MA - MB) - (μA - μB)] / s(MA - MB)
In other words, we're interested in the difference between the two populations, so to compute the t statistic we need to see if the difference between our two samples is different from the difference between the two populations. The numerator is pretty much straightforward:
(MA - MB) = the difference between the two sample means
(μA - μB) = 0: remember, that's the H0 and that's what we're testing
The denominator is where things will look a bit more complex: what is s(MA - MB)? This is an estimate of the error from the two samples. Recall that each sample will have some sampling error associated with it. What we need to do here is pool the error from the two samples. The reason that we want to pool the samples is to make the estimate of the standard error better. Basically, what we're doing is increasing the sample size that our estimate is based on, which will increase the precision of the estimate. Because the samples may be of different sizes (n's), we need to weight each sample's estimate of variability by its degrees of freedom:
pooled variance: sp² = (dfA·sA² + dfB·sB²) / (dfA + dfB)
We can simplify the equation. Recall that s² = SS/df, so by substituting SS for df(s²) we get:
pooled variance: sp² = (SSA + SSB) / (dfA + dfB)
Okay, notice that we're not at s(MA - MB) yet. We still need to compute that. Remember that the formula for the estimated standard error of M is sM = √(s²/n). The formula for s(MA - MB) is similar:
s(MA - MB) = √(sp²/nA + sp²/nB)
So let's fill in the numbers from our example. First we need to go back to the raw numbers and compute the SS's and the sample means. Here are the results of those computations:
MA = 71.0, MB = 64.0, SSA = 64.0, SSB = 66.0, sA = 2.83, sB = 2.87
So, sp² = (64.0 + 66.0) / (8 + 8) = 8.125
So, s(MA - MB) = √(8.125/9 + 8.125/9) = √1.806 = 1.34
Now let's put together the whole t statistic (finishing step 5):
tobs = [(71.0 - 64.0) - 0] / 1.34 = 5.22

Step 6. So now we compare the two t statistics. tobs = 5.22, tcrit = 2.12

Step 7: Our observed (computed) t statistic is greater than the critical t statistic, so we feel confident in rejecting H0. There does seem to be a difference between the heights of men and women.

Computing confidence intervals with two independent samples is very similar to what we've done in past labs, except that we use the estimate of the standard error that is derived from our pooled variance, and both of the sample means are used. So the formula is:
μA - μB = (MA - MB) ± (tcrit)(s(MA - MB))
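Here is a short Python sketch (my addition, not the lab's) that reproduces the pooled-variance computation and the independent-samples t for the height data; scipy.stats.ttest_ind with equal_var=True uses the same pooled formula.

    # Sketch verifying the independent-samples example (not part of the original lab).
    import numpy as np
    from scipy import stats

    men   = np.array([67, 73, 74, 70, 70, 75, 73, 68, 69])
    women = np.array([69, 63, 67, 64, 61, 66, 60, 63, 63])

    ss_a = ((men - men.mean()) ** 2).sum()        # 64.0
    ss_b = ((women - women.mean()) ** 2).sum()    # 66.0
    s2_pooled = (ss_a + ss_b) / (len(men) - 1 + len(women) - 1)        # 8.125
    se_diff = np.sqrt(s2_pooled / len(men) + s2_pooled / len(women))   # about 1.34

    t_obs = (men.mean() - women.mean()) / se_diff     # about 5.21
    t_stat, p_value = stats.ttest_ind(men, women, equal_var=True)      # same t

    # 95% CI for mu_A - mu_B
    t_crit = stats.t.ppf(0.975, df=16)                                 # 2.12
    lower = (men.mean() - women.mean()) - t_crit * se_diff             # about 4.2
    upper = (men.mean() - women.mean()) + t_crit * se_diff             # about 9.8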

What are the assumptions of our independent-measures t test? 1) The observations are independent (both between and within groups). 2) The two populations are normally distributed (also discussed in previous labs). 3) (new) The two populations have equal variances. This is referred to as homogeneity of variance. Recall that in the formula we pool our sample variances. This is an okay thing to do if the variances are about the same. However, it isn't okay if they are very different. In our next lab we'll discuss a test to answer this question.
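The equal-variance check is the Levene's test that SPSS prints with its independent-samples output (see below). As a preview, and only as an illustrative sketch of my own rather than part of the lab, SciPy offers the same kind of test:

    # Sketch: homogeneity-of-variance check (not part of the original lab).
    import numpy as np
    from scipy import stats

    men   = np.array([67, 73, 74, 70, 70, 75, 73, 68, 69])
    women = np.array([69, 63, 67, 64, 61, 66, 60, 63, 63])

    stat, p = stats.levene(men, women)
    # A p-value above the alpha level means there is no evidence that the
    # variances differ, so pooling them for the t-test is reasonable.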

Using SPSS to compute two-independent-samples t-tests

We can use SPSS to compute independent samples t-tests. Note: To do an independent samples t-test you'll need to have two variables (columns) in your data file. One column will contain the data (your dependent measure). The other column will be an independent variable that specifies which group the subject belongs to (e.g., 1 for group 1, 2 for group 2).

Go to the Analyze menu and select the submenu Compare Means. In this submenu you'll see several tests. The one that we're interested in today is independent samples t-test.

After selecting Independent samples t-test, you'll get a window that looks like this. Here you should select the variables that you are testing. Your test variable is your dependent variable. Your group variable is the independent variable that assigns each subject to a group.

Before you can do the analysis, you must define the groups. Click the Define Groups button and then enter the values that you used to define the groups (e.g., 1 for group 1 and 2 for group 2).

Here is what the output will look like. Notice that the output also includes Levene's test for the assumption of homogeneity of variance discussed above; we'll have more to say about it in the next lab.

Notice that the output includes the usual information: the sample mean, the sample standard deviation, the standard error, the tobs (in the t column), the degrees of freedom, the mean difference (sample mean 1 - sample mean 2), and a p-value (sig.). Notice that SPSS doesn't tell you to reject or fail to reject H0, nor does it give you the tcrit. To make your decision about H0 you must compare the p-value with your α-level. If the p-value is equal to or smaller than your α-level, then you should reject H0; otherwise you should fail to reject H0. You may also notice that there are two rows of numbers in the t-test output. One row "assumes equal variance"; the other doesn't. This is related to the assumption of homogeneity of variance discussed above. We'll discuss this in more detail in the next lab. Notice that in addition to the hypothesis test, the output also provides a confidence interval. A 95% CI is the default. To pick other levels of confidence you need to click the Options button in the independent samples t-test window. This will open the dialog box below.

If you have any questions, please feel free to contact me at [email protected].
